shadow variables - pg15 edition

Started by Justin Pryzby over 3 years ago · 56 messages
#1 Justin Pryzby
pryzby@telsasoft.com
17 attachment(s)

There's been no progress on this in the past discussions.

/messages/by-id/877k1psmpf.fsf@mailbox.samurai.com
/messages/by-id/CAApHDvpqBR7u9yzW4yggjG=QfN=FZsc8Wo2ckokpQtif-+iQ2A@mail.gmail.com
/messages/by-id/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com

But an unfortunate consequence of not fixing the historic issues is that it
precludes the possibility that anyone could be expected to notice if they
introduce more instances of the same problem (as in the first half of these
patches). Then the hole which has already been dug becomes deeper, further
increasing the burden of fixing the historic issues before being able to use
-Wshadow.

The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local

I can't see that any of these are bugs, but it seems like a good goal to move
towards allowing use of the -Wshadow* options to help avoid future errors, as
well as cleanliness and readability (rather than allowing it to get harder to
use -Wshadow).
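
For anyone unfamiliar with the option, here's a contrived example (not taken
from any of the patches) of the bug class that -Wshadow=compatible-local is
meant to catch: an inner declaration of a compatible type silently hides the
outer variable, so updates land on a copy that's thrown away.

```c
#include <assert.h>

/* Hypothetical example: the inner "total" shadows the outer one of the
 * same type, so gcc -Wshadow=compatible-local warns here.  The loop
 * updates the inner copy, which is discarded each iteration. */
static int
buggy_sum(const int *vals, int n)
{
	int			total = 0;

	for (int i = 0; i < n; i++)
	{
		int			total = vals[i];	/* warning: shadows a previous local */

		total += 1;				/* modifies the inner variable only */
	}

	return total;				/* still 0: the outer total was never touched */
}
```

Build with gcc -Wshadow=compatible-local to see the warning; plain -Wshadow
would additionally flag shadowed globals and parameters.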

--
Justin

Attachments:

0001-avoid-shadow-vars-pg_dump.c-i_oid.patch (text/x-diff; charset=us-ascii)
From 0b05b375a87d89f5d88e87d11956cf2ac15ea00f Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:38:57 -0500
Subject: [PATCH 01/17] avoid shadow vars: pg_dump.c: i_oid

backpatch to v15

commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d
Author: Robert Haas <rhaas@postgresql.org>
Date:   Fri Jul 8 10:15:19 2022 -0400

    Preserve relfilenode of pg_largeobject and its index across pg_upgrade.
---
 src/bin/pg_dump/pg_dump.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a0..322947c5609 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
 		PQExpBuffer loHorizonQry = createPQExpBuffer();
 		int			i_relfrozenxid,
 					i_relfilenode,
-					i_oid,
 					i_relminmxid;
 
 		/*
-- 
2.17.1

0002-avoid-shadow-vars-pg_dump.c-tbinfo.patch (text/x-diff; charset=us-ascii)
From a76bac21fe428cdd6241bff6827e08d9d71e1bdf Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:55:13 -0500
Subject: [PATCH 02/17] avoid shadow vars: pg_dump.c: tbinfo

backpatch to v15

commit 9895961529ef8ff3fc12b39229f9a93e08bca7b7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Mon Dec 6 13:07:31 2021 -0500

    Avoid per-object queries in performance-critical paths in pg_dump.
---
 src/bin/pg_dump/pg_dump.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 322947c5609..5c196d66985 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
 	appendPQExpBufferChar(tbloids, '{');
 	for (int i = 0; i < numTables; i++)
 	{
-		TableInfo  *tbinfo = &tblinfo[i];
+		TableInfo  *mytbinfo = &tblinfo[i];
 
 		/*
 		 * For partitioned tables, foreign keys have no triggers so they must
 		 * be included anyway in case some foreign keys are defined.
 		 */
-		if ((!tbinfo->hastriggers &&
-			 tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
-			!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+		if ((!mytbinfo->hastriggers &&
+			 mytbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+			!(mytbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
 			continue;
 
 		/* OK, we need info for this table */
 		if (tbloids->len > 1)	/* do we have more than the '{'? */
 			appendPQExpBufferChar(tbloids, ',');
-		appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+		appendPQExpBuffer(tbloids, "%u", mytbinfo->dobj.catId.oid);
 	}
 	appendPQExpBufferChar(tbloids, '}');
 
-- 
2.17.1

0003-avoid-shadow-vars-pg_dump.c-owning_tab.patch (text/x-diff; charset=us-ascii)
From f6a814fd50800942081250b05f8e6d143b8d8266 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:22:52 -0500
Subject: [PATCH 03/17] avoid shadow vars: pg_dump.c: owning_tab

backpatch to v15

commit 344d62fb9a978a72cf8347f0369b9ee643fd0b31
Author: Peter Eisentraut <peter@eisentraut.org>
Date:   Thu Apr 7 16:13:23 2022 +0200

    Unlogged sequences
---
 src/bin/pg_dump/pg_dump.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5c196d66985..ecf29f3c52a 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -16799,7 +16799,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
 	 */
 	if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
 	{
-		TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+		owning_tab = findTableByOid(tbinfo->owning_tab);
 
 		if (owning_tab == NULL)
 			pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
-- 
2.17.1

0004-avoid-shadow-vars-tablesync.c-first.patch (text/x-diff; charset=us-ascii)
From 1a979be65baab871754f86669c5f0327fad6cab5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:52:03 -0500
Subject: [PATCH 04/17] avoid shadow vars: tablesync.c: first

backpatch to v15

commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Sat Mar 26 00:45:21 2022 +0100

    Allow specifying column lists for logical replication
---
 src/backend/replication/logical/tablesync.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6a01ffd273f..95d1081f4ec 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
 		TupleTableSlot *slot;
 		Oid			attrsRow[] = {INT2VECTOROID};
 		StringInfoData pub_names;
-		bool		first = true;
 
+		first = true;
 		initStringInfo(&pub_names);
 		foreach(lc, MySubscription->publications)
 		{
-- 
2.17.1

0005-avoid-shadow-vars-tablesync.c-slot.patch (text/x-diff; charset=us-ascii)
From 555a4545460f3086fd69ca95ac41f18c6ceaab80 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:01:16 -0500
Subject: [PATCH 05/17] avoid shadow vars: tablesync.c: slot

backpatch to v15

commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Sat Mar 26 00:45:21 2022 +0100

    Allow specifying column lists for logical replication
---
 src/backend/replication/logical/tablesync.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 95d1081f4ec..5bb9b545e9a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,6 @@ fetch_remote_table_info(char *nspname, char *relname,
 	if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
 	{
 		WalRcvExecResult *pubres;
-		TupleTableSlot *slot;
 		Oid			attrsRow[] = {INT2VECTOROID};
 		StringInfoData pub_names;
 
-- 
2.17.1

0006-avoid-shadow-vars-basebackup_target.c-ttype.patch (text/x-diff; charset=us-ascii)
From 5ac6d302f769db6f4625be0cf6a5bae4aa60de40 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:51:10 -0500
Subject: [PATCH 06/17] avoid shadow vars: basebackup_target.c: ttype

backpatch to v15

commit e4ba69f3f4a1b997aa493cc02e563a91c0f35b87
Author: Robert Haas <rhaas@postgresql.org>
Date:   Tue Mar 15 13:22:04 2022 -0400

    Allow extensions to add new backup targets.
---
 src/backend/backup/basebackup_target.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e32055..8d10fe15530 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -73,7 +73,7 @@ BaseBackupAddTarget(char *name,
 	/* Search the target type list for an existing entry with this name. */
 	foreach(lc, BaseBackupTargetTypeList)
 	{
-		BaseBackupTargetType *ttype = lfirst(lc);
+		ttype = lfirst(lc);
 
 		if (strcmp(ttype->name, name) == 0)
 		{
-- 
2.17.1

0007-avoid-shadow-vars-parse_jsontable.c-jtc.patch (text/x-diff; charset=us-ascii)
From 744cb8dd010d61bef46f9623511a253429bb46cb Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:45:28 -0500
Subject: [PATCH 07/17] avoid shadow vars: parse_jsontable.c: jtc

backpatch to v15

commit fadb48b00e02ccfd152baa80942de30205ab3c4f
Author: Andrew Dunstan <andrew@dunslane.net>
Date:   Tue Apr 5 14:09:04 2022 -0400

    PLAN clauses for JSON_TABLE
---
 src/backend/parser/parse_jsontable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017ef..c2318b126f2 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,9 +341,9 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
 		/* transform all nested columns into cross/union join */
 		foreach(lc, columns)
 		{
-			JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
 			Node	   *node;
 
+			jtc = castNode(JsonTableColumn, lfirst(lc));
 			if (jtc->coltype != JTC_NESTED)
 				continue;
 
-- 
2.17.1

0008-avoid-shadow-vars-res.patch (text/x-diff; charset=us-ascii)
From 660a31762a9122c240227f1f542ac4e284b5e4c5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:22:45 -0500
Subject: [PATCH 08/17] avoid shadow vars: res

backpatch to v15

commit 1a36bc9dba8eae90963a586d37b6457b32b2fed4
Author: Andrew Dunstan <andrew@dunslane.net>
Date:   Thu Mar 3 13:11:14 2022 -0500

    SQL/JSON query functions
---
 src/backend/utils/adt/jsonpath_exec.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 10c7e64aab3..d1e3385975a 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
 
 				if (JsonContainerIsScalar(&jb->root))
 				{
-					bool		res PG_USED_FOR_ASSERTS_ONLY;
+					bool		tmp PG_USED_FOR_ASSERTS_ONLY;
 
-					res = JsonbExtractScalar(&jb->root, jbv);
-					Assert(res);
+					tmp = JsonbExtractScalar(&jb->root, jbv);
+					Assert(tmp);
 				}
 				else
 					JsonbInitBinary(jbv, jb);
-- 
2.17.1

0009-avoid-shadow-vars-clauses.c-querytree_list.patch (text/x-diff; charset=us-ascii)
From ba98717eba1ffa94dd2dd23a0dd29f30b035f56b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:43:15 -0500
Subject: [PATCH 09/17] avoid shadow vars: clauses.c: querytree_list

commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date:   Wed Apr 7 21:30:08 2021 +0200

    SQL-standard function body
---
 src/backend/optimizer/util/clauses.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 533df86ff77..e846d414f00 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4540,7 +4540,6 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 	if (!isNull)
 	{
 		Node	   *n;
-		List	   *querytree_list;
 
 		n = stringToNode(TextDatumGetCString(tmp));
 		if (IsA(n, List))
-- 
2.17.1

0010-avoid-shadow-vars-tablecmds.c-constraintOid.patch (text/x-diff; charset=us-ascii)
From a20657e50017676f04d11a24cbd046ce768af248 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:28:02 -0500
Subject: [PATCH 10/17] avoid shadow vars: tablecmds.c: constraintOid

commit eb7ed3f3063401496e4aa4bd68fa33f0be31a72f
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Feb 19 16:59:37 2018 -0300

    Allow UNIQUE indexes on partitioned tables
---
 src/backend/commands/tablecmds.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 70b94bbb397..1c0cf7c1a06 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -18098,7 +18098,6 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
 		if (!found)
 		{
 			IndexStmt  *stmt;
-			Oid			constraintOid;
 
 			stmt = generateClonedIndexStmt(NULL,
 										   idxRel, attmap,
-- 
2.17.1

0011-avoid-shadow-vars-tablecmds.c-copyTuple.patch (text/x-diff; charset=us-ascii)
From 6866edb3c2a738cf14abf3df79db4fc4ce8ec1e4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:17:46 -0500
Subject: [PATCH 11/17] avoid shadow vars: tablecmds.c: copyTuple

commit 6f70d7ca1d1937a9f7b79eff6fb18ed1bb2a4c47
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Wed May 5 12:14:21 2021 -0400

    Have ALTER CONSTRAINT recurse on partitioned tables
---
 src/backend/commands/tablecmds.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 1c0cf7c1a06..d6483cf1f9a 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10865,7 +10865,6 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
 		{
 			Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
 			Form_pg_trigger copy_tg;
-			HeapTuple	copyTuple;
 
 			/*
 			 * Remember OIDs of other relation(s) involved in FK constraint.
-- 
2.17.1

0012-avoid-shadow-vars-copyfrom.c-attnum.patch (text/x-diff; charset=us-ascii)
From 639a1b6bc67c52e242f5cbe4f14070fdce1d5497 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:55:10 -0500
Subject: [PATCH 12/17] avoid shadow vars: copyfrom.c: attnum

commit 3a1433674696fbb968bc2120ebd36d9766f49af5
Author: Bruce Momjian <bruce@momjian.us>
Date:   Thu Apr 15 22:36:03 2004 +0000

    Modify COPY for() loop to use attnum as a variable name, not 'i'.
---
 src/backend/commands/copyfrom.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
 				num_defaults;
 	FmgrInfo   *in_functions;
 	Oid		   *typioparams;
-	int			attnum;
 	Oid			in_func_oid;
 	int		   *defmap;
 	ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
 	defmap = (int *) palloc(num_phys_attrs * sizeof(int));
 	defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
 
-	for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+	for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
 	{
 		Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
 
-- 
2.17.1

0013-avoid-shadow-vars-nodeAgg-transno.patch (text/x-diff; charset=us-ascii)
From f4eb4dab974b60e62f2444cc19bb50eeb1933018 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:36:13 -0500
Subject: [PATCH 13/17] avoid shadow vars: nodeAgg: transno

commit db80acfc9d50ac56811d22802ab3d822ab313055
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date:   Tue Dec 20 09:20:17 2016 +0200

    Fix sharing Agg transition state of DISTINCT or ordered aggs.
---
 src/backend/executor/nodeAgg.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
 	Datum	   *aggvalues = econtext->ecxt_aggvalues;
 	bool	   *aggnulls = econtext->ecxt_aggnulls;
 	int			aggno;
-	int			transno;
 
 	/*
 	 * If there were any DISTINCT and/or ORDER BY aggregates, sort their
 	 * inputs and run the transition functions.
 	 */
-	for (transno = 0; transno < aggstate->numtrans; transno++)
+	for (int transno = 0; transno < aggstate->numtrans; transno++)
 	{
 		AggStatePerTrans pertrans = &aggstate->pertrans[transno];
 		AggStatePerGroup pergroupstate;
-- 
2.17.1

0014-avoid-shadow-vars-trigger.c-partitionId.patch (text/x-diff; charset=us-ascii)
From d8467edb33575a56ae522e97ecfb329a27b1f462 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:36:12 -0500
Subject: [PATCH 14/17] avoid shadow vars: trigger.c: partitionId

commit 80ba4bb383538a2ee846fece6a7b8da9518b6866
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Thu Jul 22 18:33:47 2021 -0400

    Make ALTER TRIGGER RENAME consistent for partitioned tables
---
 src/backend/commands/trigger.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..bb4385c6ea9 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
-- 
2.17.1

0015-avoid-shadow-vars-execPartition.c-found_whole_row.patch (text/x-diff; charset=us-ascii)
From f24e62293892170cc500907b15e70d75b2503ae1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:20:38 -0500
Subject: [PATCH 15/17] avoid shadow vars: execPartition.c: found_whole_row

commit 158b7bc6d77948d2f474dc9f2777c87f81d1365a
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Apr 16 15:50:57 2018 -0300

    Ignore whole-rows in INSERT/CONFLICT with partitioned tables

See also:

commit 555ee77a9668e3f1b03307055b5027e13bf1a715
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Mar 26 10:43:54 2018 -0300

    Handle INSERT .. ON CONFLICT with partitioned tables
---
 src/backend/executor/execPartition.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index eb491061024..6998ba8ae23 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 			{
 				List	   *onconflset;
 				List	   *onconflcols;
-				bool		found_whole_row;
 
 				/*
 				 * Translate expressions in onConflictSet to account for
-- 
2.17.1

0016-avoid-shadow-vars-brin-keyno.patch (text/x-diff; charset=us-ascii)
From 79fe22270a9ab91c7a561c2bff2a64b20c1797e7 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 17:10:55 -0500
Subject: [PATCH 16/17] avoid shadow vars: brin keyno

commit a681e3c107aa97eb554f118935c4d2278892c3dd
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Fri Mar 26 13:17:56 2021 +0100

    Support the old signature of BRIN consistent function
---
 src/backend/access/brin/brin.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 			  **nullkeys;
 	int		   *nkeys,
 			   *nnullkeys;
-	int			keyno;
 	char	   *ptr;
 	Size		len;
 	char	   *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	/* Preprocess the scan keys - split them into per-attribute arrays. */
-	for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+	for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
 	{
 		ScanKey		key = &scan->keyData[keyno];
 		AttrNumber	keyattno = key->sk_attno;
-- 
2.17.1

0017-avoid-shadow-vars-bufmgr.c-j.patch (text/x-diff; charset=us-ascii)
From 50ded6f49a3f2e7ce4b221201ea6d38a5bda83c5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 23:52:21 -0500
Subject: [PATCH 17/17] avoid shadow vars: bufmgr.c: j

commit bea449c635c0e68e21610593594c1e5d52842cdd
Author: Amit Kapila <akapila@postgresql.org>
Date:   Wed Jan 13 07:46:11 2021 +0530

    Optimize DropRelFileNodesAllBuffers() for recovery.
---
 src/backend/storage/buffer/bufmgr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 9c1bd508d36..a748efdb942 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
 DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 {
 	int			i;
-	int			j;
 	int			n = 0;
 	SMgrRelation *rels;
 	BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	 */
 	for (i = 0; i < n && cached; i++)
 	{
-		for (j = 0; j <= MAX_FORKNUM; j++)
+		for (int j = 0; j <= MAX_FORKNUM; j++)
 		{
 			/* Get the number of blocks for a relation's fork. */
 			block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	{
 		for (i = 0; i < n; i++)
 		{
-			for (j = 0; j <= MAX_FORKNUM; j++)
+			for (int j = 0; j <= MAX_FORKNUM; j++)
 			{
 				/* ignore relation forks that doesn't exist */
 				if (!BlockNumberIsValid(block[i][j]))
-- 
2.17.1

#2 Peter Smith
smithpb2250@gmail.com
In reply to: Justin Pryzby (#1)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 12:54 AM Justin Pryzby <pryzby@telsasoft.com> wrote:

There's been no progress on this in the past discussions.

/messages/by-id/877k1psmpf.fsf@mailbox.samurai.com
/messages/by-id/CAApHDvpqBR7u9yzW4yggjG=QfN=FZsc8Wo2ckokpQtif-+iQ2A@mail.gmail.com
/messages/by-id/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com

But an unfortunate consequence of not fixing the historic issues is that it
precludes the possibility that anyone could be expected to notice if they
introduce more instances of the same problem (as in the first half of these
patches). Then the hole which has already been dug becomes deeper, further
increasing the burden of fixing the historic issues before being able to use
-Wshadow.

The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local

I can't see that any of these are bugs, but it seems like a good goal to move
towards allowing use of the -Wshadow* options to help avoid future errors, as
well as cleanliness and readability (rather than allowing it to get harder to
use -Wshadow).

Hey, thanks for picking this up!

I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.

+1 from me.

------
[1]: /messages/by-id/CAHut+Puv4LaQKVQSErtV_=3MezUdpipVOMt7tJ3fXHxt_YK-Zw@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#3 Michael Paquier
michael@paquier.xyz
In reply to: Peter Smith (#2)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:

I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.

+1 from me.

A lot of the changes proposed here update the code so that the same
variable gets used across more code paths by removing declarations,
but in some places two variables are defined because each is meant to be
used in a different context (see AttachPartitionEnsureIndexes() in
tablecmds.c for example).

Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?
--
Michael

#4 Justin Pryzby
pryzby@telsasoft.com
In reply to: Michael Paquier (#3)
26 attachment(s)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 09:39:02AM +0900, Michael Paquier wrote:

On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:

I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.

+1 from me.

A lot of the changes proposed here update the code so as the same
variable gets used across more code paths by removing declarations,
but we have two variables defined because both are aimed to be used in
a different context (see AttachPartitionEnsureIndexes() in tablecmds.c
for example).

Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?

The cases where I removed a declaration are ones where the variable either
hasn't yet been assigned in the outer scope (so it's safe to use first in the
inner scope, since its value is later overwriten in the outer scope). Or it's
no longer used in the outer scope, so it's safe to re-use it in the inner scope
(as in AttachPartitionEnsureIndexes). Since you think it's saner, I changed to
rename them.

In the case of "first", the var is used in two independent loops, the same way,
and re-initialized. In the case of found_whole_row, the var is ignored, as the
comments say, so it would be silly to declare more vars to be additionally
ignored.
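
To illustrate the first case (a contrived sketch, not code from the patches):
the outer variable has not been assigned before the inner block, and the outer
scope assigns it unconditionally afterwards, so dropping the inner declaration
and sharing one variable cannot change behavior.

```c
#include <assert.h>

/* Hypothetical sketch of the "safe to use first in the inner scope" case:
 * the outer "n" is unassigned before the if-block, and the outer scope
 * overwrites it afterwards, so removing the inner declaration (which
 * previously shadowed "n") preserves behavior. */
static int
reuse_outer_var(int flag)
{
	int			n;				/* single declaration, outer scope */

	if (flag)
	{
		/* was: "int n = 7;" shadowing the outer n */
		n = 7;
		assert(n == 7);			/* inner value used only within this block */
	}

	n = 42;						/* outer scope overwrites n regardless */
	return n;
}
```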

--
Justin

PS. I hadn't sent the other patches which rename the variables, having assumed
that the discussion would be bikeshedded to death and derail without having
fixed the lowest-hanging fruit. I'm attaching those now to see what
happens.

Attachments:

0001-avoid-shadow-vars-pg_dump.c-i_oid.patch (text/x-diff; charset=us-ascii)
From 97768e5a439bef016e6ebd5221ed148f076c6e3f Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:38:57 -0500
Subject: [PATCH 01/26] avoid shadow vars: pg_dump.c: i_oid

backpatch to v15

commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d
Author: Robert Haas <rhaas@postgresql.org>
Date:   Fri Jul 8 10:15:19 2022 -0400

    Preserve relfilenode of pg_largeobject and its index across pg_upgrade.
---
 src/bin/pg_dump/pg_dump.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a0..322947c5609 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
 		PQExpBuffer loHorizonQry = createPQExpBuffer();
 		int			i_relfrozenxid,
 					i_relfilenode,
-					i_oid,
 					i_relminmxid;
 
 		/*
-- 
2.17.1

0002-avoid-shadow-vars-pg_dump.c-tbinfo.patch (text/x-diff; charset=us-ascii)
From ce729535c47d72db775ebcf1f185799c78615148 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:55:13 -0500
Subject: [PATCH 02/26] avoid shadow vars: pg_dump.c: tbinfo

backpatch to v15

commit 9895961529ef8ff3fc12b39229f9a93e08bca7b7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Mon Dec 6 13:07:31 2021 -0500

    Avoid per-object queries in performance-critical paths in pg_dump.
---
 src/bin/pg_dump/pg_dump.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 322947c5609..5c196d66985 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
 	appendPQExpBufferChar(tbloids, '{');
 	for (int i = 0; i < numTables; i++)
 	{
-		TableInfo  *tbinfo = &tblinfo[i];
+		TableInfo  *mytbinfo = &tblinfo[i];
 
 		/*
 		 * For partitioned tables, foreign keys have no triggers so they must
 		 * be included anyway in case some foreign keys are defined.
 		 */
-		if ((!tbinfo->hastriggers &&
-			 tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
-			!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+		if ((!mytbinfo->hastriggers &&
+			 mytbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+			!(mytbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
 			continue;
 
 		/* OK, we need info for this table */
 		if (tbloids->len > 1)	/* do we have more than the '{'? */
 			appendPQExpBufferChar(tbloids, ',');
-		appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+		appendPQExpBuffer(tbloids, "%u", mytbinfo->dobj.catId.oid);
 	}
 	appendPQExpBufferChar(tbloids, '}');
 
-- 
2.17.1

0003-avoid-shadow-vars-pg_dump.c-owning_tab.patch (text/x-diff; charset=us-ascii)
From 478fa745d4ddc38fe15f54d7d396ebf7a106772b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:22:52 -0500
Subject: [PATCH 03/26] avoid shadow vars: pg_dump.c: owning_tab

backpatch to v15

commit 344d62fb9a978a72cf8347f0369b9ee643fd0b31
Author: Peter Eisentraut <peter@eisentraut.org>
Date:   Thu Apr 7 16:13:23 2022 +0200

    Unlogged sequences
---
 src/bin/pg_dump/pg_dump.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5c196d66985..4b5d8df1e4e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
 	 */
 	if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
 	{
-		TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+		TableInfo  *this_owning_tab = findTableByOid(tbinfo->owning_tab);
 
-		if (owning_tab == NULL)
+		if (this_owning_tab == NULL)
 			pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
 					 tbinfo->owning_tab, tbinfo->dobj.catId.oid);
 
-		if (owning_tab->dobj.dump & DUMP_COMPONENT_DEFINITION)
+		if (this_owning_tab->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		{
 			resetPQExpBuffer(query);
 			appendPQExpBuffer(query, "ALTER SEQUENCE %s",
 							  fmtQualifiedDumpable(tbinfo));
 			appendPQExpBuffer(query, " OWNED BY %s",
-							  fmtQualifiedDumpable(owning_tab));
+							  fmtQualifiedDumpable(this_owning_tab));
 			appendPQExpBuffer(query, ".%s;\n",
-							  fmtId(owning_tab->attnames[tbinfo->owning_col - 1]));
+							  fmtId(this_owning_tab->attnames[tbinfo->owning_col - 1]));
 
 			if (tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 				ArchiveEntry(fout, nilCatalogId, createDumpId(),
-- 
2.17.1

0004-avoid-shadow-vars-tablesync.c-first.patch (text/x-diff; charset=us-ascii)
From f67d6fe9b9bca6334f596478fb0317025ae51226 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:52:03 -0500
Subject: [PATCH 04/26] avoid shadow vars: tablesync.c: first

backpatch to v15

commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Sat Mar 26 00:45:21 2022 +0100

    Allow specifying column lists for logical replication
---
 src/backend/replication/logical/tablesync.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index bfcb80b4955..71b503f4217 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
 		TupleTableSlot *slot;
 		Oid			attrsRow[] = {INT2VECTOROID};
 		StringInfoData pub_names;
-		bool		first = true;
 
+		first = true;
 		initStringInfo(&pub_names);
 		foreach(lc, MySubscription->publications)
 		{
-- 
2.17.1

0005-avoid-shadow-vars-tablesync.c-slot.patchtext/x-diff; charset=us-asciiDownload
From 51c4c49e81c802d74e348222df66fcff3b841814 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:01:16 -0500
Subject: [PATCH 05/26] avoid shadow vars: tablesync.c: slot

backpatch to v15

commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Sat Mar 26 00:45:21 2022 +0100

    Allow specifying column lists for logical replication
---
 src/backend/replication/logical/tablesync.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 71b503f4217..cfc47dc8df0 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,
 	if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
 	{
 		WalRcvExecResult *pubres;
-		TupleTableSlot *slot;
+		TupleTableSlot *thisslot;
 		Oid			attrsRow[] = {INT2VECTOROID};
 		StringInfoData pub_names;
 
@@ -819,10 +819,10 @@ fetch_remote_table_info(char *nspname, char *relname,
 		 * If we find a NULL value, it means all the columns should be
 		 * replicated.
 		 */
-		slot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
-		if (tuplestore_gettupleslot(pubres->tuplestore, true, false, slot))
+		thisslot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
+		if (tuplestore_gettupleslot(pubres->tuplestore, true, false, thisslot))
 		{
-			Datum		cfval = slot_getattr(slot, 1, &isnull);
+			Datum		cfval = slot_getattr(thisslot, 1, &isnull);
 
 			if (!isnull)
 			{
@@ -838,9 +838,9 @@ fetch_remote_table_info(char *nspname, char *relname,
 					included_cols = bms_add_member(included_cols, elems[natt]);
 			}
 
-			ExecClearTuple(slot);
+			ExecClearTuple(thisslot);
 		}
-		ExecDropSingleTupleTableSlot(slot);
+		ExecDropSingleTupleTableSlot(thisslot);
 
 		walrcv_clear_result(pubres);
 
-- 
2.17.1

0006-avoid-shadow-vars-basebackup_target.c-ttype.patch (text/x-diff; charset=us-ascii)
From ac32d509971319d8804b184d014730384a7c93ae Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:51:10 -0500
Subject: [PATCH 06/26] avoid shadow vars: basebackup_target.c: ttype

backpatch to v15

commit e4ba69f3f4a1b997aa493cc02e563a91c0f35b87
Author: Robert Haas <rhaas@postgresql.org>
Date:   Tue Mar 15 13:22:04 2022 -0400

    Allow extensions to add new backup targets.
---
 src/backend/backup/basebackup_target.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e32055..1553568a37d 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,
 	/* Search the target type list for an existing entry with this name. */
 	foreach(lc, BaseBackupTargetTypeList)
 	{
-		BaseBackupTargetType *ttype = lfirst(lc);
+		BaseBackupTargetType *this_ttype = lfirst(lc);
 
-		if (strcmp(ttype->name, name) == 0)
+		if (strcmp(this_ttype->name, name) == 0)
 		{
 			/*
 			 * We found one, so update it.
@@ -84,8 +84,8 @@ BaseBackupAddTarget(char *name,
 			 * the same name multiple times, but if it happens, this seems
 			 * like the sanest behavior.
 			 */
-			ttype->check_detail = check_detail;
-			ttype->get_sink = get_sink;
+			this_ttype->check_detail = check_detail;
+			this_ttype->get_sink = get_sink;
 			return;
 		}
 	}
-- 
2.17.1

0007-avoid-shadow-vars-parse_jsontable.c-jtc.patch (text/x-diff; charset=us-ascii)
From 180081aac947f65bf87c22a9da68b9383e521cd4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:45:28 -0500
Subject: [PATCH 07/26] avoid shadow vars: parse_jsontable.c: jtc

backpatch to v15

commit fadb48b00e02ccfd152baa80942de30205ab3c4f
Author: Andrew Dunstan <andrew@dunslane.net>
Date:   Tue Apr 5 14:09:04 2022 -0400

    PLAN clauses for JSON_TABLE
---
 src/backend/parser/parse_jsontable.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017ef..84ff3fac140 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
 		/* transform all nested columns into cross/union join */
 		foreach(lc, columns)
 		{
-			JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+			JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));
 			Node	   *node;
 
-			if (jtc->coltype != JTC_NESTED)
+			if (thisjtc->coltype != JTC_NESTED)
 				continue;
 
-			node = transformNestedJsonTableColumn(cxt, jtc, plan);
+			node = transformNestedJsonTableColumn(cxt, thisjtc, plan);
 
 			/* join transformed node with previous sibling nodes */
 			res = res ? makeJsonTableSiblingJoin(cross, res, node) : node;
-- 
2.17.1

0008-avoid-shadow-vars-res.patch (text/x-diff; charset=us-ascii)
From 2fc3e45ae6c9d40de537577c75a32ec9e766ac76 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:22:45 -0500
Subject: [PATCH 08/26] avoid shadow vars: res

backpatch to v15

commit 1a36bc9dba8eae90963a586d37b6457b32b2fed4
Author: Andrew Dunstan <andrew@dunslane.net>
Date:   Thu Mar 3 13:11:14 2022 -0500

    SQL/JSON query functions
---
 src/backend/utils/adt/jsonpath_exec.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 5b6a4805721..ff1ec607eb1 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
 
 				if (JsonContainerIsScalar(&jb->root))
 				{
-					bool		res PG_USED_FOR_ASSERTS_ONLY;
+					bool		tmp PG_USED_FOR_ASSERTS_ONLY;
 
-					res = JsonbExtractScalar(&jb->root, jbv);
-					Assert(res);
+					tmp = JsonbExtractScalar(&jb->root, jbv);
+					Assert(tmp);
 				}
 				else
 					JsonbInitBinary(jbv, jb);
-- 
2.17.1

0009-avoid-shadow-vars-clauses.c-querytree_list.patch (text/x-diff; charset=us-ascii)
From 058d0ccbb7553def2ee3cfdc36b18ac49bfe81f3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:43:15 -0500
Subject: [PATCH 09/26] avoid shadow vars: clauses.c: querytree_list

commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date:   Wed Apr 7 21:30:08 2021 +0200

    SQL-standard function body
---
 src/backend/optimizer/util/clauses.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 533df86ff77..cfccdd08b5a 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4540,16 +4540,16 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 	if (!isNull)
 	{
 		Node	   *n;
-		List	   *querytree_list;
+		List	   *this_querytree_list;
 
 		n = stringToNode(TextDatumGetCString(tmp));
 		if (IsA(n, List))
-			querytree_list = linitial_node(List, castNode(List, n));
+			this_querytree_list = linitial_node(List, castNode(List, n));
 		else
-			querytree_list = list_make1(n);
-		if (list_length(querytree_list) != 1)
+			this_querytree_list = list_make1(n);
+		if (list_length(this_querytree_list) != 1)
 			goto fail;
-		querytree = linitial(querytree_list);
+		querytree = linitial(this_querytree_list);
 
 		/*
 		 * Because we'll insist below that the querytree have an empty rtable
-- 
2.17.1

0010-avoid-shadow-vars-tablecmds.c-constraintOid.patch (text/x-diff; charset=us-ascii)
From dc97085ee1fe2a4a8146ceb99a2829667a55b6d8 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:28:02 -0500
Subject: [PATCH 10/26] avoid shadow vars: tablecmds.c: constraintOid

commit eb7ed3f3063401496e4aa4bd68fa33f0be31a72f
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Feb 19 16:59:37 2018 -0300

    Allow UNIQUE indexes on partitioned tables
---
 src/backend/commands/tablecmds.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 8d7c68b8b3c..93b217ee14d 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -18098,14 +18098,14 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
 		if (!found)
 		{
 			IndexStmt  *stmt;
-			Oid			constraintOid;
+			Oid			this_conid;
 
 			stmt = generateClonedIndexStmt(NULL,
 										   idxRel, attmap,
-										   &constraintOid);
+										   &this_conid);
 			DefineIndex(RelationGetRelid(attachrel), stmt, InvalidOid,
 						RelationGetRelid(idxRel),
-						constraintOid,
+						this_conid,
 						true, false, false, false, false);
 		}
 
-- 
2.17.1

0011-avoid-shadow-vars-tablecmds.c-copyTuple.patch (text/x-diff; charset=us-ascii)
From 0ff0dad2688b130b91cf0d532e042338544266d6 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:17:46 -0500
Subject: [PATCH 11/26] avoid shadow vars: tablecmds.c: copyTuple

commit 6f70d7ca1d1937a9f7b79eff6fb18ed1bb2a4c47
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Wed May 5 12:14:21 2021 -0400

    Have ALTER CONSTRAINT recurse on partitioned tables
---
 src/backend/commands/tablecmds.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 93b217ee14d..3403307c893 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10865,7 +10865,7 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
 		{
 			Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
 			Form_pg_trigger copy_tg;
-			HeapTuple	copyTuple;
+			HeapTuple	this_copyTuple;
 
 			/*
 			 * Remember OIDs of other relation(s) involved in FK constraint.
@@ -10889,16 +10889,16 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
 				tgform->tgfoid != F_RI_FKEY_CHECK_UPD)
 				continue;
 
-			copyTuple = heap_copytuple(tgtuple);
-			copy_tg = (Form_pg_trigger) GETSTRUCT(copyTuple);
+			this_copyTuple = heap_copytuple(tgtuple);
+			copy_tg = (Form_pg_trigger) GETSTRUCT(this_copyTuple);
 
 			copy_tg->tgdeferrable = cmdcon->deferrable;
 			copy_tg->tginitdeferred = cmdcon->initdeferred;
-			CatalogTupleUpdate(tgrel, &copyTuple->t_self, copyTuple);
+			CatalogTupleUpdate(tgrel, &this_copyTuple->t_self, this_copyTuple);
 
 			InvokeObjectPostAlterHook(TriggerRelationId, tgform->oid, 0);
 
-			heap_freetuple(copyTuple);
+			heap_freetuple(this_copyTuple);
 		}
 
 		systable_endscan(tgscan);
-- 
2.17.1

0012-avoid-shadow-vars-heap.c-rel-tuple.patch (text/x-diff; charset=us-ascii)
From 8de2ba6b4b419280ccc6a6b4ea17e34d09458fbc Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 21:02:16 -0500
Subject: [PATCH 12/26] avoid shadow vars: heap.c: rel, tuple

commit 0d692a0dc9f0e532c67c577187fe5d7d323cb95b
Author: Robert Haas <rhaas@postgresql.org>
Date:   Sat Jan 1 23:48:11 2011 -0500

    Basic foreign table support.

commit 258cef12540fa1cb244881a0f019cefd698c809e
Author: Robert Haas <rhaas@postgresql.org>
Date:   Tue Apr 11 09:08:36 2017 -0400

    Fix possibile deadlock when dropping partitions.
---
 src/backend/catalog/heap.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
 	 */
 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
 	{
-		Relation	rel;
-		HeapTuple	tuple;
+		Relation	pg_foreign_table;
+		HeapTuple	foreigntuple;
 
-		rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+		pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-		tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-		if (!HeapTupleIsValid(tuple))
+		foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(foreigntuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-		CatalogTupleDelete(rel, &tuple->t_self);
+		CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
 
-		ReleaseSysCache(tuple);
-		table_close(rel, RowExclusiveLock);
+		ReleaseSysCache(foreigntuple);
+		table_close(pg_foreign_table, RowExclusiveLock);
 	}
 
 	/*
-- 
2.17.1

0013-avoid-shadow-vars-copyfrom.c-attnum.patch (text/x-diff; charset=us-ascii)
From d6d3eca1d77b40bb18328c0ecf78966d7abd27bb Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:55:10 -0500
Subject: [PATCH 13/26] avoid shadow vars: copyfrom.c: attnum

commit 3a1433674696fbb968bc2120ebd36d9766f49af5
Author: Bruce Momjian <bruce@momjian.us>
Date:   Thu Apr 15 22:36:03 2004 +0000

    Modify COPY for() loop to use attnum as a variable name, not 'i'.
---
 src/backend/commands/copyfrom.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
 				num_defaults;
 	FmgrInfo   *in_functions;
 	Oid		   *typioparams;
-	int			attnum;
 	Oid			in_func_oid;
 	int		   *defmap;
 	ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
 	defmap = (int *) palloc(num_phys_attrs * sizeof(int));
 	defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
 
-	for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+	for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
 	{
 		Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
 
-- 
2.17.1

0014-avoid-shadow-vars-nodeAgg-transno.patch (text/x-diff; charset=us-ascii)
From a8d6f5b7831964a23c9b2a3fc1e0d0a4d71f74c3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:36:13 -0500
Subject: [PATCH 14/26] avoid shadow vars: nodeAgg: transno

commit db80acfc9d50ac56811d22802ab3d822ab313055
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date:   Tue Dec 20 09:20:17 2016 +0200

    Fix sharing Agg transition state of DISTINCT or ordered aggs.
---
 src/backend/executor/nodeAgg.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
 	Datum	   *aggvalues = econtext->ecxt_aggvalues;
 	bool	   *aggnulls = econtext->ecxt_aggnulls;
 	int			aggno;
-	int			transno;
 
 	/*
 	 * If there were any DISTINCT and/or ORDER BY aggregates, sort their
 	 * inputs and run the transition functions.
 	 */
-	for (transno = 0; transno < aggstate->numtrans; transno++)
+	for (int transno = 0; transno < aggstate->numtrans; transno++)
 	{
 		AggStatePerTrans pertrans = &aggstate->pertrans[transno];
 		AggStatePerGroup pergroupstate;
-- 
2.17.1

0015-avoid-shadow-vars-trigger.c-partitionId.patch (text/x-diff; charset=us-ascii)
From 25e68782b1686f8e164174cc03be04deb1950920 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:36:12 -0500
Subject: [PATCH 15/26] avoid shadow vars: trigger.c: partitionId

commit 80ba4bb383538a2ee846fece6a7b8da9518b6866
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Thu Jul 22 18:33:47 2021 -0400

    Make ALTER TRIGGER RENAME consistent for partitioned tables
---
 src/backend/commands/trigger.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..bb4385c6ea9 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
-- 
2.17.1

0016-avoid-shadow-vars-execPartition.c-found_whole_row.patch (text/x-diff; charset=us-ascii)
From 4a5814a70946496eba42f9ac43e9cafac314cc3e Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:20:38 -0500
Subject: [PATCH 16/26] avoid shadow vars: execPartition.c: found_whole_row

commit 158b7bc6d77948d2f474dc9f2777c87f81d1365a
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Apr 16 15:50:57 2018 -0300

    Ignore whole-rows in INSERT/CONFLICT with partitioned tables

See also:

commit 555ee77a9668e3f1b03307055b5027e13bf1a715
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Mar 26 10:43:54 2018 -0300

    Handle INSERT .. ON CONFLICT with partitioned tables
---
 src/backend/executor/execPartition.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 			{
 				List	   *onconflset;
 				List	   *onconflcols;
-				bool		found_whole_row;
 
 				/*
 				 * Translate expressions in onConflictSet to account for
-- 
2.17.1

0017-avoid-shadow-vars-brin-keyno.patch (text/x-diff; charset=us-ascii)
From 89289938e3b6aa46d4dcc387da841d1ba10b2787 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 17:10:55 -0500
Subject: [PATCH 17/26] avoid shadow vars: brin keyno

commit a681e3c107aa97eb554f118935c4d2278892c3dd
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date:   Fri Mar 26 13:17:56 2021 +0100

    Support the old signature of BRIN consistent function
---
 src/backend/access/brin/brin.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 			  **nullkeys;
 	int		   *nkeys,
 			   *nnullkeys;
-	int			keyno;
 	char	   *ptr;
 	Size		len;
 	char	   *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	/* Preprocess the scan keys - split them into per-attribute arrays. */
-	for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+	for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
 	{
 		ScanKey		key = &scan->keyData[keyno];
 		AttrNumber	keyattno = key->sk_attno;
-- 
2.17.1

0018-avoid-shadow-vars-bufmgr.c-j.patch (text/x-diff; charset=us-ascii)
From 0ff18c1521f4057e2dc0affb7321ae1da747586b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 23:52:21 -0500
Subject: [PATCH 18/26] avoid shadow vars: bufmgr.c: j

commit bea449c635c0e68e21610593594c1e5d52842cdd
Author: Amit Kapila <akapila@postgresql.org>
Date:   Wed Jan 13 07:46:11 2021 +0530

    Optimize DropRelFileNodesAllBuffers() for recovery.
---
 src/backend/storage/buffer/bufmgr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 9c1bd508d36..a748efdb942 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
 DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 {
 	int			i;
-	int			j;
 	int			n = 0;
 	SMgrRelation *rels;
 	BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	 */
 	for (i = 0; i < n && cached; i++)
 	{
-		for (j = 0; j <= MAX_FORKNUM; j++)
+		for (int j = 0; j <= MAX_FORKNUM; j++)
 		{
 			/* Get the number of blocks for a relation's fork. */
 			block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	{
 		for (i = 0; i < n; i++)
 		{
-			for (j = 0; j <= MAX_FORKNUM; j++)
+			for (int j = 0; j <= MAX_FORKNUM; j++)
 			{
 				/* ignore relation forks that doesn't exist */
 				if (!BlockNumberIsValid(block[i][j]))
-- 
2.17.1

0019-avoid-shadow-vars-psql-command.c-host.patch (text/x-diff; charset=us-ascii)
From aeb44609d8734ac799bdb941addac0fb82f7e549 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:49:46 -0500
Subject: [PATCH 19/26] avoid shadow vars: psql/command.c: host

commit 87e0b7422d70ff4fb69612ef7ba3cbee6ed8d2ae
Author: Robert Haas <rhaas@postgresql.org>
Date:   Fri Jul 23 14:56:54 2010 +0000

    Have psql avoid describing local sockets as host names.
---
 src/bin/psql/command.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index a81bd3307b4..09950cac60a 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3551,27 +3551,27 @@ do_connect(enum trivalue reuse_previous_specification,
 			param_is_newly_set(PQhost(o_conn), PQhost(pset.db)) ||
 			param_is_newly_set(PQport(o_conn), PQport(pset.db)))
 		{
-			char	   *host = PQhost(pset.db);
-			char	   *hostaddr = PQhostaddr(pset.db);
+			char	   *dbhost = PQhost(pset.db);
+			char	   *dbhostaddr = PQhostaddr(pset.db);
 
-			if (is_unixsock_path(host))
+			if (is_unixsock_path(dbhost))
 			{
 				/* hostaddr overrides host */
-				if (hostaddr && *hostaddr)
+				if (dbhostaddr && *dbhostaddr)
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on address \"%s\" at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), hostaddr, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), dbhostaddr, PQport(pset.db));
 				else
 					printf(_("You are now connected to database \"%s\" as user \"%s\" via socket in \"%s\" at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), dbhost, PQport(pset.db));
 			}
 			else
 			{
-				if (hostaddr && *hostaddr && strcmp(host, hostaddr) != 0)
+				if (dbhostaddr && *dbhostaddr && strcmp(dbhost, dbhostaddr) != 0)
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" (address \"%s\") at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, hostaddr, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), dbhost, dbhostaddr, PQport(pset.db));
 				else
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), dbhost, PQport(pset.db));
 			}
 		}
 		else
-- 
2.17.1

0020-avoid-shadow-vars-ruleutils-dpns.patch (text/x-diff; charset=us-ascii)
From c0a638c97f05e493bf6fee86895d746db89bcbb1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:56:42 -0500
Subject: [PATCH 20/26] avoid shadow vars: ruleutils: dpns

commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date:   Wed Apr 7 21:30:08 2021 +0200

    SQL-standard function body
---
 src/backend/utils/adt/ruleutils.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..44a3b064cad 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -8112,9 +8112,9 @@ get_parameter(Param *param, deparse_context *context)
 				 */
 				foreach(lc, context->namespaces)
 				{
-					deparse_namespace *dpns = lfirst(lc);
+					deparse_namespace *tmp = lfirst(lc);
 
-					if (dpns->rtable_names != NIL)
+					if (tmp->rtable_names != NIL)
 					{
 						should_qualify = true;
 						break;
-- 
2.17.1

0021-avoid-shadow-vars-costsize.c-subpath.patch (text/x-diff; charset=us-ascii)
From c95f78e6a9b0e29fe195fb1cad34309eb5f5a8b3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:54:19 -0500
Subject: [PATCH 21/26] avoid shadow vars: costsize.c: subpath

commit 959d00e9dbe4cfcf4a63bb655ac2c29a5e579246
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Fri Apr 5 19:20:30 2019 -0400

    Use Append rather than MergeAppend for scanning ordered partitions.
---
 src/backend/optimizer/path/costsize.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..504b13da7be 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2545,10 +2545,10 @@ cost_append(AppendPath *apath, PlannerInfo *root)
 			/* Compute rows and costs as sums of subplan rows and costs. */
 			foreach(l, apath->subpaths)
 			{
-				Path	   *subpath = (Path *) lfirst(l);
+				Path	   *sub = (Path *) lfirst(l);
 
-				apath->path.rows += subpath->rows;
-				apath->path.total_cost += subpath->total_cost;
+				apath->path.rows += sub->rows;
+				apath->path.total_cost += sub->total_cost;
 			}
 		}
 		else
-- 
2.17.1

0022-avoid-shadow-vars-partitionfuncs.c-relid.patch (text/x-diff; charset=us-ascii)
From a9c0e3e1b79a961b3386e0cc489966fad820bbb1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:59:47 -0500
Subject: [PATCH 22/26] avoid shadow vars: partitionfuncs.c: relid

commit b96f6b19487fb9802216311b242c01c27c1938de
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Mar 4 16:14:29 2019 -0300

    pg_partition_ancestors
---
 src/backend/utils/adt/partitionfuncs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
index 109dc8023e1..59983381924 100644
--- a/src/backend/utils/adt/partitionfuncs.c
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -238,9 +238,9 @@ pg_partition_ancestors(PG_FUNCTION_ARGS)
 
 	if (funcctx->call_cntr < list_length(ancestors))
 	{
-		Oid			relid = list_nth_oid(ancestors, funcctx->call_cntr);
+		Oid			thisrelid = list_nth_oid(ancestors, funcctx->call_cntr);
 
-		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(thisrelid));
 	}
 
 	SRF_RETURN_DONE(funcctx);
-- 
2.17.1

0023-avoid-shadow-vars-rangetypes_gist.c-range.patch (text/x-diff; charset=us-ascii)
From ab02a73d6d100100cd5a87073e5cdc3c9a0865d1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:19:50 -0500
Subject: [PATCH 23/26] avoid shadow vars: rangetypes_gist.c: range

commit 80da9e68fdd70b796b3a7de3821589513596c0f7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Sun Mar 4 22:50:06 2012 -0500

    Rewrite GiST support code for rangetypes.
---
 src/backend/utils/adt/rangetypes_gist.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c
index fbf39dbf303..a14b8261fdb 100644
--- a/src/backend/utils/adt/rangetypes_gist.c
+++ b/src/backend/utils/adt/rangetypes_gist.c
@@ -1350,10 +1350,10 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache,
 	/* Fill arrays of bounds */
 	for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 	{
-		RangeType  *range = DatumGetRangeTypeP(entryvec->vector[i].key);
+		RangeType  *thisrange = DatumGetRangeTypeP(entryvec->vector[i].key);
 		bool		empty;
 
-		range_deserialize(typcache, range,
+		range_deserialize(typcache, thisrange,
 						  &by_lower[i - FirstOffsetNumber].lower,
 						  &by_lower[i - FirstOffsetNumber].upper,
 						  &empty);
-- 
2.17.1

0024-avoid-shadow-vars-ecpglib-execute.c-len.patch (text/x-diff; charset=us-ascii)
From d35ec6a168342182db48442545dd58649ac85094 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:21:49 -0500
Subject: [PATCH 24/26] avoid shadow vars: ecpglib/execute.c: len

commit a4f25b6a9c2dbf5f38e498922e3761cb3bf46ba0
Author: Michael Meskes <meskes@postgresql.org>
Date:   Sun Mar 16 10:42:54 2003 +0000

    Started working on a seperate pgtypes library. First test work. PLEASE test compilation on iother systems.
---
 src/interfaces/ecpg/ecpglib/execute.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c
index bd94bd4e6c6..2d34f76cd73 100644
--- a/src/interfaces/ecpg/ecpglib/execute.c
+++ b/src/interfaces/ecpg/ecpglib/execute.c
@@ -367,10 +367,10 @@ ecpg_store_result(const PGresult *results, int act_field,
 						/* check strlen for each tuple */
 						for (act_tuple = 0; act_tuple < ntuples; act_tuple++)
 						{
-							int			len = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
+							int			thislen = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
 
-							if (len > var->varcharsize)
-								var->varcharsize = len;
+							if (thislen > var->varcharsize)
+								var->varcharsize = thislen;
 						}
 						var->offset *= var->varcharsize;
 						len = var->offset * ntuples;
-- 
2.17.1

0025-avoid-shadow-vars-autovacuum.c-db.patch (text/x-diff; charset=us-ascii)
From 2faa21ca588f0af51fbd69b4d37dd5c82f8bb5cd Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 21:05:47 -0500
Subject: [PATCH 25/26] avoid shadow vars: autovacuum.c: db

commit e2a186b03cc1a87cf26644db18f28a20f10bd739
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Apr 16 18:30:04 2007 +0000

    Add a multi-worker capability to autovacuum.  This allows multiple worker
---
 src/backend/postmaster/autovacuum.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 70a9176c54c..9d5ddb80974 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1102,14 +1102,14 @@ rebuild_database_list(Oid newdb)
 		 */
 		for (i = 0; i < nelems; i++)
 		{
-			avl_dbase  *db = &(dbary[i]);
+			avl_dbase  *thisdb = &(dbary[i]);
 
 			current_time = TimestampTzPlusMilliseconds(current_time,
 													   millis_increment);
-			db->adl_next_worker = current_time;
+			thisdb->adl_next_worker = current_time;
 
 			/* later elements should go closer to the head of the list */
-			dlist_push_head(&DatabaseList, &db->adl_node);
+			dlist_push_head(&DatabaseList, &thisdb->adl_node);
 		}
 	}
 
-- 
2.17.1

0026-avoid-shadow-vars-basebackup.c-ti.patch (text/x-diff; charset=us-ascii)
From a1c1323ca209e7de7198066843d854cbc9fab127 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:05:30 -0500
Subject: [PATCH 26/26] avoid shadow vars: basebackup.c: ti

commit 3866ff6149a3b072561e65b3f71f63498e77b6b2
Author: Magnus Hagander <magnus@hagander.net>
Date:   Sat Jan 15 19:18:14 2011 +0100

    Enumerate available tablespaces after starting the backup
---
 src/backend/backup/basebackup.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index 715428029b3..b3367dccf74 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -306,9 +306,9 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
 		/* Send off our tablespaces one by one */
 		foreach(lc, state.tablespaces)
 		{
-			tablespaceinfo *ti = (tablespaceinfo *) lfirst(lc);
+			tablespaceinfo *thisti = (tablespaceinfo *) lfirst(lc);
 
-			if (ti->path == NULL)
+			if (thisti->path == NULL)
 			{
 				struct stat statbuf;
 				bool		sendtblspclinks = true;
@@ -342,11 +342,11 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
 			}
 			else
 			{
-				char	   *archive_name = psprintf("%s.tar", ti->oid);
+				char	   *archive_name = psprintf("%s.tar", thisti->oid);
 
 				bbsink_begin_archive(sink, archive_name);
 
-				sendTablespace(sink, ti->path, ti->oid, false, &manifest);
+				sendTablespace(sink, thisti->path, thisti->oid, false, &manifest);
 			}
 
 			/*
@@ -355,7 +355,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
 			 * include the xlog files below and stop afterwards. This is safe
 			 * since the main data directory is always sent *last*.
 			 */
-			if (opt->includewal && ti->path == NULL)
+			if (opt->includewal && thisti->path == NULL)
 			{
 				Assert(lnext(state.tablespaces, lc) == NULL);
 			}
-- 
2.17.1

#5 David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#1)
1 attachment(s)
Re: shadow variables - pg15 edition

On Thu, 18 Aug 2022 at 02:54, Justin Pryzby <pryzby@telsasoft.com> wrote:

The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local

I wonder if it's better to fix the "big hitters" first. The idea
there would be to try to reduce the number of these warnings as
quickly and easily as possible. If we can get the numbers down fairly
significantly without too much effort, then that should provide us
with a bit more motivation to get rid of the remaining ones.

Here are the warnings grouped by the name of the variable:

$ make -s 2>&1 | grep "warning: declaration of" | grep -oP
"‘([_a-zA-Z]{1}[_a-zA-Z0-9]*)’" | sort | uniq -c
2 ‘aclresult’
3 ‘attnum’
1 ‘cell’
1 ‘cell__state’
2 ‘cmp’
2 ‘command’
1 ‘constraintOid’
1 ‘copyTuple’
1 ‘data’
1 ‘db’
1 ‘_do_rethrow’
1 ‘dpns’
1 ‘econtext’
1 ‘entry’
36 ‘expected’
1 ‘first’
1 ‘found_whole_row’
1 ‘host’
20 ‘i’
1 ‘iclause’
1 ‘idxs’
1 ‘i_oid’
4 ‘isnull’
1 ‘it’
2 ‘item’
1 ‘itemno’
1 ‘j’
1 ‘jtc’
1 ‘k’
1 ‘keyno’
7 ‘l’
13 ‘lc’
4 ‘lc__state’
1 ‘len’
1 ‘_local_sigjmp_buf’
1 ‘name’
2 ‘now’
1 ‘owning_tab’
1 ‘page’
1 ‘partitionId’
2 ‘path’
3 ‘proc’
1 ‘proclock’
1 ‘querytree_list’
1 ‘range’
1 ‘rel’
1 ‘relation’
1 ‘relid’
1 ‘rightop’
2 ‘rinfo’
1 ‘_save_context_stack’
1 ‘save_errno’
1 ‘_save_exception_stack’
1 ‘slot’
1 ‘sqlca’
9 ‘startelem’
1 ‘stmt_list’
2 ‘str’
1 ‘subpath’
1 ‘tbinfo’
1 ‘ti’
1 ‘transno’
1 ‘ttype’
1 ‘tuple’
5 ‘val’
1 ‘value2’
1 ‘wco’
1 ‘xid’
1 ‘xlogfname’

The top 5 by count here account for about half of the warnings, so
maybe is best to start with those? Likely the ones ending in __state
will fix themselves when you fix the variable with the same name
without that suffix.

The attached patch targets fixing the "expected" variable.

$ ./configure --prefix=/home/drowley/pg
CFLAGS="-Wshadow=compatible-local" > /dev/null
$ make clean -s
$ make -j -s 2>&1 | grep "warning: declaration of" | wc -l
153
$ make clean -s
$ patch -p1 < reduce_local_variable_shadow_warnings_in_regress.c.patch
$ make -j -s 2>&1 | grep "warning: declaration of" | wc -l
117

So 36 fewer warnings with the attached.

I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause.

What do you think?

David

Attachments:

reduce_local_variable_shadow_warnings_in_regress.c.patch (text/plain; charset=US-ASCII)
diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c
index ba3532a51e..6d285255dd 100644
--- a/src/test/regress/regress.c
+++ b/src/test/regress/regress.c
@@ -56,22 +56,22 @@
 
 #define EXPECT_EQ_U32(result_expr, expected_expr)	\
 	do { \
-		uint32		result = (result_expr); \
-		uint32		expected = (expected_expr); \
-		if (result != expected) \
+		uint32		actual_result = (result_expr); \
+		uint32		expected_result = (expected_expr); \
+		if (actual_result != expected_result) \
 			elog(ERROR, \
 				 "%s yielded %u, expected %s in file \"%s\" line %u", \
-				 #result_expr, result, #expected_expr, __FILE__, __LINE__); \
+				 #result_expr, actual_result, #expected_expr, __FILE__, __LINE__); \
 	} while (0)
 
 #define EXPECT_EQ_U64(result_expr, expected_expr)	\
 	do { \
-		uint64		result = (result_expr); \
-		uint64		expected = (expected_expr); \
-		if (result != expected) \
+		uint64		actual_result = (result_expr); \
+		uint64		expected_result = (expected_expr); \
+		if (actual_result != expected_result) \
 			elog(ERROR, \
 				 "%s yielded " UINT64_FORMAT ", expected %s in file \"%s\" line %u", \
-				 #result_expr, result, #expected_expr, __FILE__, __LINE__); \
+				 #result_expr, actual_result, #expected_expr, __FILE__, __LINE__); \
 	} while (0)
 
 #define LDELIM			'('
#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Michael Paquier (#3)
Re: shadow variables - pg15 edition

Michael Paquier <michael@paquier.xyz> writes:

A lot of the changes proposed here update the code so as the same
variable gets used across more code paths by removing declarations,
but we have two variables defined because both are aimed to be used in
a different context (see AttachPartitionEnsureIndexes() in tablecmds.c
for example).

Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?

Yeah. I do not think a patch of this sort has any business changing
the scopes of variables. That moves it out of "cosmetic cleanup"
and into "hm, I wonder if this introduces any bugs". Most hackers
are going to decide that they have better ways to spend their time
than doing that level of analysis for a very noncritical patch.

regards, tom lane

#7 Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#5)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:

I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause.

What do you think?

You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that ?

BTW, one of the remaining warnings seems to be another buglet, which I'll write
about at a later date.

--
Justin

#8 David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#7)
1 attachment(s)
Re: shadow variables - pg15 edition

On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:

On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:

I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause.

What do you think?

You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that ?

Alright, I made a pass over the 0001-0008 patches.

0001. I'd also rather see these 4 renamed:

+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
  PQExpBuffer loHorizonQry = createPQExpBuffer();
  int i_relfrozenxid,
  i_relfilenode,
- i_oid,
  i_relminmxid;

Adding an extra 'i' (for inner) on the front seems fine to me.

0002. I don't really like the "my" name. I also see you've added the
word "this" to many other variables that are shadowing. It feels kinda
like you're missing a "self" and a "me" in there somewhere! :)

@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo
tblinfo[], int numTables)
appendPQExpBufferChar(tbloids, '{');
for (int i = 0; i < numTables; i++)
{
- TableInfo *tbinfo = &tblinfo[i];
+ TableInfo *mytbinfo = &tblinfo[i];

How about just "tinfo"?

0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.

@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
  */
  if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
  {
- TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+ TableInfo  *this_owning_tab = findTableByOid(tbinfo->owning_tab);

0004. I would rather people used foreach_current_index(lc) > 0 to
determine when we're not doing the first iteration of a foreach loop.
I understand there are more complex cases with filtering where this
cannot be done, but these are highly simple, and using
foreach_current_index() removes multiple lines of code and makes it
look nicer.

@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
TupleTableSlot *slot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
- bool first = true;

+ first = true;
initStringInfo(&pub_names);
foreach(lc, MySubscription->publications)

0005. How about just "tslot". I'm not a fan of "this".

+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,
  if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
  {
  WalRcvExecResult *pubres;
- TupleTableSlot *slot;
+ TupleTableSlot *thisslot;

0006. A see the outer shadowed counterpart is used to add a new backup
type. Since I'm not a fan of "this", how about the outer one gets
renamed to "newtype"?

+++ b/src/backend/backup/basebackup_target.c
@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,
  /* Search the target type list for an existing entry with this name. */
  foreach(lc, BaseBackupTargetTypeList)
  {
- BaseBackupTargetType *ttype = lfirst(lc);
+ BaseBackupTargetType *this_ttype = lfirst(lc);

0007. Meh, more "this". How about just "col".

+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext
*cxt, JsonTablePlan *plan,
  /* transform all nested columns into cross/union join */
  foreach(lc, columns)
  {
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+ JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));

There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.

0008. Sorry, I had to change this one too. I just have an aversion to
variables named "temp" or "tmp".

+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32
typmod, JsonbValue *res)
  if (JsonContainerIsScalar(&jb->root))
  {
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool tmp PG_USED_FOR_ASSERTS_ONLY;
- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ tmp = JsonbExtractScalar(&jb->root, jbv);
+ Assert(tmp);

I've attached a patch which does things more along the lines of how I
would have done it. I don't think we should be back patching this
stuff.

Any objections to pushing this to master only?

David

Attachments:

shadow_pg15.patch (text/plain; charset=US-ASCII)
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e3205..f280660a03 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -62,7 +62,7 @@ BaseBackupAddTarget(char *name,
 					void *(*check_detail) (char *, char *),
 					bbsink *(*get_sink) (bbsink *, void *))
 {
-	BaseBackupTargetType *ttype;
+	BaseBackupTargetType *newtype;
 	MemoryContext oldcontext;
 	ListCell   *lc;
 
@@ -96,11 +96,11 @@ BaseBackupAddTarget(char *name,
 	 * name into a newly-allocated chunk of memory.
 	 */
 	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
-	ttype = palloc(sizeof(BaseBackupTargetType));
-	ttype->name = pstrdup(name);
-	ttype->check_detail = check_detail;
-	ttype->get_sink = get_sink;
-	BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, ttype);
+	newtype = palloc(sizeof(BaseBackupTargetType));
+	newtype->name = pstrdup(name);
+	newtype->check_detail = check_detail;
+	newtype->get_sink = get_sink;
+	BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, newtype);
 	MemoryContextSwitchTo(oldcontext);
 }
 
diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017e..3e94071248 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
 		/* transform all nested columns into cross/union join */
 		foreach(lc, columns)
 		{
-			JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+			JsonTableColumn *col = castNode(JsonTableColumn, lfirst(lc));
 			Node	   *node;
 
-			if (jtc->coltype != JTC_NESTED)
+			if (col->coltype != JTC_NESTED)
 				continue;
 
-			node = transformNestedJsonTableColumn(cxt, jtc, plan);
+			node = transformNestedJsonTableColumn(cxt, col, plan);
 
 			/* join transformed node with previous sibling nodes */
 			res = res ? makeJsonTableSiblingJoin(cross, res, node) : node;
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index bfcb80b495..d37d8a0d74 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -707,7 +707,6 @@ fetch_remote_table_info(char *nspname, char *relname,
 	bool		isnull;
 	int			natt;
 	ListCell   *lc;
-	bool		first;
 	Bitmapset  *included_cols = NULL;
 
 	lrel->nspname = nspname;
@@ -759,18 +758,15 @@ fetch_remote_table_info(char *nspname, char *relname,
 	if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
 	{
 		WalRcvExecResult *pubres;
-		TupleTableSlot *slot;
+		TupleTableSlot *tslot;
 		Oid			attrsRow[] = {INT2VECTOROID};
 		StringInfoData pub_names;
-		bool		first = true;
-
 		initStringInfo(&pub_names);
 		foreach(lc, MySubscription->publications)
 		{
-			if (!first)
+			if (foreach_current_index(lc) > 0)
 				appendStringInfo(&pub_names, ", ");
 			appendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));
-			first = false;
 		}
 
 		/*
@@ -819,10 +815,10 @@ fetch_remote_table_info(char *nspname, char *relname,
 		 * If we find a NULL value, it means all the columns should be
 		 * replicated.
 		 */
-		slot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
-		if (tuplestore_gettupleslot(pubres->tuplestore, true, false, slot))
+		tslot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
+		if (tuplestore_gettupleslot(pubres->tuplestore, true, false, tslot))
 		{
-			Datum		cfval = slot_getattr(slot, 1, &isnull);
+			Datum		cfval = slot_getattr(tslot, 1, &isnull);
 
 			if (!isnull)
 			{
@@ -838,9 +834,9 @@ fetch_remote_table_info(char *nspname, char *relname,
 					included_cols = bms_add_member(included_cols, elems[natt]);
 			}
 
-			ExecClearTuple(slot);
+			ExecClearTuple(tslot);
 		}
-		ExecDropSingleTupleTableSlot(slot);
+		ExecDropSingleTupleTableSlot(tslot);
 
 		walrcv_clear_result(pubres);
 
@@ -950,14 +946,11 @@ fetch_remote_table_info(char *nspname, char *relname,
 
 		/* Build the pubname list. */
 		initStringInfo(&pub_names);
-		first = true;
 		foreach(lc, MySubscription->publications)
 		{
 			char	   *pubname = strVal(lfirst(lc));
 
-			if (first)
-				first = false;
-			else
+			if (foreach_current_index(lc) > 0)
 				appendStringInfoString(&pub_names, ", ");
 
 			appendStringInfoString(&pub_names, quote_literal_cstr(pubname));
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 5b6a480572..9c381ae727 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
 
 				if (JsonContainerIsScalar(&jb->root))
 				{
-					bool		res PG_USED_FOR_ASSERTS_ONLY;
+					bool		result PG_USED_FOR_ASSERTS_ONLY;
 
-					res = JsonbExtractScalar(&jb->root, jbv);
-					Assert(res);
+					result = JsonbExtractScalar(&jb->root, jbv);
+					Assert(result);
 				}
 				else
 					JsonbInitBinary(jbv, jb);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a..2c68915732 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3142,10 +3142,10 @@ dumpDatabase(Archive *fout)
 		PQExpBuffer loFrozenQry = createPQExpBuffer();
 		PQExpBuffer loOutQry = createPQExpBuffer();
 		PQExpBuffer loHorizonQry = createPQExpBuffer();
-		int			i_relfrozenxid,
-					i_relfilenode,
-					i_oid,
-					i_relminmxid;
+		int			ii_relfrozenxid,
+					ii_relfilenode,
+					ii_oid,
+					ii_relminmxid;
 
 		/*
 		 * pg_largeobject
@@ -3163,10 +3163,10 @@ dumpDatabase(Archive *fout)
 
 		lo_res = ExecuteSqlQuery(fout, loFrozenQry->data, PGRES_TUPLES_OK);
 
-		i_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
-		i_relminmxid = PQfnumber(lo_res, "relminmxid");
-		i_relfilenode = PQfnumber(lo_res, "relfilenode");
-		i_oid = PQfnumber(lo_res, "oid");
+		ii_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
+		ii_relminmxid = PQfnumber(lo_res, "relminmxid");
+		ii_relfilenode = PQfnumber(lo_res, "relfilenode");
+		ii_oid = PQfnumber(lo_res, "oid");
 
 		appendPQExpBufferStr(loHorizonQry, "\n-- For binary upgrade, set pg_largeobject relfrozenxid and relminmxid\n");
 		appendPQExpBufferStr(loOutQry, "\n-- For binary upgrade, preserve pg_largeobject and index relfilenodes\n");
@@ -3178,12 +3178,12 @@ dumpDatabase(Archive *fout)
 			appendPQExpBuffer(loHorizonQry, "UPDATE pg_catalog.pg_class\n"
 							  "SET relfrozenxid = '%u', relminmxid = '%u'\n"
 							  "WHERE oid = %u;\n",
-							  atooid(PQgetvalue(lo_res, i, i_relfrozenxid)),
-							  atooid(PQgetvalue(lo_res, i, i_relminmxid)),
-							  atooid(PQgetvalue(lo_res, i, i_oid)));
+							  atooid(PQgetvalue(lo_res, i, ii_relfrozenxid)),
+							  atooid(PQgetvalue(lo_res, i, ii_relminmxid)),
+							  atooid(PQgetvalue(lo_res, i, ii_oid)));
 
-			oid = atooid(PQgetvalue(lo_res, i, i_oid));
-			relfilenumber = atooid(PQgetvalue(lo_res, i, i_relfilenode));
+			oid = atooid(PQgetvalue(lo_res, i, ii_oid));
+			relfilenumber = atooid(PQgetvalue(lo_res, i, ii_relfilenode));
 
 			if (oid == LargeObjectRelationId)
 				appendPQExpBuffer(loOutQry,
@@ -7081,21 +7081,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
 	appendPQExpBufferChar(tbloids, '{');
 	for (int i = 0; i < numTables; i++)
 	{
-		TableInfo  *tbinfo = &tblinfo[i];
+		TableInfo  *tinfo = &tblinfo[i];
 
 		/*
 		 * For partitioned tables, foreign keys have no triggers so they must
 		 * be included anyway in case some foreign keys are defined.
 		 */
-		if ((!tbinfo->hastriggers &&
-			 tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
-			!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+		if ((!tinfo->hastriggers &&
+			 tinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+			!(tinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
 			continue;
 
 		/* OK, we need info for this table */
 		if (tbloids->len > 1)	/* do we have more than the '{'? */
 			appendPQExpBufferChar(tbloids, ',');
-		appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+		appendPQExpBuffer(tbloids, "%u", tinfo->dobj.catId.oid);
 	}
 	appendPQExpBufferChar(tbloids, '}');
 
@@ -16800,7 +16800,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
 	 */
 	if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
 	{
-		TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+		owning_tab = findTableByOid(tbinfo->owning_tab);
 
 		if (owning_tab == NULL)
 			pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
#9 Peter Smith
smithpb2250@gmail.com
In reply to: David Rowley (#8)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 5:27 PM David Rowley <dgrowleyml@gmail.com> wrote:

On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:

On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:

I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause.

What do you think?

You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that ?

Alright, I made a pass over the 0001-0008 patches.

...

0005. How about just "tslot". I'm not a fan of "this".

(I'm sure there are others like this; I just picked this one as an example)

AFAICT the offending 'slot' really should have never been declared at
all at the local scope in the first place - e.g. the other code in
this function seems happy enough with the pattern of just re-using the
function scoped 'slot'.

I understand that for this shadow patch changing the var-name is
considered the saner/safer way than tampering with the scope, but
perhaps it is still useful to include a comment when changing ones
like this?

e.g.
+ TupleTableSlot *tslot; /* TODO - Why declare this at all? Shouldn't
it just re-use the 'slot' at function scope? */

Otherwise, such knowledge will be lost, and nobody will ever know to
revisit them, which feels a bit more like *hiding* the mistake than
fixing it.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#10 Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#8)
Re: shadow variables - pg15 edition

On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:

0001. I'd also rather see these 4 renamed:

..

0002. I don't really like the "my" name. I also see you've added the

..

How about just "tinfo"?

..

0005. How about just "tslot". I'm not a fan of "this".

..

Since I'm not a fan of "this", how about the outer one gets renamed

..

0007. Meh, more "this". How about just "col".

..

0008. Sorry, I had to change this one too.

I agree that ii_oid and newtype are better names (although it's a bit
unfortunate to rename the outer "ttype" var of wider scope).

0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.

And that's what my original patch did, before people insisted that the patches
shouldn't change variable scope. Now it's back to where I started.

There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.

I think if it were reverted, it'd be in both branches.

I've attached a patch which does things more along the lines of how I
would have done it. I don't think we should be back patching this
stuff.

Any objections to pushing this to master only?

I won't object, but some of your changes are what makes backpatching this less
reasonable (foreach_current_index and newtype). I had made these v15 patches
first to simplify backpatching, since having the same code in v15 means that
there's no backpatch hazard for this new-in-v15 code.

I am open to presenting the patches differently, but we need to come up with
a better process than one person writing patches and someone else rewriting them.
I also don't see the value of debating which order to write the patches in.
Grouping by variable name or doing other statistical analysis doesn't change
the fact that there are 50+ issues to address to allow -Wshadow to be usable.

Maybe these would be helpful ?
- if I publish the patches on github;
- if I send the patches with more context;
- if you have a suggestion/objection/complaint with a patch, I can address it
and/or re-arrange the patchset so this is later, and all the polished
patches are presented first.

--
Justin

#11 Peter Smith
smithpb2250@gmail.com
In reply to: Justin Pryzby (#10)
Re: shadow variables - pg15 edition

On Fri, Aug 19, 2022 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:

On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:

0001. I'd also rather see these 4 renamed:

..

0002. I don't really like the "my" name. I also see you've added the

..

How about just "tinfo"?

..

0005. How about just "tslot". I'm not a fan of "this".

..

Since I'm not a fan of "this", how about the outer one gets renamed

..

0007. Meh, more "this". How about just "col".

..

0008. Sorry, I had to change this one too.

I agree that ii_oid and newtype are better names (although it's a bit
unfortunate to rename the outer "ttype" var of wider scope).

0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.

And that's what my original patch did, before people insisted that the patches
shouldn't change variable scope. Now it's back to where I started.

There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.

I think if it were reverted, it'd be in both branches.

I've attached a patch which does things more along the lines of how I
would have done it. I don't think we should be back patching this
stuff.

Any objections to pushing this to master only?

I won't object, but some of your changes are what makes backpatching this less
reasonable (foreach_current_index and newtype). I had made these v15 patches
first to simplify backpatching, since having the same code in v15 means that
there's no backpatch hazard for this new-in-v15 code.

I am open to presenting the patches differently, but we need to come up with
a better process than one person writing patches and someone else rewriting them.
I also don't see the value of debating which order to write the patches in.
Grouping by variable name or doing other statistical analysis doesn't change
the fact that there are 50+ issues to address to allow -Wshadow to be usable.

Maybe these would be helpful ?
- if I publish the patches on github;
- if I send the patches with more context;
- if you have a suggestion/objection/complaint with a patch, I can address it
and/or re-arrange the patchset so this is later, and all the polished
patches are presented first.

Starting off with patches might come to grief, and it won't be much
fun rearranging patches over and over.

Because there are so many changes, I think it would be better to
attack this task methodically:

STEP 1 - Capture every shadow warning and categorise exactly what kind
it is, e.g. maybe do this as some XLS which can be shared. The last
time I looked there were hundreds of instances, but I expect there
will be fewer than a couple of dozen different *categories* of them.

e.g. shadow of a global var
e.g. shadow of a function param
e.g. shadow of a function var in a code block for the exact same usage
e.g. shadow of a function var in a code block for some 'tmp' var
e.g. shadow of a function var in a code block due to a mistake
e.g. shadow of a function var by some loop index
e.g. shadow of a function var for some loop 'first' handling
e.g. bug
etc...

STEP 2 - Define your rules for how intend to address each of these
kinds of shadows (e.g. just simple rename of the var, use
'foreach_current_index', ...). Hopefully, it will be easy to reach an
agreement now since all instances of the same kind will look pretty
much the same.

STEP 3 - Fix all of the same kinds of shadows per single patch (using
the already agreed fix approach from step 2).

REPEAT STEPS 2,3 until done.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#12 David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#10)
Re: shadow variables - pg15 edition

On Fri, 19 Aug 2022 at 11:21, Justin Pryzby <pryzby@telsasoft.com> wrote:

On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:

Any objections to pushing this to master only?

I won't object, but some of your changes are what makes backpatching this less
reasonable (foreach_current_index and newtype). I had made these v15 patches
first to simplify backpatching, since having the same code in v15 means that
there's no backpatch hazard for this new-in-v15 code.

I spent a bit more time on this and I see that make check-world does
fail if I change either of the foreach_current_index() changes to be
incorrect. e.g change the condition from "> 0" to be "== 0", "> 1" or
"> -1".

As for the newtype change, I was inclined to give the variable name
with the most meaning to the one that's in scope for longer.

I'm starting to feel like it would be ok to backpatch these
new-to-pg-15 changes back into PG15. The reason I think this is that
they all seem low enough risk that it's probably more risky to not
backpatch and risk bugs being introduced due to mistakes being made in
conflict resolution when future patches don't apply. It was the
failing tests I mentioned above that swayed me on this.

I am opened to presenting the patches differently, but we need to come up with
a better process than one person writing patches and someone else rewriting it.

It wasn't my intention to purposefully rewrite everything. It's just
that in order to get the work into something I was willing to commit,
that's how it ended up. I did it myself rather than asking you to
because doing so required fewer keystrokes, less mental effort, and
less time. It's not my intention to do that
for any personal credit. I'm happy for you to take that. I'd just
rather not be batting such trivial patches over the fence at each
other for days or weeks. The effort-to-reward ratio for that is
probably going to drop below my threshold after a few rounds.

David

#13 Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#12)
Re: shadow variables - pg15 edition

On Fri, Aug 19, 2022 at 03:37:52PM +1200, David Rowley wrote:

I'm happy for you to take that. I'd just rather not be batting such trivial
patches over the fence at each other for days or weeks.

Yes, thanks for that.
I read through your patch, which looks fine.
Let me know what I can do when it's time for round two.

--
Justin

#14 David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#13)
Re: shadow variables - pg15 edition

On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:

Let me know what I can do when it's time for round two.

I pushed the modified 0001-0008 patches earlier today and also the one
I wrote to fixup the 36 warnings about "expected" being shadowed.

I looked through a bunch of your remaining patches and was a bit
unexcited to see many more renaming such as:

- List    *querytree_list;
+ List    *this_querytree_list;

I don't think this sort of thing is an improvement.

However, one category of these changes that I do like are the ones
where we can move the variable into an inner scope. Out of your
renaming 0009-0026 patches, these are:

0013
0014
0017
0018

I feel like having the variable in scope for the minimal amount of
time makes the code cleaner and I feel like these are good next steps
because:

a) no variable needs to be renamed
b) any backpatching issue is more likely to lead to a compilation
failure rather than to use of the wrong variable.

Likely 0016 is a subcategory of the above, as if you modified that
patch to follow this rule then you'd have to declare the variable a
few times. I think that category is less interesting, and we can maybe
consider those after we're done with the simpler ones.

Do you want to submit a series of patches that fixes all of the
remaining warnings in this category? Once these are done we can
consider the best way to fix the remaining ones, and whether we want
to fix them at all.

Feel free to gzip the patches up if the number is large.

David

#15 Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#14)
1 attachment(s)
Re: shadow variables - pg15 edition

On Sat, Aug 20, 2022 at 09:17:41PM +1200, David Rowley wrote:

On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:

Let me know what I can do when it's time for round two.

I pushed the modified 0001-0008 patches earlier today and also the one
I wrote to fixup the 36 warnings about "expected" being shadowed.

Thank you

I looked through a bunch of your remaining patches and was a bit
unexcited to see many more renaming such as:

Yes - after Michael said that was the sane procedure, I had rearranged the
patch series to present first those patches which renamed variables ..

However, one category of these changes that I do like are the ones
where we can move the variable into an inner scope.

There are a lot of these, which ISTM is a good thing.
This fixes about half of the remaining warnings.

https://github.com/justinpryzby/postgres/tree/avoid-shadow-vars
You can review without applying the patches, either on the webpage or
(probably better) by adding it as a git remote. Attached is a squashed version.

--
Justin

Attachments:

v2.txt (text/plain; charset=us-ascii)
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 			  **nullkeys;
 	int		   *nkeys,
 			   *nnullkeys;
-	int			keyno;
 	char	   *ptr;
 	Size		len;
 	char	   *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	/* Preprocess the scan keys - split them into per-attribute arrays. */
-	for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+	for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
 	{
 		ScanKey		key = &scan->keyData[keyno];
 		AttrNumber	keyattno = key->sk_attno;
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 10d4f17bc6f..524c1846b83 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -582,7 +582,6 @@ brin_range_serialize(Ranges *range)
 	int			typlen;
 	bool		typbyval;
 
-	int			i;
 	char	   *ptr;
 
 	/* simple sanity checks */
@@ -621,18 +620,14 @@ brin_range_serialize(Ranges *range)
 	 */
 	if (typlen == -1)			/* varlena */
 	{
-		int			i;
-
-		for (i = 0; i < nvalues; i++)
+		for (int i = 0; i < nvalues; i++)
 		{
 			len += VARSIZE_ANY(range->values[i]);
 		}
 	}
 	else if (typlen == -2)		/* cstring */
 	{
-		int			i;
-
-		for (i = 0; i < nvalues; i++)
+		for (int i = 0; i < nvalues; i++)
 		{
 			/* don't forget to include the null terminator ;-) */
 			len += strlen(DatumGetCString(range->values[i])) + 1;
@@ -662,7 +657,7 @@ brin_range_serialize(Ranges *range)
 	 */
 	ptr = serialized->data;		/* start of the serialized data */
 
-	for (i = 0; i < nvalues; i++)
+	for (int i = 0; i < nvalues; i++)
 	{
 		if (typbyval)			/* simple by-value data types */
 		{
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 5866c6aaaf7..30069f139c7 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -234,7 +234,6 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 	Page		page = BufferGetPage(buffer);
 	bool		is_leaf = (GistPageIsLeaf(page)) ? true : false;
 	XLogRecPtr	recptr;
-	int			i;
 	bool		is_split;
 
 	/*
@@ -420,7 +419,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 		{
 			char	   *data = (char *) (ptr->list);
 
-			for (i = 0; i < ptr->block.num; i++)
+			for (int i = 0; i < ptr->block.num; i++)
 			{
 				IndexTuple	thistup = (IndexTuple) data;
 
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..46e3bb55ebb 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3036,8 +3036,6 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
 	pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
 	if (pg_fsync(fd) != 0)
 	{
-		int			save_errno = errno;
-
 		close(fd);
 		errno = save_errno;
 		ereport(ERROR,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
 	 */
 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
 	{
-		Relation	rel;
-		HeapTuple	tuple;
+		Relation	pg_foreign_table;
+		HeapTuple	foreigntuple;
 
-		rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+		pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-		tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-		if (!HeapTupleIsValid(tuple))
+		foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(foreigntuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-		CatalogTupleDelete(rel, &tuple->t_self);
+		CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
 
-		ReleaseSysCache(tuple);
-		table_close(rel, RowExclusiveLock);
+		ReleaseSysCache(foreigntuple);
+		table_close(pg_foreign_table, RowExclusiveLock);
 	}
 
 	/*
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
 				num_defaults;
 	FmgrInfo   *in_functions;
 	Oid		   *typioparams;
-	int			attnum;
 	Oid			in_func_oid;
 	int		   *defmap;
 	ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
 	defmap = (int *) palloc(num_phys_attrs * sizeof(int));
 	defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
 
-	for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+	for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
 	{
 		Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
 
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 667f2a4cd16..3c6e09815e0 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -565,7 +565,6 @@ DefineIndex(Oid relationId,
 	Oid			root_save_userid;
 	int			root_save_sec_context;
 	int			root_save_nestlevel;
-	int			i;
 
 	root_save_nestlevel = NewGUCNestLevel();
 
@@ -1047,7 +1046,7 @@ DefineIndex(Oid relationId,
 	 * We disallow indexes on system columns.  They would not necessarily get
 	 * updated correctly, and they don't seem useful anyway.
 	 */
-	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
 	{
 		AttrNumber	attno = indexInfo->ii_IndexAttrNumbers[i];
 
@@ -1067,7 +1066,7 @@ DefineIndex(Oid relationId,
 		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
 		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
 
-		for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
+		for (int i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
 		{
 			if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber,
 							  indexattrs))
@@ -1243,7 +1242,7 @@ DefineIndex(Oid relationId,
 			 * If none matches, build a new index by calling ourselves
 			 * recursively with the same options (except for the index name).
 			 */
-			for (i = 0; i < nparts; i++)
+			for (int i = 0; i < nparts; i++)
 			{
 				Oid			childRelid = part_oids[i];
 				Relation	childrel;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -106,7 +106,7 @@ parse_publication_options(ParseState *pstate,
 		{
 			char	   *publish;
 			List	   *publish_list;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			if (*publish_given)
 				errorConflictingDefElem(defel, pstate);
@@ -129,9 +129,9 @@ parse_publication_options(ParseState *pstate,
 						 errmsg("invalid list syntax for \"publish\" option")));
 
 			/* Process the option list. */
-			foreach(lc, publish_list)
+			foreach(lc2, publish_list)
 			{
-				char	   *publish_opt = (char *) lfirst(lc);
+				char	   *publish_opt = (char *) lfirst(lc2);
 
 				if (strcmp(publish_opt, "insert") == 0)
 					pubactions->pubinsert = true;
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 		Oid			constrOid;
 		ObjectAddress address,
 					referenced;
-		ListCell   *cell;
+		ListCell   *lc;
 		Oid			insertTriggerOid,
 					updateTriggerOid;
 
@@ -10276,9 +10276,9 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 		 * don't need to recurse to partitions for this constraint.
 		 */
 		attached = false;
-		foreach(cell, partFKs)
+		foreach(lc, partFKs)
 		{
-			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
 
 			if (tryAttachPartitionForeignKey(fk,
 											 RelationGetRelid(partRel),
@@ -16796,7 +16796,6 @@ PreCommit_on_commit_actions(void)
 	if (oids_to_drop != NIL)
 	{
 		ObjectAddresses *targetObjects = new_object_addresses();
-		ListCell   *l;
 
 		foreach(l, oids_to_drop)
 		{
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..f1801a160ed 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1149,7 +1149,6 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
 		PartitionDesc partdesc = RelationGetPartitionDesc(rel, true);
 		List	   *idxs = NIL;
 		List	   *childTbls = NIL;
-		ListCell   *l;
 		int			i;
 		MemoryContext oldcxt,
 					perChildCxt;
@@ -1181,7 +1180,8 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
 		for (i = 0; i < partdesc->nparts; i++)
 		{
 			Oid			indexOnChild = InvalidOid;
-			ListCell   *l2;
+			ListCell   *l,
+				   *l2;
 			CreateTrigStmt *childStmt;
 			Relation	childTbl;
 			Node	   *qual;
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -233,8 +233,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 	 */
 	if (!(params.options & VACOPT_ANALYZE))
 	{
-		ListCell   *lc;
-
 		foreach(lc, vacstmt->rels)
 		{
 			VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 			{
 				List	   *onconflset;
 				List	   *onconflcols;
-				bool		found_whole_row;
 
 				/*
 				 * Translate expressions in onConflictSet to account for
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
 	Datum	   *aggvalues = econtext->ecxt_aggvalues;
 	bool	   *aggnulls = econtext->ecxt_aggnulls;
 	int			aggno;
-	int			transno;
 
 	/*
 	 * If there were any DISTINCT and/or ORDER BY aggregates, sort their
 	 * inputs and run the transition functions.
 	 */
-	for (transno = 0; transno < aggstate->numtrans; transno++)
+	for (int transno = 0; transno < aggstate->numtrans; transno++)
 	{
 		AggStatePerTrans pertrans = &aggstate->pertrans[transno];
 		AggStatePerGroup pergroupstate;
@@ -3188,7 +3187,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			numGroupingSets = 1;
 	int			numPhases;
 	int			numHashes;
-	int			i = 0;
 	int			j = 0;
 	bool		use_hashing = (node->aggstrategy == AGG_HASHED ||
 							   node->aggstrategy == AGG_MIXED);
@@ -3279,7 +3277,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
 
-	for (i = 0; i < numGroupingSets; ++i)
+	for (int i = 0; i < numGroupingSets; ++i)
 	{
 		ExecAssignExprContext(estate, &aggstate->ss.ps);
 		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
@@ -3419,10 +3417,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 			AggStatePerPhase phasedata = &aggstate->phases[0];
 			AggStatePerHash perhash;
 			Bitmapset  *cols = NULL;
+			int			setno = phasedata->numsets++;
 
 			Assert(phase == 0);
-			i = phasedata->numsets++;
-			perhash = &aggstate->perhash[i];
+			perhash = &aggstate->perhash[setno];
 
 			/* phase 0 always points to the "real" Agg in the hash case */
 			phasedata->aggnode = node;
@@ -3431,12 +3429,12 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 			/* but the actual Agg node representing this hash is saved here */
 			perhash->aggnode = aggnode;
 
-			phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+			phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
 
 			for (j = 0; j < aggnode->numCols; ++j)
 				cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
-			phasedata->grouped_cols[i] = cols;
+			phasedata->grouped_cols[setno] = cols;
 
 			all_grouped_cols = bms_add_members(all_grouped_cols, cols);
 			continue;
@@ -3450,6 +3448,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 			if (num_sets)
 			{
+				int			i;
 				phasedata->gset_lengths = palloc(num_sets * sizeof(int));
 				phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
 
@@ -3535,9 +3534,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	/*
 	 * Convert all_grouped_cols to a descending-order list.
 	 */
-	i = -1;
-	while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
-		aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	{
+		int			i = -1;
+		while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+			aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	}
 
 	/*
 	 * Set up aggregate-result storage in the output expr context, and also
@@ -3561,7 +3562,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 
 	if (node->aggstrategy != AGG_HASHED)
 	{
-		for (i = 0; i < numGroupingSets; i++)
+		for (int i = 0; i < numGroupingSets; i++)
 		{
 			pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
 													  * numaggs);
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1650,16 +1650,16 @@ interpret_ident_response(const char *ident_response,
 						return false;
 					else
 					{
-						int			i;	/* Index into *ident_user */
+						int			j;	/* Index into *ident_user */
 
 						cursor++;	/* Go over colon */
 						while (pg_isblank(*cursor))
 							cursor++;	/* skip blanks */
 						/* Rest of line is user name.  Copy it over. */
-						i = 0;
+						j = 0;
-						while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
+						while (*cursor != '\r' && j < IDENT_USERNAME_MAX)
-							ident_user[i++] = *cursor++;
-						ident_user[i] = '\0';
+							ident_user[j++] = *cursor++;
+						ident_user[j] = '\0';
 						return true;
 					}
 				}
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2447,7 +2447,6 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
 	int			arrlen;
 	ListCell   *l;
 	ListCell   *cell;
-	int			i;
 	int			path_index;
 	int			min_index;
 	int			max_index;
@@ -2486,7 +2485,6 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
 	for_each_cell(l, subpaths, cell)
 	{
 		Path	   *subpath = (Path *) lfirst(l);
-		int			i;
 
 		/* Consider only the non-partial paths */
 		if (path_index++ == numpaths)
@@ -2495,7 +2493,8 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
 		costarr[min_index] += subpath->total_cost;
 
 		/* Update the new min cost array index */
-		for (min_index = i = 0; i < arrlen; i++)
+		min_index = 0;
+		for (int i = 0; i < arrlen; i++)
 		{
 			if (costarr[i] < costarr[min_index])
 				min_index = i;
@@ -2503,7 +2502,8 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
 	}
 
 	/* Return the highest cost from the array */
-	for (max_index = i = 0; i < arrlen; i++)
+	max_index = 0;
+	for (int i = 0; i < arrlen; i++)
 	{
 		if (costarr[i] > costarr[max_index])
 			max_index = i;
@@ -2545,10 +2545,10 @@ cost_append(AppendPath *apath, PlannerInfo *root)
 			/* Compute rows and costs as sums of subplan rows and costs. */
 			foreach(l, apath->subpaths)
 			{
-				Path	   *subpath = (Path *) lfirst(l);
+				Path	   *sub = (Path *) lfirst(l);
 
-				apath->path.rows += subpath->rows;
-				apath->path.total_cost += subpath->total_cost;
+				apath->path.rows += sub->rows;
+				apath->path.total_cost += sub->total_cost;
 			}
 		}
 		else
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..8ba27a98b42 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -361,7 +361,6 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
 	if (bitjoinpaths != NIL)
 	{
 		List	   *all_path_outers;
-		ListCell   *lc;
 
 		/* Identify each distinct parameterization seen in bitjoinpaths */
 		all_path_outers = NIL;
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -305,10 +305,10 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
 				}
 				else
 				{
-					RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+					RestrictInfo *list = castNode(RestrictInfo, orarg);
 
-					Assert(!restriction_is_or_clause(rinfo));
-					sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+					Assert(!restriction_is_or_clause(list));
+					sublist = TidQualFromRestrictInfo(root, list, rel);
 				}
 
 				/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index cf9e0a74dbf..e969f2be3fe 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1994,8 +1994,6 @@ preprocess_grouping_sets(PlannerInfo *root)
 
 	if (parse->groupClause)
 	{
-		ListCell   *lc;
-
 		foreach(lc, parse->groupClause)
 		{
 			SortGroupClause *gc = lfirst_node(SortGroupClause, lc);
@@ -3458,16 +3456,16 @@ get_number_of_groups(PlannerInfo *root,
 			foreach(lc, gd->rollups)
 			{
 				RollupData *rollup = lfirst_node(RollupData, lc);
-				ListCell   *lc;
+				ListCell   *lc3;
 
 				groupExprs = get_sortgrouplist_exprs(rollup->groupClause,
 													 target_list);
 
 				rollup->numGroups = 0.0;
 
-				forboth(lc, rollup->gsets, lc2, rollup->gsets_data)
+				forboth(lc3, rollup->gsets, lc2, rollup->gsets_data)
 				{
-					List	   *gset = (List *) lfirst(lc);
+					List	   *gset = (List *) lfirst(lc3);
 					GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
 					double		numGroups = estimate_num_groups(root,
 																groupExprs,
@@ -3484,8 +3482,6 @@ get_number_of_groups(PlannerInfo *root,
 
 			if (gd->hash_sets_idx)
 			{
-				ListCell   *lc;
-
 				gd->dNumHashGroups = 0;
 
 				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
@@ -5034,11 +5030,11 @@ create_ordered_paths(PlannerInfo *root,
 		 */
 		if (enable_incremental_sort && list_length(root->sort_pathkeys) > 1)
 		{
-			ListCell   *lc;
+			ListCell   *lc2;
 
-			foreach(lc, input_rel->partial_pathlist)
+			foreach(lc2, input_rel->partial_pathlist)
 			{
-				Path	   *input_path = (Path *) lfirst(lc);
+				Path	   *input_path = (Path *) lfirst(lc2);
 				Path	   *sorted_path;
 				bool		is_sorted;
 				int			presorted_keys;
@@ -7607,7 +7603,7 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
 			AppendRelInfo **appinfos;
 			int			nappinfos;
 			List	   *child_scanjoin_targets = NIL;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			Assert(child_rel != NULL);
 
@@ -7618,9 +7614,9 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
 			/* Translate scan/join targets for this child. */
 			appinfos = find_appinfos_by_relids(root, child_rel->relids,
 											   &nappinfos);
-			foreach(lc, scanjoin_targets)
+			foreach(lc2, scanjoin_targets)
 			{
-				PathTarget *target = lfirst_node(PathTarget, lc);
+				PathTarget *target = lfirst_node(PathTarget, lc2);
 
 				target = copy_pathtarget(target);
 				target->exprs = (List *)
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2402,7 +2402,7 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 		case T_FunctionScan:
 			{
 				FunctionScan *fscan = (FunctionScan *) plan;
-				ListCell   *lc;
+				ListCell   *lc;
 
 				/*
 				 * Call finalize_primnode independently on each function
@@ -2510,7 +2510,7 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 		case T_CustomScan:
 			{
 				CustomScan *cscan = (CustomScan *) plan;
-				ListCell   *lc;
+				ListCell   *lc;
 
 				finalize_primnode((Node *) cscan->custom_exprs,
 								  &context);
@@ -2554,8 +2554,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 		case T_Append:
 			{
-				ListCell   *l;
-
 				foreach(l, ((Append *) plan)->appendplans)
 				{
 					context.paramids =
@@ -2571,8 +2569,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 		case T_MergeAppend:
 			{
-				ListCell   *l;
-
 				foreach(l, ((MergeAppend *) plan)->mergeplans)
 				{
 					context.paramids =
@@ -2588,8 +2584,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 		case T_BitmapAnd:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapAnd *) plan)->bitmapplans)
 				{
 					context.paramids =
@@ -2605,8 +2599,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 		case T_BitmapOr:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapOr *) plan)->bitmapplans)
 				{
 					context.paramids =
@@ -2622,8 +2614,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 		case T_NestLoop:
 			{
-				ListCell   *l;
-
 				finalize_primnode((Node *) ((Join *) plan)->joinqual,
 								  &context);
 				/* collect set of params that will be passed to right child */
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -653,15 +653,14 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 	if (partial_paths_valid)
 	{
 		Path	   *ppath;
-		ListCell   *lc;
 		int			parallel_workers = 0;
 
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
-			Path	   *path = lfirst(lc);
+			Path	   *partial_path = lfirst(lc);
 
-			parallel_workers = Max(parallel_workers, path->parallel_workers);
+			parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -437,16 +437,16 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 		{
 			Var		   *var = (Var *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_member(var->varno, root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(var, nlp->paramval));
@@ -454,7 +454,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
@@ -467,7 +467,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 		{
 			PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
@@ -475,9 +475,9 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(phv, nlp->paramval));
@@ -485,7 +485,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -539,11 +539,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
 				!fc->func_variadic &&
 				coldeflist == NIL)
 			{
-				ListCell   *lc;
+				ListCell   *lc2;
 
-				foreach(lc, fc->args)
+				foreach(lc2, fc->args)
 				{
-					Node	   *arg = (Node *) lfirst(lc);
+					Node	   *arg = (Node *) lfirst(lc2);
 					FuncCall   *newfc;
 
 					last_srf = pstate->p_last_srf;
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1265,7 +1265,6 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
 	else if (is_orclause(clause))
 	{
 		BoolExpr   *bool_expr = (BoolExpr *) clause;
-		ListCell   *lc;
 
 		/* start with no expression (we'll use the first match) */
 		*expr = NULL;
@@ -1693,7 +1692,6 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 				{
 					int			idx;
 					Node	   *expr;
-					int			k;
 					AttrNumber	unique_attnum = InvalidAttrNumber;
 					AttrNumber	attnum;
 
@@ -1741,15 +1739,15 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 					expr = (Node *) list_nth(stat->exprs, idx);
 
 					/* try to find the expression in the unique list */
-					for (k = 0; k < unique_exprs_cnt; k++)
+					for (int m = 0; m < unique_exprs_cnt; m++)
 					{
 						/*
 						 * found a matching unique expression, use the attnum
 						 * (derived from index of the unique expression)
 						 */
-						if (equal(unique_exprs[k], expr))
+						if (equal(unique_exprs[m], expr))
 						{
-							unique_attnum = -(k + 1) + attnum_offset;
+							unique_attnum = -(m + 1) + attnum_offset;
 							break;
 						}
 					}
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 5410a68bc91..91b9635dc0a 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1604,7 +1604,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 					 Bitmapset *keys, List *exprs,
 					 MCVList *mcvlist, bool is_or)
 {
-	int			i;
 	ListCell   *l;
 	bool	   *matches;
 
@@ -1659,7 +1658,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match = true;
 				MCVItem    *item = &mcvlist->items[i];
@@ -1766,7 +1765,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				int			j;
 				bool		match = !expr->useOr;
@@ -1837,7 +1836,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match = false;	/* assume mismatch */
 				MCVItem    *item = &mcvlist->items[i];
@@ -1862,7 +1861,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 		{
 			/* AND/OR clause, with all subclauses being compatible */
 
-			int			i;
 			BoolExpr   *bool_clause = ((BoolExpr *) clause);
 			List	   *bool_clauses = bool_clause->args;
 
@@ -1881,7 +1879,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * current one. We need to consider if we're evaluating AND or OR
 			 * condition when merging the results.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
 
 			pfree(bool_matches);
@@ -1890,7 +1888,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 		{
 			/* NOT clause, with all subclauses compatible */
 
-			int			i;
 			BoolExpr   *not_clause = ((BoolExpr *) clause);
 			List	   *not_args = not_clause->args;
 
@@ -1909,7 +1906,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * current one. We're handling a NOT clause, so invert the result
 			 * before merging it into the global bitmap.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
 
 			pfree(not_matches);
@@ -1930,7 +1927,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				MCVItem    *item = &mcvlist->items[i];
 				bool		match = false;
@@ -1956,7 +1953,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match;
 				MCVItem    *item = &mcvlist->items[i];
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 7a1202c6096..49d3b8c9dd0 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
 DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 {
 	int			i;
-	int			j;
 	int			n = 0;
 	SMgrRelation *rels;
 	BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	 */
 	for (i = 0; i < n && cached; i++)
 	{
-		for (j = 0; j <= MAX_FORKNUM; j++)
+		for (int j = 0; j <= MAX_FORKNUM; j++)
 		{
 			/* Get the number of blocks for a relation's fork. */
 			block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	{
 		for (i = 0; i < n; i++)
 		{
-			for (j = 0; j <= MAX_FORKNUM; j++)
+			for (int j = 0; j <= MAX_FORKNUM; j++)
 			{
 				/* ignore relation forks that doesn't exist */
 				if (!BlockNumberIsValid(block[i][j]))
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6b0a8652622..ba9a568389f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1087,6 +1087,23 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 	CommandCounterIncrement();
 }
 
+static ObjectAddress
+TryExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+					ParamListInfo params, QueryCompletion *qc)
+{
+	ObjectAddress address;
+	PG_TRY();
+	{
+		address = ExecRefreshMatView(stmt, queryString, params, qc);
+	}
+	PG_FINALLY();
+	{
+		EventTriggerUndoInhibitCommandCollection();
+	}
+	PG_END_TRY();
+	return address;
+}
+
 /*
  * The "Slow" variant of ProcessUtility should only receive statements
  * supported by the event triggers facility.  Therefore, we always
@@ -1678,16 +1695,10 @@ ProcessUtilitySlow(ParseState *pstate,
 				 * command itself is queued, which is enough.
 				 */
 				EventTriggerInhibitCommandCollection();
-				PG_TRY();
-				{
-					address = ExecRefreshMatView((RefreshMatViewStmt *) parsetree,
-												 queryString, params, qc);
-				}
-				PG_FINALLY();
-				{
-					EventTriggerUndoInhibitCommandCollection();
-				}
-				PG_END_TRY();
+
+				address = TryExecRefreshMatView((RefreshMatViewStmt *) parsetree,
+											 queryString, params, qc);
+
 				break;
 
 			case T_CreateTrigStmt:
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 3026cc24311..2e67a90e516 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -193,16 +193,16 @@ varstr_levenshtein(const char *source, int slen,
 	 */
 	if (m != slen || n != tlen)
 	{
-		int			i;
+		int			k;
 		const char *cp = source;
 
 		s_char_len = (int *) palloc((m + 1) * sizeof(int));
-		for (i = 0; i < m; ++i)
+		for (k = 0; k < m; ++k)
 		{
-			s_char_len[i] = pg_mblen(cp);
-			cp += s_char_len[i];
+			s_char_len[k] = pg_mblen(cp);
+			cp += s_char_len[k];
 		}
-		s_char_len[i] = 0;
+		s_char_len[k] = 0;
 	}
 
 	/* One more cell for initialization column and row. */
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..3f5683f70b5 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1303,7 +1303,6 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 	if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
@@ -1500,7 +1499,6 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 		{
 			Node	   *node;
 			Datum		predDatum;
-			bool		isnull;
 			char	   *predString;
 
 			/* Convert text string to node tree */
@@ -1648,7 +1646,6 @@ pg_get_statisticsobj_worker(Oid statextid, bool columns_only, bool missing_ok)
 	if (has_exprs)
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(STATEXTOID, statexttup,
@@ -1944,7 +1941,6 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
 	if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2c689157329..c0d09edf9d0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11576,7 +11576,6 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 	char	  **configitems = NULL;
 	int			nconfigitems = 0;
 	const char *keyword;
-	int			i;
 
 	/* Do nothing in data-only dump */
 	if (dopt->dataOnly)
@@ -11776,11 +11775,10 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 	if (*protrftypes)
 	{
 		Oid		   *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
-		int			i;
 
 		appendPQExpBufferStr(q, " TRANSFORM ");
 		parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
-		for (i = 0; typeids[i]; i++)
+		for (int i = 0; typeids[i]; i++)
 		{
 			if (i != 0)
 				appendPQExpBufferStr(q, ", ");
@@ -11853,7 +11851,7 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 					 finfo->dobj.name);
 	}
 
-	for (i = 0; i < nconfigitems; i++)
+	for (int i = 0; i < nconfigitems; i++)
 	{
 		/* we feel free to scribble on configitems[] here */
 		char	   *configitem = configitems[i];
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index a97b3300cb8..b666c909084 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1062,7 +1062,6 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 	int			weight_tmp;
 	int			rscale_tmp;
 	int			ri;
-	int			i;
 	long		guess;
 	long		first_have;
 	long		first_div;
@@ -1109,7 +1108,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 	 * Initialize local variables
 	 */
 	init_var(&dividend);
-	for (i = 1; i < 10; i++)
+	for (int i = 1; i < 10; i++)
 		init_var(&divisor[i]);
 
 	/*
@@ -1181,7 +1180,6 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 		{
 			if (divisor[guess].buf == NULL)
 			{
-				int			i;
 				long		sum = 0;
 
 				memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
@@ -1189,7 +1187,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 				if (divisor[guess].buf == NULL)
 					goto done;
 				divisor[guess].digits = divisor[guess].buf;
-				for (i = divisor[1].ndigits - 1; i >= 0; i--)
+				for (int i = divisor[1].ndigits - 1; i >= 0; i--)
 				{
 					sum += divisor[1].digits[i] * guess;
 					divisor[guess].digits[i] = sum % 10;
@@ -1268,7 +1266,7 @@ done:
 	if (dividend.buf != NULL)
 		digitbuf_free(dividend.buf);
 
-	for (i = 1; i < 10; i++)
+	for (int i = 1; i < 10; i++)
 	{
 		if (divisor[i].buf != NULL)
 			digitbuf_free(divisor[i].buf);
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..7e6169fc203 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1597,14 +1597,13 @@ dump_expr(PLpgSQL_expr *expr)
 void
 plpgsql_dumptree(PLpgSQL_function *func)
 {
-	int			i;
 	PLpgSQL_datum *d;
 
 	printf("\nExecution tree of successfully compiled PL/pgSQL function %s:\n",
 		   func->fn_signature);
 
 	printf("\nFunction's data area:\n");
-	for (i = 0; i < func->ndatums; i++)
+	for (int i = 0; i < func->ndatums; i++)
 	{
 		d = func->datums[i];
 
@@ -1647,13 +1646,12 @@ plpgsql_dumptree(PLpgSQL_function *func)
 			case PLPGSQL_DTYPE_ROW:
 				{
 					PLpgSQL_row *row = (PLpgSQL_row *) d;
-					int			i;
 
 					printf("ROW %-16s fields", row->refname);
-					for (i = 0; i < row->nfields; i++)
+					for (int j = 0; j < row->nfields; j++)
 					{
-						printf(" %s=var %d", row->fieldnames[i],
-							   row->varnos[i]);
+						printf(" %s=var %d", row->fieldnames[j],
+							   row->varnos[j]);
 					}
 					printf("\n");
 				}
#16David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#15)
Re: shadow variables - pg15 edition

On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzby@telsasoft.com> wrote:

Attached is a squished version.

I see there are some renaming ones snuck in there, e.g.:

- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;

This one does not seem to be in the category I mentioned:

@@ -3036,8 +3036,6 @@ XLogFileInitInternal(XLogSegNo logsegno,
TimeLineID logtli,
pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
if (pg_fsync(fd) != 0)
{
- int save_errno = errno;
-

More renaming:

+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
  */
  if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
  {
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;

More renaming:

+++ b/src/backend/commands/publicationcmds.c
@@ -106,7 +106,7 @@ parse_publication_options(ParseState *pstate,
  {
  char    *publish;
  List    *publish_list;
- ListCell   *lc;
+ ListCell   *lc2;

and again:

+++ b/src/backend/commands/tablecmds.c
@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation
parentRel, Relation partRel)
  Oid constrOid;
  ObjectAddress address,
  referenced;
- ListCell   *cell;
+ ListCell   *lc;

I've not checked the context on this one, but this does not appear to
meet the category of moving to an inner scope:

+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate,
EState *estate,
  {
  List    *onconflset;
  List    *onconflcols;
- bool found_whole_row;

Looks like you're just using the one from the wider scope. That's not
the category we're after for now.

You've also got some renaming going on in ExecInitAgg()

- phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+ phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;

I wondered about this one too:

- i = -1;
- while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
- aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ {
+ int i = -1;
+ while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+ aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ }

I had in mind that maybe we should switch those to be something more like:

for (int i = -1; (i = bms_next_member(all_grouped_cols, i)) >= 0;)

But I had second thoughts, as the "while" version has become the standard method.

(Really that code should be using bms_prev_member() and lappend_int()
so we don't have to memmove() the entire list each lcons_int() call.
(not for this patch though))

More renaming being done here:

- int i; /* Index into *ident_user */
+ int j; /* Index into *ident_user */

... in fact, there's lots of renaming, so I'll just stop looking.

Can you just send a patch that only changes the cases where you can
remove a variable declaration from an outer scope into a single inner
scope, or into multiple inner scopes when the variable can be declared
inside a for() loop? The mcv_get_match_bitmap() change is an example
of this. There's still a net reduction in lines of code, so I think
mcv_get_match_bitmap(), and any like it, are ok for this next step.
A counter example is ExecInitPartitionInfo(), where the way to do this
would be to move the found_whole_row declaration into multiple inner
scopes. That's a net increase in code lines, which I think requires
more careful thought about whether we want that or not.

David

#17Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#16)
1 attachment(s)
Re: shadow variables - pg15 edition

On Tue, Aug 23, 2022 at 01:38:40PM +1200, David Rowley wrote:

On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzby@telsasoft.com> wrote:

Attached is a squished version.

I see there's some renaming ones snuck in there. e.g:
... in fact, there's lots of renaming, so I'll just stop looking.

Actually, they didn't sneak in - what I sent are the patches which are ready to
be reviewed, excluding the set of "this" and "tmp" and other renames which you
disliked. In the branch (not the squished patch) the first ~15 patches were
mostly for C99 for loops - I presented them this way deliberately, so you could
review and comment on whatever you're able to bite off, or run with whatever
parts you think are ready. I rewrote it now to be more bite-sized by
truncating off the second half of the patches.

Can you just send a patch that only changes the cases where you can
remove a variable declaration from an outer scope into a single inner
scope, or multiple inner scope when the variable can be declared
inside a for() loop?

would be to move the found_whole_row declaration into multiple inner
scopes. That's a net increase in code lines, which I think
requires more careful thought about whether we want that or not.

IMO it doesn't make sense to declare multiple integers for something like this
when they're all ignored. Nor for "save_errno" nor the third, similar case,
for the reason in the commit message.

--
Justin

Attachments:

v2-truncated.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -353,45 +353,44 @@ brinbeginscan(Relation r, int nkeys, int norderbys)
 int64
 bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 {
 	Relation	idxRel = scan->indexRelation;
 	Buffer		buf = InvalidBuffer;
 	BrinDesc   *bdesc;
 	Oid			heapOid;
 	Relation	heapRel;
 	BrinOpaque *opaque;
 	BlockNumber nblocks;
 	BlockNumber heapBlk;
 	int			totalpages = 0;
 	FmgrInfo   *consistentFn;
 	MemoryContext oldcxt;
 	MemoryContext perRangeCxt;
 	BrinMemTuple *dtup;
 	BrinTuple  *btup = NULL;
 	Size		btupsz = 0;
 	ScanKey   **keys,
 			  **nullkeys;
 	int		   *nkeys,
 			   *nnullkeys;
-	int			keyno;
 	char	   *ptr;
 	Size		len;
 	char	   *tmp PG_USED_FOR_ASSERTS_ONLY;
 
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
 	pgstat_count_index_scan(idxRel);
 
 	/*
 	 * We need to know the size of the table so that we know how long to
 	 * iterate on the revmap.
 	 */
 	heapOid = IndexGetRelation(RelationGetRelid(idxRel), false);
 	heapRel = table_open(heapOid, AccessShareLock);
 	nblocks = RelationGetNumberOfBlocks(heapRel);
 	table_close(heapRel, AccessShareLock);
 
 	/*
 	 * Make room for the consistent support procedures of indexed columns.  We
 	 * don't look them up here; we do that lazily the first time we see a scan
 	 * key reference each of them.  We rely on zeroing fn_oid to InvalidOid.
 	 */
@@ -435,45 +434,45 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	nkeys = (int *) ptr;
 	ptr += MAXALIGN(sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	nnullkeys = (int *) ptr;
 	ptr += MAXALIGN(sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	for (int i = 0; i < bdesc->bd_tupdesc->natts; i++)
 	{
 		keys[i] = (ScanKey *) ptr;
 		ptr += MAXALIGN(sizeof(ScanKey) * scan->numberOfKeys);
 
 		nullkeys[i] = (ScanKey *) ptr;
 		ptr += MAXALIGN(sizeof(ScanKey) * scan->numberOfKeys);
 	}
 
 	Assert(tmp + len == ptr);
 
 	/* zero the number of keys */
 	memset(nkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
 	memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
 
 	/* Preprocess the scan keys - split them into per-attribute arrays. */
-	for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+	for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
 	{
 		ScanKey		key = &scan->keyData[keyno];
 		AttrNumber	keyattno = key->sk_attno;
 
 		/*
 		 * The collation of the scan key must match the collation used in the
 		 * index column (but only if the search is not IS NULL/ IS NOT NULL).
 		 * Otherwise we shouldn't be using this index ...
 		 */
 		Assert((key->sk_flags & SK_ISNULL) ||
 			   (key->sk_collation ==
 				TupleDescAttr(bdesc->bd_tupdesc,
 							  keyattno - 1)->attcollation));
 
 		/*
 		 * First time we see this index attribute, so init as needed.
 		 *
 		 * This is a bit of an overkill - we don't know how many scan keys are
 		 * there for this attribute, so we simply allocate the largest number
 		 * possible (as if all keys were for this attribute). This may waste a
 		 * bit of memory, but we only expect small number of scan keys in
 		 * general, so this should be negligible, and repeated repalloc calls
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 10d4f17bc6f..524c1846b83 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -563,125 +563,120 @@ range_deduplicate_values(Ranges *range)
 
 	AssertCheckRanges(range, range->cmp, range->colloid);
 }
 
 
 /*
  * brin_range_serialize
  *	  Serialize the in-memory representation into a compact varlena value.
  *
  * Simply copy the header and then also the individual values, as stored
  * in the in-memory value array.
  */
 static SerializedRanges *
 brin_range_serialize(Ranges *range)
 {
 	Size		len;
 	int			nvalues;
 	SerializedRanges *serialized;
 	Oid			typid;
 	int			typlen;
 	bool		typbyval;
 
-	int			i;
 	char	   *ptr;
 
 	/* simple sanity checks */
 	Assert(range->nranges >= 0);
 	Assert(range->nsorted >= 0);
 	Assert(range->nvalues >= 0);
 	Assert(range->maxvalues > 0);
 	Assert(range->target_maxvalues > 0);
 
 	/* at this point the range should be compacted to the target size */
 	Assert(2 * range->nranges + range->nvalues <= range->target_maxvalues);
 
 	Assert(range->target_maxvalues <= range->maxvalues);
 
 	/* range boundaries are always sorted */
 	Assert(range->nvalues >= range->nsorted);
 
 	/* deduplicate values, if there's unsorted part */
 	range_deduplicate_values(range);
 
 	/* see how many Datum values we actually have */
 	nvalues = 2 * range->nranges + range->nvalues;
 
 	typid = range->typid;
 	typbyval = get_typbyval(typid);
 	typlen = get_typlen(typid);
 
 	/* header is always needed */
 	len = offsetof(SerializedRanges, data);
 
 	/*
 	 * The space needed depends on data type - for fixed-length data types
 	 * (by-value and some by-reference) it's pretty simple, just multiply
 	 * (attlen * nvalues) and we're done. For variable-length by-reference
 	 * types we need to actually walk all the values and sum the lengths.
 	 */
 	if (typlen == -1)			/* varlena */
 	{
-		int			i;
-
-		for (i = 0; i < nvalues; i++)
+		for (int i = 0; i < nvalues; i++)
 		{
 			len += VARSIZE_ANY(range->values[i]);
 		}
 	}
 	else if (typlen == -2)		/* cstring */
 	{
-		int			i;
-
-		for (i = 0; i < nvalues; i++)
+		for (int i = 0; i < nvalues; i++)
 		{
 			/* don't forget to include the null terminator ;-) */
 			len += strlen(DatumGetCString(range->values[i])) + 1;
 		}
 	}
 	else						/* fixed-length types (even by-reference) */
 	{
 		Assert(typlen > 0);
 		len += nvalues * typlen;
 	}
 
 	/*
 	 * Allocate the serialized object, copy the basic information. The
 	 * serialized object is a varlena, so update the header.
 	 */
 	serialized = (SerializedRanges *) palloc0(len);
 	SET_VARSIZE(serialized, len);
 
 	serialized->typid = typid;
 	serialized->nranges = range->nranges;
 	serialized->nvalues = range->nvalues;
 	serialized->maxvalues = range->target_maxvalues;
 
 	/*
 	 * And now copy also the boundary values (like the length calculation this
 	 * depends on the particular data type).
 	 */
 	ptr = serialized->data;		/* start of the serialized data */
 
-	for (i = 0; i < nvalues; i++)
+	for (int i = 0; i < nvalues; i++)
 	{
 		if (typbyval)			/* simple by-value data types */
 		{
 			Datum		tmp;
 
 			/*
 			 * For byval types, we need to copy just the significant bytes -
 			 * we can't use memcpy directly, as that assumes little-endian
 			 * behavior.  store_att_byval does almost what we need, but it
 			 * requires a properly aligned buffer - the output buffer does not
 			 * guarantee that. So we simply use a local Datum variable (which
 			 * guarantees proper alignment), and then copy the value from it.
 			 */
 			store_att_byval(&tmp, range->values[i], typlen);
 
 			memcpy(ptr, &tmp, typlen);
 			ptr += typlen;
 		}
 		else if (typlen > 0)	/* fixed-length by-ref types */
 		{
 			memcpy(ptr, DatumGetPointer(range->values[i]), typlen);
 			ptr += typlen;
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 5866c6aaaf7..30069f139c7 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -215,45 +215,44 @@ gistinsert(Relation r, Datum *values, bool *isnull,
  *
  * If 'newblkno' is not NULL, returns the block number of page the first
  * new/updated tuple was inserted to. Usually it's the given page, but could
  * be its right sibling if the page was split.
  *
  * Returns 'true' if the page was split, 'false' otherwise.
  */
 bool
 gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 				Buffer buffer,
 				IndexTuple *itup, int ntup, OffsetNumber oldoffnum,
 				BlockNumber *newblkno,
 				Buffer leftchildbuf,
 				List **splitinfo,
 				bool markfollowright,
 				Relation heapRel,
 				bool is_build)
 {
 	BlockNumber blkno = BufferGetBlockNumber(buffer);
 	Page		page = BufferGetPage(buffer);
 	bool		is_leaf = (GistPageIsLeaf(page)) ? true : false;
 	XLogRecPtr	recptr;
-	int			i;
 	bool		is_split;
 
 	/*
 	 * Refuse to modify a page that's incompletely split. This should not
 	 * happen because we finish any incomplete splits while we walk down the
 	 * tree. However, it's remotely possible that another concurrent inserter
 	 * splits a parent page, and errors out before completing the split. We
 	 * will just throw an error in that case, and leave any split we had in
 	 * progress unfinished too. The next insert that comes along will clean up
 	 * the mess.
 	 */
 	if (GistFollowRight(page))
 		elog(ERROR, "concurrent GiST page split was incomplete");
 
 	/* should never try to insert to a deleted page */
 	Assert(!GistPageIsDeleted(page));
 
 	*splitinfo = NIL;
 
 	/*
 	 * if isupdate, remove old key: This node's key has been modified, either
 	 * because a child split occurred or because we needed to adjust our key
@@ -401,45 +400,45 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 		}
 		else
 		{
 			/* Prepare split-info to be returned to caller */
 			for (ptr = dist; ptr; ptr = ptr->next)
 			{
 				GISTPageSplitInfo *si = palloc(sizeof(GISTPageSplitInfo));
 
 				si->buf = ptr->buffer;
 				si->downlink = ptr->itup;
 				*splitinfo = lappend(*splitinfo, si);
 			}
 		}
 
 		/*
 		 * Fill all pages. All the pages are new, ie. freshly allocated empty
 		 * pages, or a temporary copy of the old page.
 		 */
 		for (ptr = dist; ptr; ptr = ptr->next)
 		{
 			char	   *data = (char *) (ptr->list);
 
-			for (i = 0; i < ptr->block.num; i++)
+			for (int i = 0; i < ptr->block.num; i++)
 			{
 				IndexTuple	thistup = (IndexTuple) data;
 
 				if (PageAddItem(ptr->page, (Item) data, IndexTupleSize(thistup), i + FirstOffsetNumber, false, false) == InvalidOffsetNumber)
 					elog(ERROR, "failed to add item to index page in \"%s\"", RelationGetRelationName(rel));
 
 				/*
 				 * If this is the first inserted/updated tuple, let the caller
 				 * know which page it landed on.
 				 */
 				if (newblkno && ItemPointerEquals(&thistup->t_tid, &(*itup)->t_tid))
 					*newblkno = ptr->block.blkno;
 
 				data += IndexTupleSize(thistup);
 			}
 
 			/* Set up rightlinks */
 			if (ptr->next && ptr->block.blkno != GIST_ROOT_BLKNO)
 				GistPageGetOpaque(ptr->page)->rightlink =
 					ptr->next->block.blkno;
 			else
 				GistPageGetOpaque(ptr->page)->rightlink = oldrlink;
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1183,45 +1183,44 @@ CopyFrom(CopyFromState cstate)
  * 'attnamelist': List of char *, columns to include. NIL selects all cols.
  * 'options': List of DefElem. See copy_opt_item in gram.y for selections.
  *
  * Returns a CopyFromState, to be passed to NextCopyFrom and related functions.
  */
 CopyFromState
 BeginCopyFrom(ParseState *pstate,
 			  Relation rel,
 			  Node *whereClause,
 			  const char *filename,
 			  bool is_program,
 			  copy_data_source_cb data_source_cb,
 			  List *attnamelist,
 			  List *options)
 {
 	CopyFromState cstate;
 	bool		pipe = (filename == NULL);
 	TupleDesc	tupDesc;
 	AttrNumber	num_phys_attrs,
 				num_defaults;
 	FmgrInfo   *in_functions;
 	Oid		   *typioparams;
-	int			attnum;
 	Oid			in_func_oid;
 	int		   *defmap;
 	ExprState **defexprs;
 	MemoryContext oldcontext;
 	bool		volatile_defexprs;
 	const int	progress_cols[] = {
 		PROGRESS_COPY_COMMAND,
 		PROGRESS_COPY_TYPE,
 		PROGRESS_COPY_BYTES_TOTAL
 	};
 	int64		progress_vals[] = {
 		PROGRESS_COPY_COMMAND_FROM,
 		0,
 		0
 	};
 
 	/* Allocate workspace and zero all fields */
 	cstate = (CopyFromStateData *) palloc0(sizeof(CopyFromStateData));
 
 	/*
 	 * We allocate everything used by a cstate in a new memory context. This
 	 * avoids memory leaks during repeated use of COPY in a query.
@@ -1382,45 +1381,45 @@ BeginCopyFrom(ParseState *pstate,
 	initStringInfo(&cstate->attribute_buf);
 
 	/* Assign range table, we'll need it in CopyFrom. */
 	if (pstate)
 		cstate->range_table = pstate->p_rtable;
 
 	tupDesc = RelationGetDescr(cstate->rel);
 	num_phys_attrs = tupDesc->natts;
 	num_defaults = 0;
 	volatile_defexprs = false;
 
 	/*
 	 * Pick up the required catalog information for each attribute in the
 	 * relation, including the input function, the element type (to pass to
 	 * the input function), and info about defaults and constraints. (Which
 	 * input function we use depends on text/binary format choice.)
 	 */
 	in_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));
 	typioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));
 	defmap = (int *) palloc(num_phys_attrs * sizeof(int));
 	defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
 
-	for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+	for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
 	{
 		Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
 
 		/* We don't need info for dropped attributes */
 		if (att->attisdropped)
 			continue;
 
 		/* Fetch the input function and typioparam info */
 		if (cstate->opts.binary)
 			getTypeBinaryInputInfo(att->atttypid,
 								   &in_func_oid, &typioparams[attnum - 1]);
 		else
 			getTypeInputInfo(att->atttypid,
 							 &in_func_oid, &typioparams[attnum - 1]);
 		fmgr_info(in_func_oid, &in_functions[attnum - 1]);
 
 		/* Get default info if needed */
 		if (!list_member_int(cstate->attnumlist, attnum) && !att->attgenerated)
 		{
 			/* attribute is NOT to be copied from input */
 			/* use default value if one exists */
 			Expr	   *defexpr = (Expr *) build_column_default(cstate->rel,
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 667f2a4cd16..3c6e09815e0 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -546,45 +546,44 @@ DefineIndex(Oid relationId,
 	Form_pg_am	accessMethodForm;
 	IndexAmRoutine *amRoutine;
 	bool		amcanorder;
 	amoptions_function amoptions;
 	bool		partitioned;
 	bool		safe_index;
 	Datum		reloptions;
 	int16	   *coloptions;
 	IndexInfo  *indexInfo;
 	bits16		flags;
 	bits16		constr_flags;
 	int			numberOfAttributes;
 	int			numberOfKeyAttributes;
 	TransactionId limitXmin;
 	ObjectAddress address;
 	LockRelId	heaprelid;
 	LOCKTAG		heaplocktag;
 	LOCKMODE	lockmode;
 	Snapshot	snapshot;
 	Oid			root_save_userid;
 	int			root_save_sec_context;
 	int			root_save_nestlevel;
-	int			i;
 
 	root_save_nestlevel = NewGUCNestLevel();
 
 	/*
 	 * Some callers need us to run with an empty default_tablespace; this is a
 	 * necessary hack to be able to reproduce catalog state accurately when
 	 * recreating indexes after table-rewriting ALTER TABLE.
 	 */
 	if (stmt->reset_default_tblspc)
 		(void) set_config_option("default_tablespace", "",
 								 PGC_USERSET, PGC_S_SESSION,
 								 GUC_ACTION_SAVE, true, 0, false);
 
 	/*
 	 * Force non-concurrent build on temporary relations, even if CONCURRENTLY
 	 * was requested.  Other backends can't access a temporary relation, so
 	 * there's no harm in grabbing a stronger lock, and a non-concurrent DROP
 	 * is more efficient.  Do this before any use of the concurrent option is
 	 * done.
 	 */
 	if (stmt->concurrent && get_rel_persistence(relationId) != RELPERSISTENCE_TEMP)
 		concurrent = true;
@@ -1028,65 +1027,65 @@ DefineIndex(Oid relationId,
 
 			if (!found)
 			{
 				Form_pg_attribute att;
 
 				att = TupleDescAttr(RelationGetDescr(rel),
 									key->partattrs[i] - 1);
 				ereport(ERROR,
 						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 						 errmsg("unique constraint on partitioned table must include all partitioning columns"),
 						 errdetail("%s constraint on table \"%s\" lacks column \"%s\" which is part of the partition key.",
 								   constraint_type, RelationGetRelationName(rel),
 								   NameStr(att->attname))));
 			}
 		}
 	}
 
 
 	/*
 	 * We disallow indexes on system columns.  They would not necessarily get
 	 * updated correctly, and they don't seem useful anyway.
 	 */
-	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
 	{
 		AttrNumber	attno = indexInfo->ii_IndexAttrNumbers[i];
 
 		if (attno < 0)
 			ereport(ERROR,
 					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					 errmsg("index creation on system columns is not supported")));
 	}
 
 	/*
 	 * Also check for system columns used in expressions or predicates.
 	 */
 	if (indexInfo->ii_Expressions || indexInfo->ii_Predicate)
 	{
 		Bitmapset  *indexattrs = NULL;
 
 		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
 		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
 
-		for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
+		for (int i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
 		{
 			if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber,
 							  indexattrs))
 				ereport(ERROR,
 						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 						 errmsg("index creation on system columns is not supported")));
 		}
 	}
 
 	/* Is index safe for others to ignore?  See set_indexsafe_procflags() */
 	safe_index = indexInfo->ii_Expressions == NIL &&
 		indexInfo->ii_Predicate == NIL;
 
 	/*
 	 * Report index creation if appropriate (delay this till after most of the
 	 * error checks)
 	 */
 	if (stmt->isconstraint && !quiet)
 	{
 		const char *constraint_type;
 
 		if (stmt->primary)
@@ -1224,45 +1223,45 @@ DefineIndex(Oid relationId,
 
 			/*
 			 * We'll need an IndexInfo describing the parent index.  The one
 			 * built above is almost good enough, but not quite, because (for
 			 * example) its predicate expression if any hasn't been through
 			 * expression preprocessing.  The most reliable way to get an
 			 * IndexInfo that will match those for child indexes is to build
 			 * it the same way, using BuildIndexInfo().
 			 */
 			parentIndex = index_open(indexRelationId, lockmode);
 			indexInfo = BuildIndexInfo(parentIndex);
 
 			parentDesc = RelationGetDescr(rel);
 
 			/*
 			 * For each partition, scan all existing indexes; if one matches
 			 * our index definition and is not already attached to some other
 			 * parent index, attach it to the one we just created.
 			 *
 			 * If none matches, build a new index by calling ourselves
 			 * recursively with the same options (except for the index name).
 			 */
-			for (i = 0; i < nparts; i++)
+			for (int i = 0; i < nparts; i++)
 			{
 				Oid			childRelid = part_oids[i];
 				Relation	childrel;
 				Oid			child_save_userid;
 				int			child_save_sec_context;
 				int			child_save_nestlevel;
 				List	   *childidxs;
 				ListCell   *cell;
 				AttrMap    *attmap;
 				bool		found = false;
 
 				childrel = table_open(childRelid, lockmode);
 
 				GetUserIdAndSecContext(&child_save_userid,
 									   &child_save_sec_context);
 				SetUserIdAndSecContext(childrel->rd_rel->relowner,
 									   child_save_sec_context | SECURITY_RESTRICTED_OPERATION);
 				child_save_nestlevel = NewGUCNestLevel();
 
 				/*
 				 * Don't try to create indexes on foreign tables, though. Skip
 				 * those if a regular index, or fail if trying to create a
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1277,51 +1277,50 @@ prepare_projection_slot(AggState *aggstate, TupleTableSlot *slot, int currentSet
 		}
 	}
 }
 
 /*
  * Compute the final value of all aggregates for one group.
  *
  * This function handles only one grouping set at a time, which the caller must
  * have selected.  It's also the caller's responsibility to adjust the supplied
  * pergroup parameter to point to the current set's transvalues.
  *
  * Results are stored in the output econtext aggvalues/aggnulls.
  */
 static void
 finalize_aggregates(AggState *aggstate,
 					AggStatePerAgg peraggs,
 					AggStatePerGroup pergroup)
 {
 	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
 	Datum	   *aggvalues = econtext->ecxt_aggvalues;
 	bool	   *aggnulls = econtext->ecxt_aggnulls;
 	int			aggno;
-	int			transno;
 
 	/*
 	 * If there were any DISTINCT and/or ORDER BY aggregates, sort their
 	 * inputs and run the transition functions.
 	 */
-	for (transno = 0; transno < aggstate->numtrans; transno++)
+	for (int transno = 0; transno < aggstate->numtrans; transno++)
 	{
 		AggStatePerTrans pertrans = &aggstate->pertrans[transno];
 		AggStatePerGroup pergroupstate;
 
 		pergroupstate = &pergroup[transno];
 
 		if (pertrans->aggsortrequired)
 		{
 			Assert(aggstate->aggstrategy != AGG_HASHED &&
 				   aggstate->aggstrategy != AGG_MIXED);
 
 			if (pertrans->numInputs == 1)
 				process_ordered_aggregate_single(aggstate,
 												 pertrans,
 												 pergroupstate);
 			else
 				process_ordered_aggregate_multi(aggstate,
 												pertrans,
 												pergroupstate);
 		}
 		else if (pertrans->numDistinctCols > 0 && pertrans->haslast)
 		{
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1631,54 +1631,54 @@ interpret_ident_response(const char *ident_response,
 			while (pg_isblank(*cursor))
 				cursor++;		/* skip blanks */
 			if (strcmp(response_type, "USERID") != 0)
 				return false;
 			else
 			{
 				/*
 				 * It's a USERID response.  Good.  "cursor" should be pointing
 				 * to the colon that precedes the operating system type.
 				 */
 				if (*cursor != ':')
 					return false;
 				else
 				{
 					cursor++;	/* Go over colon */
 					/* Skip over operating system field. */
 					while (*cursor != ':' && *cursor != '\r')
 						cursor++;
 					if (*cursor != ':')
 						return false;
 					else
 					{
-						int			i;	/* Index into *ident_user */
+						int			j;	/* Index into *ident_user */
 
 						cursor++;	/* Go over colon */
 						while (pg_isblank(*cursor))
 							cursor++;	/* skip blanks */
 						/* Rest of line is user name.  Copy it over. */
-						i = 0;
+						j = 0;
-						while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
+						while (*cursor != '\r' && j < IDENT_USERNAME_MAX)
-							ident_user[i++] = *cursor++;
-						ident_user[i] = '\0';
+							ident_user[j++] = *cursor++;
+						ident_user[j] = '\0';
 						return true;
 					}
 				}
 			}
 		}
 	}
 }
 
 
 /*
  *	Talk to the ident server on "remote_addr" and find out who
  *	owns the tcp connection to "local_addr"
  *	If the username is successfully retrieved, check the usermap.
  *
  *	XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if the
  *	latch was set would improve the responsiveness to timeouts/cancellations.
  */
 static int
 ident_inet(hbaPort *port)
 {
 	const SockAddr remote_addr = port->raddr;
 	const SockAddr local_addr = port->laddr;
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..75acea149c7 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2428,101 +2428,101 @@ cost_sort(Path *path, PlannerInfo *root,
 		startup_cost += disable_cost;
 
 	startup_cost += input_cost;
 
 	path->rows = tuples;
 	path->startup_cost = startup_cost;
 	path->total_cost = startup_cost + run_cost;
 }
 
 /*
  * append_nonpartial_cost
  *	  Estimate the cost of the non-partial paths in a Parallel Append.
  *	  The non-partial paths are assumed to be the first "numpaths" paths
  *	  from the subpaths list, and to be in order of decreasing cost.
  */
 static Cost
 append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
 {
 	Cost	   *costarr;
 	int			arrlen;
 	ListCell   *l;
 	ListCell   *cell;
-	int			i;
 	int			path_index;
 	int			min_index;
 	int			max_index;
 
 	if (numpaths == 0)
 		return 0;
 
 	/*
 	 * Array length is number of workers or number of relevant paths,
 	 * whichever is less.
 	 */
 	arrlen = Min(parallel_workers, numpaths);
 	costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
 
 	/* The first few paths will each be claimed by a different worker. */
 	path_index = 0;
 	foreach(cell, subpaths)
 	{
 		Path	   *subpath = (Path *) lfirst(cell);
 
 		if (path_index == arrlen)
 			break;
 		costarr[path_index++] = subpath->total_cost;
 	}
 
 	/*
 	 * Since subpaths are sorted by decreasing cost, the last one will have
 	 * the minimum cost.
 	 */
 	min_index = arrlen - 1;
 
 	/*
 	 * For each of the remaining subpaths, add its cost to the array element
 	 * with minimum cost.
 	 */
 	for_each_cell(l, subpaths, cell)
 	{
 		Path	   *subpath = (Path *) lfirst(l);
-		int			i;
 
 		/* Consider only the non-partial paths */
 		if (path_index++ == numpaths)
 			break;
 
 		costarr[min_index] += subpath->total_cost;
 
 		/* Update the new min cost array index */
-		for (min_index = i = 0; i < arrlen; i++)
+		min_index = 0;
+		for (int i = 0; i < arrlen; i++)
 		{
 			if (costarr[i] < costarr[min_index])
 				min_index = i;
 		}
 	}
 
 	/* Return the highest cost from the array */
-	for (max_index = i = 0; i < arrlen; i++)
+	max_index = 0;
+	for (int i = 0; i < arrlen; i++)
 	{
 		if (costarr[i] > costarr[max_index])
 			max_index = i;
 	}
 
 	return costarr[max_index];
 }
 
 /*
  * cost_append
  *	  Determines and returns the cost of an Append node.
  */
 void
 cost_append(AppendPath *apath, PlannerInfo *root)
 {
 	ListCell   *l;
 
 	apath->path.startup_cost = 0;
 	apath->path.total_cost = 0;
 	apath->path.rows = 0;
 
 	if (apath->subpaths == NIL)
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 5410a68bc91..91b9635dc0a 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1585,45 +1585,44 @@ mcv_match_expression(Node *expr, Bitmapset *keys, List *exprs, Oid *collid)
  *	Evaluate clauses using the MCV list, and update the match bitmap.
  *
  * A match bitmap keeps match/mismatch status for each MCV item, and we
  * update it based on additional clauses. We also use it to skip items
  * that can't possibly match (e.g. item marked as "mismatch" can't change
  * to "match" when evaluating AND clause list).
  *
  * The function also returns a flag indicating whether there was an
  * equality condition for all attributes, the minimum frequency in the MCV
  * list, and a total MCV frequency (sum of frequencies for all items).
  *
  * XXX Currently the match bitmap uses a bool for each MCV item, which is
  * somewhat wasteful as we could do with just a single bit, thus reducing
  * the size to ~1/8. It would also allow us to combine bitmaps simply using
  * & and |, which should be faster than min/max. The bitmaps are fairly
  * small, though (thanks to the cap on the MCV list size).
  */
 static bool *
 mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 					 Bitmapset *keys, List *exprs,
 					 MCVList *mcvlist, bool is_or)
 {
-	int			i;
 	ListCell   *l;
 	bool	   *matches;
 
 	/* The bitmap may be partially built. */
 	Assert(clauses != NIL);
 	Assert(mcvlist != NULL);
 	Assert(mcvlist->nitems > 0);
 	Assert(mcvlist->nitems <= STATS_MCVLIST_MAX_ITEMS);
 
 	matches = palloc(sizeof(bool) * mcvlist->nitems);
 	memset(matches, !is_or, sizeof(bool) * mcvlist->nitems);
 
 	/*
 	 * Loop through the list of clauses, and for each of them evaluate all the
 	 * MCV items not yet eliminated by the preceding clauses.
 	 */
 	foreach(l, clauses)
 	{
 		Node	   *clause = (Node *) lfirst(l);
 
 		/* if it's a RestrictInfo, then extract the clause */
 		if (IsA(clause, RestrictInfo))
@@ -1640,45 +1639,45 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 
 			/* valid only after examine_opclause_args returns true */
 			Node	   *clause_expr;
 			Const	   *cst;
 			bool		expronleft;
 			int			idx;
 			Oid			collid;
 
 			fmgr_info(get_opcode(expr->opno), &opproc);
 
 			/* extract the var/expr and const from the expression */
 			if (!examine_opclause_args(expr->args, &clause_expr, &cst, &expronleft))
 				elog(ERROR, "incompatible clause");
 
 			/* match the attribute/expression to a dimension of the statistic */
 			idx = mcv_match_expression(clause_expr, keys, exprs, &collid);
 
 			/*
 			 * Walk through the MCV items and evaluate the current clause. We
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match = true;
 				MCVItem    *item = &mcvlist->items[i];
 
 				Assert(idx >= 0);
 
 				/*
 				 * When the MCV item or the Const value is NULL we can treat
 				 * this as a mismatch. We must not call the operator because
 				 * of strictness.
 				 */
 				if (item->isnull[idx] || cst->constisnull)
 				{
 					matches[i] = RESULT_MERGE(matches[i], is_or, false);
 					continue;
 				}
 
 				/*
 				 * Skip MCV items that can't change result in the bitmap. Once
 				 * the value gets false for AND-lists, or true for OR-lists,
 				 * we don't need to look at more clauses.
 				 */
@@ -1747,45 +1746,45 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * Deconstruct the array constant, unless it's NULL (we'll cover
 			 * that case below)
 			 */
 			if (!cst->constisnull)
 			{
 				arrayval = DatumGetArrayTypeP(cst->constvalue);
 				get_typlenbyvalalign(ARR_ELEMTYPE(arrayval),
 									 &elmlen, &elmbyval, &elmalign);
 				deconstruct_array(arrayval,
 								  ARR_ELEMTYPE(arrayval),
 								  elmlen, elmbyval, elmalign,
 								  &elem_values, &elem_nulls, &num_elems);
 			}
 
 			/* match the attribute/expression to a dimension of the statistic */
 			idx = mcv_match_expression(clause_expr, keys, exprs, &collid);
 
 			/*
 			 * Walk through the MCV items and evaluate the current clause. We
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				int			j;
 				bool		match = !expr->useOr;
 				MCVItem    *item = &mcvlist->items[i];
 
 				/*
 				 * When the MCV item or the Const value is NULL we can treat
 				 * this as a mismatch. We must not call the operator because
 				 * of strictness.
 				 */
 				if (item->isnull[idx] || cst->constisnull)
 				{
 					matches[i] = RESULT_MERGE(matches[i], is_or, false);
 					continue;
 				}
 
 				/*
 				 * Skip MCV items that can't change result in the bitmap. Once
 				 * the value gets false for AND-lists, or true for OR-lists,
 				 * we don't need to look at more clauses.
 				 */
 				if (RESULT_IS_FINAL(matches[i], is_or))
@@ -1818,164 +1817,162 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 																elem_value));
 
 					match = RESULT_MERGE(match, expr->useOr, elem_match);
 				}
 
 				/* update the match bitmap with the result */
 				matches[i] = RESULT_MERGE(matches[i], is_or, match);
 			}
 		}
 		else if (IsA(clause, NullTest))
 		{
 			NullTest   *expr = (NullTest *) clause;
 			Node	   *clause_expr = (Node *) (expr->arg);
 
 			/* match the attribute/expression to a dimension of the statistic */
 			int			idx = mcv_match_expression(clause_expr, keys, exprs, NULL);
 
 			/*
 			 * Walk through the MCV items and evaluate the current clause. We
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match = false;	/* assume mismatch */
 				MCVItem    *item = &mcvlist->items[i];
 
 				/* if the clause mismatches the MCV item, update the bitmap */
 				switch (expr->nulltesttype)
 				{
 					case IS_NULL:
 						match = (item->isnull[idx]) ? true : match;
 						break;
 
 					case IS_NOT_NULL:
 						match = (!item->isnull[idx]) ? true : match;
 						break;
 				}
 
 				/* now, update the match bitmap, depending on OR/AND type */
 				matches[i] = RESULT_MERGE(matches[i], is_or, match);
 			}
 		}
 		else if (is_orclause(clause) || is_andclause(clause))
 		{
 			/* AND/OR clause, with all subclauses being compatible */
 
-			int			i;
 			BoolExpr   *bool_clause = ((BoolExpr *) clause);
 			List	   *bool_clauses = bool_clause->args;
 
 			/* match/mismatch bitmap for each MCV item */
 			bool	   *bool_matches = NULL;
 
 			Assert(bool_clauses != NIL);
 			Assert(list_length(bool_clauses) >= 2);
 
 			/* build the match bitmap for the OR-clauses */
 			bool_matches = mcv_get_match_bitmap(root, bool_clauses, keys, exprs,
 												mcvlist, is_orclause(clause));
 
 			/*
 			 * Merge the bitmap produced by mcv_get_match_bitmap into the
 			 * current one. We need to consider if we're evaluating AND or OR
 			 * condition when merging the results.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
 
 			pfree(bool_matches);
 		}
 		else if (is_notclause(clause))
 		{
 			/* NOT clause, with all subclauses compatible */
 
-			int			i;
 			BoolExpr   *not_clause = ((BoolExpr *) clause);
 			List	   *not_args = not_clause->args;
 
 			/* match/mismatch bitmap for each MCV item */
 			bool	   *not_matches = NULL;
 
 			Assert(not_args != NIL);
 			Assert(list_length(not_args) == 1);
 
 			/* build the match bitmap for the NOT-clause */
 			not_matches = mcv_get_match_bitmap(root, not_args, keys, exprs,
 											   mcvlist, false);
 
 			/*
 			 * Merge the bitmap produced by mcv_get_match_bitmap into the
 			 * current one. We're handling a NOT clause, so invert the result
 			 * before merging it into the global bitmap.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
 
 			pfree(not_matches);
 		}
 		else if (IsA(clause, Var))
 		{
 			/* Var (has to be a boolean Var, possibly from below NOT) */
 
 			Var		   *var = (Var *) (clause);
 
 			/* match the attribute to a dimension of the statistic */
 			int			idx = bms_member_index(keys, var->varattno);
 
 			Assert(var->vartype == BOOLOID);
 
 			/*
 			 * Walk through the MCV items and evaluate the current clause. We
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				MCVItem    *item = &mcvlist->items[i];
 				bool		match = false;
 
 				/* if the item is NULL, it's a mismatch */
 				if (!item->isnull[idx] && DatumGetBool(item->values[idx]))
 					match = true;
 
 				/* update the result bitmap */
 				matches[i] = RESULT_MERGE(matches[i], is_or, match);
 			}
 		}
 		else
 		{
 			/* Otherwise, it must be a bare boolean-returning expression */
 			int			idx;
 
 			/* match the expression to a dimension of the statistic */
 			idx = mcv_match_expression(clause, keys, exprs, NULL);
 
 			/*
 			 * Walk through the MCV items and evaluate the current clause. We
 			 * can skip items that were already ruled out, and terminate if
 			 * there are no remaining MCV items that might possibly match.
 			 */
-			for (i = 0; i < mcvlist->nitems; i++)
+			for (int i = 0; i < mcvlist->nitems; i++)
 			{
 				bool		match;
 				MCVItem    *item = &mcvlist->items[i];
 
 				/* "match" just means it's bool TRUE */
 				match = !item->isnull[idx] && DatumGetBool(item->values[idx]);
 
 				/* now, update the match bitmap, depending on OR/AND type */
 				matches[i] = RESULT_MERGE(matches[i], is_or, match);
 			}
 		}
 	}
 
 	return matches;
 }
 
 
 /*
  * mcv_combine_selectivities
  * 		Combine per-column and multi-column MCV selectivity estimates.
  *
  * simple_sel is a "simple" selectivity estimate (produced without using any
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 7a1202c6096..49d3b8c9dd0 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3164,45 +3164,44 @@ DropRelationBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,
 			{
 				InvalidateBuffer(bufHdr);	/* releases spinlock */
 				break;
 			}
 		}
 		if (j >= nforks)
 			UnlockBufHdr(bufHdr, buf_state);
 	}
 }
 
 /* ---------------------------------------------------------------------
  *		DropRelationsAllBuffers
  *
  *		This function removes from the buffer pool all the pages of all
  *		forks of the specified relations.  It's equivalent to calling
  *		DropRelationBuffers once per fork per relation with firstDelBlock = 0.
  *		--------------------------------------------------------------------
  */
 void
 DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 {
 	int			i;
-	int			j;
 	int			n = 0;
 	SMgrRelation *rels;
 	BlockNumber (*block)[MAX_FORKNUM + 1];
 	uint64		nBlocksToInvalidate = 0;
 	RelFileLocator *locators;
 	bool		cached = true;
 	bool		use_bsearch;
 
 	if (nlocators == 0)
 		return;
 
 	rels = palloc(sizeof(SMgrRelation) * nlocators);	/* non-local relations */
 
 	/* If it's a local relation, it's localbuf.c's problem. */
 	for (i = 0; i < nlocators; i++)
 	{
 		if (RelFileLocatorBackendIsTemp(smgr_reln[i]->smgr_rlocator))
 		{
 			if (smgr_reln[i]->smgr_rlocator.backend == MyBackendId)
 				DropRelationAllLocalBuffers(smgr_reln[i]->smgr_rlocator.locator);
 		}
 		else
@@ -3213,72 +3212,72 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
 	 * If there are no non-local relations, then we're done. Release the
 	 * memory and return.
 	 */
 	if (n == 0)
 	{
 		pfree(rels);
 		return;
 	}
 
 	/*
 	 * This is used to remember the number of blocks for all the relations
 	 * forks.
 	 */
 	block = (BlockNumber (*)[MAX_FORKNUM + 1])
 		palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));
 
 	/*
 	 * We can avoid scanning the entire buffer pool if we know the exact size
 	 * of each of the given relation forks. See DropRelationBuffers.
 	 */
 	for (i = 0; i < n && cached; i++)
 	{
-		for (j = 0; j <= MAX_FORKNUM; j++)
+		for (int j = 0; j <= MAX_FORKNUM; j++)
 		{
 			/* Get the number of blocks for a relation's fork. */
 			block[i][j] = smgrnblocks_cached(rels[i], j);
 
 			/* We need to only consider the relation forks that exists. */
 			if (block[i][j] == InvalidBlockNumber)
 			{
 				if (!smgrexists(rels[i], j))
 					continue;
 				cached = false;
 				break;
 			}
 
 			/* calculate the total number of blocks to be invalidated */
 			nBlocksToInvalidate += block[i][j];
 		}
 	}
 
 	/*
 	 * We apply the optimization iff the total number of blocks to invalidate
 	 * is below the BUF_DROP_FULL_SCAN_THRESHOLD.
 	 */
 	if (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
 	{
 		for (i = 0; i < n; i++)
 		{
-			for (j = 0; j <= MAX_FORKNUM; j++)
+			for (int j = 0; j <= MAX_FORKNUM; j++)
 			{
 				/* ignore relation forks that doesn't exist */
 				if (!BlockNumberIsValid(block[i][j]))
 					continue;
 
 				/* drop all the buffers for a particular relation fork */
 				FindAndDropRelationBuffers(rels[i]->smgr_rlocator.locator,
 										   j, block[i][j], 0);
 			}
 		}
 
 		pfree(block);
 		pfree(rels);
 		return;
 	}
 
 	pfree(block);
 	locators = palloc(sizeof(RelFileLocator) * n);	/* non-local relations */
 	for (i = 0; i < n; i++)
 		locators[i] = rels[i]->smgr_rlocator.locator;
 
 	/*
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2c689157329..c0d09edf9d0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11557,45 +11557,44 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 	char	   *proretset;
 	char	   *prosrc;
 	char	   *probin;
 	char	   *prosqlbody;
 	char	   *funcargs;
 	char	   *funciargs;
 	char	   *funcresult;
 	char	   *protrftypes;
 	char	   *prokind;
 	char	   *provolatile;
 	char	   *proisstrict;
 	char	   *prosecdef;
 	char	   *proleakproof;
 	char	   *proconfig;
 	char	   *procost;
 	char	   *prorows;
 	char	   *prosupport;
 	char	   *proparallel;
 	char	   *lanname;
 	char	  **configitems = NULL;
 	int			nconfigitems = 0;
 	const char *keyword;
-	int			i;
 
 	/* Do nothing in data-only dump */
 	if (dopt->dataOnly)
 		return;
 
 	query = createPQExpBuffer();
 	q = createPQExpBuffer();
 	delqry = createPQExpBuffer();
 	asPart = createPQExpBuffer();
 
 	if (!fout->is_prepared[PREPQUERY_DUMPFUNC])
 	{
 		/* Set up query for function-specific details */
 		appendPQExpBufferStr(query,
 							 "PREPARE dumpFunc(pg_catalog.oid) AS\n");
 
 		appendPQExpBufferStr(query,
 							 "SELECT\n"
 							 "proretset,\n"
 							 "prosrc,\n"
 							 "probin,\n"
 							 "provolatile,\n"
@@ -11757,49 +11756,48 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 
 	appendPQExpBuffer(q, "CREATE %s %s.%s",
 					  keyword,
 					  fmtId(finfo->dobj.namespace->dobj.name),
 					  funcfullsig ? funcfullsig :
 					  funcsig);
 
 	if (prokind[0] == PROKIND_PROCEDURE)
 		 /* no result type to output */ ;
 	else if (funcresult)
 		appendPQExpBuffer(q, " RETURNS %s", funcresult);
 	else
 		appendPQExpBuffer(q, " RETURNS %s%s",
 						  (proretset[0] == 't') ? "SETOF " : "",
 						  getFormattedTypeName(fout, finfo->prorettype,
 											   zeroIsError));
 
 	appendPQExpBuffer(q, "\n    LANGUAGE %s", fmtId(lanname));
 
 	if (*protrftypes)
 	{
 		Oid		   *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
-		int			i;
 
 		appendPQExpBufferStr(q, " TRANSFORM ");
 		parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
-		for (i = 0; typeids[i]; i++)
+		for (int i = 0; typeids[i]; i++)
 		{
 			if (i != 0)
 				appendPQExpBufferStr(q, ", ");
 			appendPQExpBuffer(q, "FOR TYPE %s",
 							  getFormattedTypeName(fout, typeids[i], zeroAsNone));
 		}
 	}
 
 	if (prokind[0] == PROKIND_WINDOW)
 		appendPQExpBufferStr(q, " WINDOW");
 
 	if (provolatile[0] != PROVOLATILE_VOLATILE)
 	{
 		if (provolatile[0] == PROVOLATILE_IMMUTABLE)
 			appendPQExpBufferStr(q, " IMMUTABLE");
 		else if (provolatile[0] == PROVOLATILE_STABLE)
 			appendPQExpBufferStr(q, " STABLE");
 		else if (provolatile[0] != PROVOLATILE_VOLATILE)
 			pg_fatal("unrecognized provolatile value for function \"%s\"",
 					 finfo->dobj.name);
 	}
 
@@ -11834,45 +11832,45 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 	}
 	if (proretset[0] == 't' &&
 		strcmp(prorows, "0") != 0 && strcmp(prorows, "1000") != 0)
 		appendPQExpBuffer(q, " ROWS %s", prorows);
 
 	if (strcmp(prosupport, "-") != 0)
 	{
 		/* We rely on regprocout to provide quoting and qualification */
 		appendPQExpBuffer(q, " SUPPORT %s", prosupport);
 	}
 
 	if (proparallel[0] != PROPARALLEL_UNSAFE)
 	{
 		if (proparallel[0] == PROPARALLEL_SAFE)
 			appendPQExpBufferStr(q, " PARALLEL SAFE");
 		else if (proparallel[0] == PROPARALLEL_RESTRICTED)
 			appendPQExpBufferStr(q, " PARALLEL RESTRICTED");
 		else if (proparallel[0] != PROPARALLEL_UNSAFE)
 			pg_fatal("unrecognized proparallel value for function \"%s\"",
 					 finfo->dobj.name);
 	}
 
-	for (i = 0; i < nconfigitems; i++)
+	for (int i = 0; i < nconfigitems; i++)
 	{
 		/* we feel free to scribble on configitems[] here */
 		char	   *configitem = configitems[i];
 		char	   *pos;
 
 		pos = strchr(configitem, '=');
 		if (pos == NULL)
 			continue;
 		*pos++ = '\0';
 		appendPQExpBuffer(q, "\n    SET %s TO ", fmtId(configitem));
 
 		/*
 		 * Variables that are marked GUC_LIST_QUOTE were already fully quoted
 		 * by flatten_set_variable_args() before they were put into the
 		 * proconfig array.  However, because the quoting rules used there
 		 * aren't exactly like SQL's, we have to break the list value apart
 		 * and then quote the elements as string literals.  (The elements may
 		 * be double-quoted as-is, but we can't just feed them to the SQL
 		 * parser; it would do the wrong thing with elements that are
 		 * zero-length or longer than NAMEDATALEN.)
 		 *
 		 * Variables that are not so marked should just be emitted as simple
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index a97b3300cb8..b666c909084 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1043,45 +1043,44 @@ select_div_scale(numeric *var1, numeric *var2, int *rscale)
 	res_dscale = Max(res_dscale, NUMERIC_MIN_DISPLAY_SCALE);
 	res_dscale = Min(res_dscale, NUMERIC_MAX_DISPLAY_SCALE);
 
 	/* Select result scale */
 	*rscale = res_dscale + 4;
 
 	return res_dscale;
 }
 
 int
 PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 {
 	NumericDigit *res_digits;
 	int			res_ndigits;
 	int			res_sign;
 	int			res_weight;
 	numeric		dividend;
 	numeric		divisor[10];
 	int			ndigits_tmp;
 	int			weight_tmp;
 	int			rscale_tmp;
 	int			ri;
-	int			i;
 	long		guess;
 	long		first_have;
 	long		first_div;
 	int			first_nextdigit;
 	int			stat = 0;
 	int			rscale;
 	int			res_dscale = select_div_scale(var1, var2, &rscale);
 	int			err = -1;
 	NumericDigit *tmp_buf;
 
 	/*
 	 * First of all division by zero check
 	 */
 	ndigits_tmp = var2->ndigits + 1;
 	if (ndigits_tmp == 1)
 	{
 		errno = PGTYPES_NUM_DIVIDE_ZERO;
 		return -1;
 	}
 
 	/*
 	 * Determine the result sign, weight and number of digits to calculate
@@ -1090,45 +1089,45 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 		res_sign = NUMERIC_POS;
 	else
 		res_sign = NUMERIC_NEG;
 	res_weight = var1->weight - var2->weight + 1;
 	res_ndigits = rscale + res_weight;
 	if (res_ndigits <= 0)
 		res_ndigits = 1;
 
 	/*
 	 * Now result zero check
 	 */
 	if (var1->ndigits == 0)
 	{
 		zero_var(result);
 		result->rscale = rscale;
 		return 0;
 	}
 
 	/*
 	 * Initialize local variables
 	 */
 	init_var(&dividend);
-	for (i = 1; i < 10; i++)
+	for (int i = 1; i < 10; i++)
 		init_var(&divisor[i]);
 
 	/*
 	 * Make a copy of the divisor which has one leading zero digit
 	 */
 	divisor[1].ndigits = ndigits_tmp;
 	divisor[1].rscale = var2->ndigits;
 	divisor[1].sign = NUMERIC_POS;
 	divisor[1].buf = digitbuf_alloc(ndigits_tmp);
 	if (divisor[1].buf == NULL)
 		goto done;
 	divisor[1].digits = divisor[1].buf;
 	divisor[1].digits[0] = 0;
 	memcpy(&(divisor[1].digits[1]), var2->digits, ndigits_tmp - 1);
 
 	/*
 	 * Make a copy of the dividend
 	 */
 	dividend.ndigits = var1->ndigits;
 	dividend.weight = 0;
 	dividend.rscale = var1->ndigits;
 	dividend.sign = NUMERIC_POS;
@@ -1162,53 +1161,52 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 
 	first_have = 0;
 	first_nextdigit = 0;
 
 	weight_tmp = 1;
 	rscale_tmp = divisor[1].rscale;
 
 	for (ri = 0; ri <= res_ndigits; ri++)
 	{
 		first_have = first_have * 10;
 		if (first_nextdigit >= 0 && first_nextdigit < dividend.ndigits)
 			first_have += dividend.digits[first_nextdigit];
 		first_nextdigit++;
 
 		guess = (first_have * 10) / first_div + 1;
 		if (guess > 9)
 			guess = 9;
 
 		while (guess > 0)
 		{
 			if (divisor[guess].buf == NULL)
 			{
-				int			i;
 				long		sum = 0;
 
 				memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
 				divisor[guess].buf = digitbuf_alloc(divisor[guess].ndigits);
 				if (divisor[guess].buf == NULL)
 					goto done;
 				divisor[guess].digits = divisor[guess].buf;
-				for (i = divisor[1].ndigits - 1; i >= 0; i--)
+				for (int i = divisor[1].ndigits - 1; i >= 0; i--)
 				{
 					sum += divisor[1].digits[i] * guess;
 					divisor[guess].digits[i] = sum % 10;
 					sum /= 10;
 				}
 			}
 
 			divisor[guess].weight = weight_tmp;
 			divisor[guess].rscale = rscale_tmp;
 
 			stat = cmp_abs(&dividend, &divisor[guess]);
 			if (stat >= 0)
 				break;
 
 			guess--;
 		}
 
 		res_digits[ri + 1] = guess;
 		if (stat == 0)
 		{
 			ri++;
 			break;
@@ -1249,45 +1247,45 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 	while (result->ndigits > 0 && *(result->digits) == 0)
 	{
 		(result->digits)++;
 		(result->weight)--;
 		(result->ndigits)--;
 	}
 	while (result->ndigits > 0 && result->digits[result->ndigits - 1] == 0)
 		(result->ndigits)--;
 	if (result->ndigits == 0)
 		result->sign = NUMERIC_POS;
 
 	result->dscale = res_dscale;
 	err = 0;					/* if we've made it this far, return success */
 
 done:
 
 	/*
 	 * Tidy up
 	 */
 	if (dividend.buf != NULL)
 		digitbuf_free(dividend.buf);
 
-	for (i = 1; i < 10; i++)
+	for (int i = 1; i < 10; i++)
 	{
 		if (divisor[i].buf != NULL)
 			digitbuf_free(divisor[i].buf);
 	}
 
 	return err;
 }
 
 
 int
 PGTYPESnumeric_cmp(numeric *var1, numeric *var2)
 {
 	/* use cmp_abs function to calculate the result */
 
 	/* both are positive: normal comparison with cmp_abs */
 	if (var1->sign == NUMERIC_POS && var2->sign == NUMERIC_POS)
 		return cmp_abs(var1, var2);
 
 	/* both are negative: return the inverse of the normal comparison */
 	if (var1->sign == NUMERIC_NEG && var2->sign == NUMERIC_NEG)
 	{
 		/*
#18David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#17)
1 attachment(s)
Re: shadow variables - pg15 edition

On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzby@telsasoft.com> wrote:

> Actually, they didn't sneak in - what I sent are the patches which are ready to
> be reviewed, excluding the set of "this" and "tmp" and other renames which you
> disliked. In the branch (not the squished patch) the first ~15 patches were
> mostly for C99 for loops - I presented them this way deliberately, so you could
> review and comment on whatever you're able to bite off, or run with whatever
> parts you think are ready. I rewrote it now to be more bite sized by
> truncating off the 2nd half of the patches.

Thanks for the updated patch.

I've now pushed it after making some small adjustments.

It seems there was one leftover rename still there; I removed that.
The only other changes I made were just to make the patch more
consistent with what it was doing. There were a few cases where you
were doing:

  if (typlen == -1) /* varlena */
  {
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)

That wasn't really required to remove the warning, as you'd already
adjusted the scope of the shadowed variable so there was no longer a
collision. The reason I adjusted these is that sometimes you were
doing that, and sometimes you were not. I wanted to be consistent, so
I opted for not doing it, as it's not required for this effort. Maybe
one day those can be changed in some other unrelated effort to C99ify
our code.

The attached patch is just the portions I didn't commit.

Thanks for working on this.

David

Attachments:

v2_didnt_apply.patchtext/plain; charset=US-ASCII; name=v2_didnt_apply.patchDownload
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 524c1846b8..a581659fe2 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -620,14 +620,18 @@ brin_range_serialize(Ranges *range)
 	 */
 	if (typlen == -1)			/* varlena */
 	{
-		for (int i = 0; i < nvalues; i++)
+		int			i;
+
+		for (i = 0; i < nvalues; i++)
 		{
 			len += VARSIZE_ANY(range->values[i]);
 		}
 	}
 	else if (typlen == -2)		/* cstring */
 	{
-		for (int i = 0; i < nvalues; i++)
+		int			i;
+
+		for (i = 0; i < nvalues; i++)
 		{
 			/* don't forget to include the null terminator ;-) */
 			len += strlen(DatumGetCString(range->values[i])) + 1;
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index f9d40fa1a0..1545ff9f16 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1650,16 +1650,16 @@ interpret_ident_response(const char *ident_response,
 						return false;
 					else
 					{
-						int			j;	/* Index into *ident_user */
+						int			i;	/* Index into *ident_user */
 
 						cursor++;	/* Go over colon */
 						while (pg_isblank(*cursor))
 							cursor++;	/* skip blanks */
 						/* Rest of line is user name.  Copy it over. */
-						j = 0;
+						i = 0;
 						while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
-							ident_user[j++] = *cursor++;
-						ident_user[j] = '\0';
+							ident_user[i++] = *cursor++;
+						ident_user[i] = '\0';
 						return true;
 					}
 				}
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 91b9635dc0..6eeacb0d47 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1861,6 +1861,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 		{
 			/* AND/OR clause, with all subclauses being compatible */
 
+			int			i;
 			BoolExpr   *bool_clause = ((BoolExpr *) clause);
 			List	   *bool_clauses = bool_clause->args;
 
@@ -1879,7 +1880,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * current one. We need to consider if we're evaluating AND or OR
 			 * condition when merging the results.
 			 */
-			for (int i = 0; i < mcvlist->nitems; i++)
+			for (i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
 
 			pfree(bool_matches);
@@ -1888,6 +1889,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 		{
 			/* NOT clause, with all subclauses compatible */
 
+			int			i;
 			BoolExpr   *not_clause = ((BoolExpr *) clause);
 			List	   *not_args = not_clause->args;
 
@@ -1906,7 +1908,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
 			 * current one. We're handling a NOT clause, so invert the result
 			 * before merging it into the global bitmap.
 			 */
-			for (int i = 0; i < mcvlist->nitems; i++)
+			for (i = 0; i < mcvlist->nitems; i++)
 				matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
 
 			pfree(not_matches);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c0d09edf9d..ca4ad07004 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11775,10 +11775,11 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
 	if (*protrftypes)
 	{
 		Oid		   *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
+		int			i;
 
 		appendPQExpBufferStr(q, " TRANSFORM ");
 		parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
-		for (int i = 0; typeids[i]; i++)
+		for (i = 0; typeids[i]; i++)
 		{
 			if (i != 0)
 				appendPQExpBufferStr(q, ", ");
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index b666c90908..35e7b92da4 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1180,6 +1180,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 		{
 			if (divisor[guess].buf == NULL)
 			{
+				int			i;
 				long		sum = 0;
 
 				memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
@@ -1187,7 +1188,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
 				if (divisor[guess].buf == NULL)
 					goto done;
 				divisor[guess].digits = divisor[guess].buf;
-				for (int i = divisor[1].ndigits - 1; i >= 0; i--)
+				for (i = divisor[1].ndigits - 1; i >= 0; i--)
 				{
 					sum += divisor[1].digits[i] * guess;
 					divisor[guess].digits[i] = sum % 10;
#19Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#18)
2 attachment(s)
Re: shadow variables - pg15 edition

On Wed, Aug 24, 2022 at 12:37:29PM +1200, David Rowley wrote:

On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzby@telsasoft.com> wrote:

Actually, they didn't sneak in - what I sent are the patches which are ready to
be reviewed, excluding the set of "this" and "tmp" and other renames which you
disliked. In the branch (not the squished patch) the first ~15 patches were
mostly for C99 for loops - I presented them this way deliberately, so you could
review and comment on whatever you're able to bite off, or run with whatever
parts you think are ready. I rewrote it now to be more bite sized by
truncating off the 2nd half of the patches.

Thanks for the updated patch.

I've now pushed it after making some small adjustments.

Thanks for handling them.

Attached are half of the remainder of what I've written, ready for review.

I also put it here: https://github.com/justinpryzby/postgres/tree/avoid-shadow-vars

You may or may not find the associated commit messages to be useful.
Let me know if you'd like the individual patches included here, instead.

The first patch removes secondary, "inner" declarations, where that seems
reasonably safe and consistent with existing practice (and probably what the
original authors intended or would have written).

--
Justin

Attachments:

v3-remove-var-declarations.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..a090cada400 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3017,46 +3017,45 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
 	}
 	pgstat_report_wait_end();
 
 	if (save_errno)
 	{
 		/*
 		 * If we fail to make the file, delete it to release disk space
 		 */
 		unlink(tmppath);
 
 		close(fd);
 
 		errno = save_errno;
 
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not write to file \"%s\": %m", tmppath)));
 	}
 
 	pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
 	if (pg_fsync(fd) != 0)
 	{
-		int			save_errno = errno;
-
+		save_errno = errno;
 		close(fd);
 		errno = save_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not fsync file \"%s\": %m", tmppath)));
 	}
 	pgstat_report_wait_end();
 
 	if (close(fd) != 0)
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not close file \"%s\": %m", tmppath)));
 
 	/*
 	 * Now move the segment into place with its final name.  Cope with
 	 * possibility that someone else has created the file while we were
 	 * filling ours: if so, use ours to pre-create a future log segment.
 	 */
 	installed_segno = logsegno;
 
 	/*
 	 * XXX: What should we use as max_segno? We used to use XLOGfileslop when
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..dacc989d855 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -16777,45 +16777,44 @@ PreCommit_on_commit_actions(void)
 					oids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);
 				break;
 			case ONCOMMIT_DROP:
 				oids_to_drop = lappend_oid(oids_to_drop, oc->relid);
 				break;
 		}
 	}
 
 	/*
 	 * Truncate relations before dropping so that all dependencies between
 	 * relations are removed after they are worked on.  Doing it like this
 	 * might be a waste as it is possible that a relation being truncated will
 	 * be dropped anyway due to its parent being dropped, but this makes the
 	 * code more robust because of not having to re-check that the relation
 	 * exists at truncation time.
 	 */
 	if (oids_to_truncate != NIL)
 		heap_truncate(oids_to_truncate);
 
 	if (oids_to_drop != NIL)
 	{
 		ObjectAddresses *targetObjects = new_object_addresses();
-		ListCell   *l;
 
 		foreach(l, oids_to_drop)
 		{
 			ObjectAddress object;
 
 			object.classId = RelationRelationId;
 			object.objectId = lfirst_oid(l);
 			object.objectSubId = 0;
 
 			Assert(!object_address_present(&object, targetObjects));
 
 			add_exact_object_address(&object, targetObjects);
 		}
 
 		/*
 		 * Since this is an automatic drop, rather than one directly initiated
 		 * by the user, we pass the PERFORM_DELETION_INTERNAL flag.
 		 */
 		performMultipleDeletions(targetObjects, DROP_CASCADE,
 								 PERFORM_DELETION_INTERNAL | PERFORM_DELETION_QUIETLY);
 
 #ifdef USE_ASSERT_CHECKING
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -214,46 +214,44 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 		(skip_locked ? VACOPT_SKIP_LOCKED : 0) |
 		(analyze ? VACOPT_ANALYZE : 0) |
 		(freeze ? VACOPT_FREEZE : 0) |
 		(full ? VACOPT_FULL : 0) |
 		(disable_page_skipping ? VACOPT_DISABLE_PAGE_SKIPPING : 0) |
 		(process_toast ? VACOPT_PROCESS_TOAST : 0);
 
 	/* sanity checks on options */
 	Assert(params.options & (VACOPT_VACUUM | VACOPT_ANALYZE));
 	Assert((params.options & VACOPT_VACUUM) ||
 		   !(params.options & (VACOPT_FULL | VACOPT_FREEZE)));
 
 	if ((params.options & VACOPT_FULL) && params.nworkers > 0)
 		ereport(ERROR,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("VACUUM FULL cannot be performed in parallel")));
 
 	/*
 	 * Make sure VACOPT_ANALYZE is specified if any column lists are present.
 	 */
 	if (!(params.options & VACOPT_ANALYZE))
 	{
-		ListCell   *lc;
-
 		foreach(lc, vacstmt->rels)
 		{
 			VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
 
 			if (vrel->va_cols != NIL)
 				ereport(ERROR,
 						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 						 errmsg("ANALYZE option must be specified when a column list is provided")));
 		}
 	}
 
 	/*
 	 * All freeze ages are zero if the FREEZE option is given; otherwise pass
 	 * them as -1 which means to use the default values.
 	 */
 	if (params.options & VACOPT_FREEZE)
 	{
 		params.freeze_min_age = 0;
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
 	}
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -749,45 +749,44 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 			 */
 			if (map == NULL)
 			{
 				/*
 				 * It's safe to reuse these from the partition root, as we
 				 * only process one tuple at a time (therefore we won't
 				 * overwrite needed data in slots), and the results of
 				 * projections are independent of the underlying storage.
 				 * Projections and where clauses themselves don't store state
 				 * / are independent of the underlying storage.
 				 */
 				onconfl->oc_ProjSlot =
 					rootResultRelInfo->ri_onConflict->oc_ProjSlot;
 				onconfl->oc_ProjInfo =
 					rootResultRelInfo->ri_onConflict->oc_ProjInfo;
 				onconfl->oc_WhereClause =
 					rootResultRelInfo->ri_onConflict->oc_WhereClause;
 			}
 			else
 			{
 				List	   *onconflset;
 				List	   *onconflcols;
-				bool		found_whole_row;
 
 				/*
 				 * Translate expressions in onConflictSet to account for
 				 * different attribute numbers.  For that, map partition
 				 * varattnos twice: first to catch the EXCLUDED
 				 * pseudo-relation (INNER_VAR), and second to handle the main
 				 * target relation (firstVarno).
 				 */
 				onconflset = copyObject(node->onConflictSet);
 				if (part_attmap == NULL)
 					part_attmap =
 						build_attrmap_by_name(RelationGetDescr(partrel),
 											  RelationGetDescr(firstResultRel));
 				onconflset = (List *)
 					map_variable_attnos((Node *) onconflset,
 										INNER_VAR, 0,
 										part_attmap,
 										RelationGetForm(partrel)->reltype,
 										&found_whole_row);
 				/* We ignore the value of found_whole_row. */
 				onconflset = (List *)
 					map_variable_attnos((Node *) onconflset,
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..8ba27a98b42 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -342,45 +342,44 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
 		bpath = create_bitmap_heap_path(root, rel, bitmapqual,
 										rel->lateral_relids, 1.0, 0);
 		add_path(rel, (Path *) bpath);
 
 		/* create a partial bitmap heap path */
 		if (rel->consider_parallel && rel->lateral_relids == NULL)
 			create_partial_bitmap_paths(root, rel, bitmapqual);
 	}
 
 	/*
 	 * Likewise, if we found anything usable, generate BitmapHeapPaths for the
 	 * most promising combinations of join bitmap index paths.  Our strategy
 	 * is to generate one such path for each distinct parameterization seen
 	 * among the available bitmap index paths.  This may look pretty
 	 * expensive, but usually there won't be very many distinct
 	 * parameterizations.  (This logic is quite similar to that in
 	 * consider_index_join_clauses, but we're working with whole paths not
 	 * individual clauses.)
 	 */
 	if (bitjoinpaths != NIL)
 	{
 		List	   *all_path_outers;
-		ListCell   *lc;
 
 		/* Identify each distinct parameterization seen in bitjoinpaths */
 		all_path_outers = NIL;
 		foreach(lc, bitjoinpaths)
 		{
 			Path	   *path = (Path *) lfirst(lc);
 			Relids		required_outer = PATH_REQ_OUTER(path);
 
 			if (!bms_equal_any(required_outer, all_path_outers))
 				all_path_outers = lappend(all_path_outers, required_outer);
 		}
 
 		/* Now, for each distinct parameterization set ... */
 		foreach(lc, all_path_outers)
 		{
 			Relids		max_outers = (Relids) lfirst(lc);
 			List	   *this_path_set;
 			Path	   *bitmapqual;
 			Relids		required_outer;
 			double		loop_count;
 			BitmapHeapPath *bpath;
 			ListCell   *lcp;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2383,45 +2383,45 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 				/* We must run finalize_plan on the subquery */
 				rel = find_base_rel(root, sscan->scan.scanrelid);
 				subquery_params = rel->subroot->outer_params;
 				if (gather_param >= 0)
 					subquery_params = bms_add_member(bms_copy(subquery_params),
 													 gather_param);
 				finalize_plan(rel->subroot, sscan->subplan, gather_param,
 							  subquery_params, NULL);
 
 				/* Now we can add its extParams to the parent's params */
 				context.paramids = bms_add_members(context.paramids,
 												   sscan->subplan->extParam);
 				/* We need scan_params too, though */
 				context.paramids = bms_add_members(context.paramids,
 												   scan_params);
 			}
 			break;
 
 		case T_FunctionScan:
 			{
 				FunctionScan *fscan = (FunctionScan *) plan;
-				ListCell   *lc;
+				ListCell   *lc; //
 
 				/*
 				 * Call finalize_primnode independently on each function
 				 * expression, so that we can record which params are
 				 * referenced in each, in order to decide which need
 				 * re-evaluating during rescan.
 				 */
 				foreach(lc, fscan->functions)
 				{
 					RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
 					finalize_primnode_context funccontext;
 
 					funccontext = context;
 					funccontext.paramids = NULL;
 
 					finalize_primnode(rtfunc->funcexpr, &funccontext);
 
 					/* remember results for execution */
 					rtfunc->funcparams = funccontext.paramids;
 
 					/* add the function's params to the overall set */
 					context.paramids = bms_add_members(context.paramids,
@@ -2491,158 +2491,148 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 		case T_NamedTuplestoreScan:
 			context.paramids = bms_add_members(context.paramids, scan_params);
 			break;
 
 		case T_ForeignScan:
 			{
 				ForeignScan *fscan = (ForeignScan *) plan;
 
 				finalize_primnode((Node *) fscan->fdw_exprs,
 								  &context);
 				finalize_primnode((Node *) fscan->fdw_recheck_quals,
 								  &context);
 
 				/* We assume fdw_scan_tlist cannot contain Params */
 				context.paramids = bms_add_members(context.paramids,
 												   scan_params);
 			}
 			break;
 
 		case T_CustomScan:
 			{
 				CustomScan *cscan = (CustomScan *) plan;
-				ListCell   *lc;
+				ListCell   *lc; //
 
 				finalize_primnode((Node *) cscan->custom_exprs,
 								  &context);
 				/* We assume custom_scan_tlist cannot contain Params */
 				context.paramids =
 					bms_add_members(context.paramids, scan_params);
 
 				/* child nodes if any */
 				foreach(lc, cscan->custom_plans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(lc),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_ModifyTable:
 			{
 				ModifyTable *mtplan = (ModifyTable *) plan;
 
 				/* Force descendant scan nodes to reference epqParam */
 				locally_added_param = mtplan->epqParam;
 				valid_params = bms_add_member(bms_copy(valid_params),
 											  locally_added_param);
 				scan_params = bms_add_member(bms_copy(scan_params),
 											 locally_added_param);
 				finalize_primnode((Node *) mtplan->returningLists,
 								  &context);
 				finalize_primnode((Node *) mtplan->onConflictSet,
 								  &context);
 				finalize_primnode((Node *) mtplan->onConflictWhere,
 								  &context);
 				/* exclRelTlist contains only Vars, doesn't need examination */
 			}
 			break;
 
 		case T_Append:
 			{
-				ListCell   *l;
-
 				foreach(l, ((Append *) plan)->appendplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_MergeAppend:
 			{
-				ListCell   *l;
-
 				foreach(l, ((MergeAppend *) plan)->mergeplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_BitmapAnd:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapAnd *) plan)->bitmapplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_BitmapOr:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapOr *) plan)->bitmapplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_NestLoop:
 			{
-				ListCell   *l;
-
 				finalize_primnode((Node *) ((Join *) plan)->joinqual,
 								  &context);
 				/* collect set of params that will be passed to right child */
 				foreach(l, ((NestLoop *) plan)->nestParams)
 				{
 					NestLoopParam *nlp = (NestLoopParam *) lfirst(l);
 
 					nestloop_params = bms_add_member(nestloop_params,
 													 nlp->paramno);
 				}
 			}
 			break;
 
 		case T_MergeJoin:
 			finalize_primnode((Node *) ((Join *) plan)->joinqual,
 							  &context);
 			finalize_primnode((Node *) ((MergeJoin *) plan)->mergeclauses,
 							  &context);
 			break;
 
 		case T_HashJoin:
 			finalize_primnode((Node *) ((Join *) plan)->joinqual,
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..71052c841d7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -634,45 +634,44 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 	 * For UNION ALL, we just need the Append path.  For UNION, need to add
 	 * node(s) to remove duplicates.
 	 */
 	if (!op->all)
 		path = make_union_unique(op, path, tlist, root);
 
 	add_path(result_rel, path);
 
 	/*
 	 * Estimate number of groups.  For now we just assume the output is unique
 	 * --- this is certainly true for the UNION case, and we want worst-case
 	 * estimates anyway.
 	 */
 	result_rel->rows = path->rows;
 
 	/*
 	 * Now consider doing the same thing using the partial paths plus Append
 	 * plus Gather.
 	 */
 	if (partial_paths_valid)
 	{
 		Path	   *ppath;
-		ListCell   *lc;
 		int			parallel_workers = 0;
 
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
 			Path	   *path = lfirst(lc);
 
 			parallel_workers = Max(parallel_workers, path->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
 		/*
 		 * If the use of parallel append is permitted, always request at least
 		 * log2(# of children) paths.  We assume it can be useful to have
 		 * extra workers in this case because they will be spread out across
 		 * the children.  The precise formula is just a guess; see
 		 * add_paths_to_append_rel.
 		 */
 		if (enable_parallel_append)
 		{
 			parallel_workers = Max(parallel_workers,
 								   pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..bf698c1fc3f 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1246,45 +1246,44 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
 		 * first argument, and pseudoconstant is the second one.
 		 */
 		if (!is_pseudo_constant_clause(lsecond(expr->args)))
 			return false;
 
 		clause_expr = linitial(expr->args);
 
 		/*
 		 * If it's not an "=" operator, just ignore the clause, as it's not
 		 * compatible with functional dependencies. The operator is identified
 		 * simply by looking at which function it uses to estimate
 		 * selectivity. That's a bit strange, but it's what other similar
 		 * places do.
 		 */
 		if (get_oprrest(expr->opno) != F_EQSEL)
 			return false;
 
 		/* OK to proceed with checking "var" */
 	}
 	else if (is_orclause(clause))
 	{
 		BoolExpr   *bool_expr = (BoolExpr *) clause;
-		ListCell   *lc;
 
 		/* start with no expression (we'll use the first match) */
 		*expr = NULL;
 
 		foreach(lc, bool_expr->args)
 		{
 			Node	   *or_expr = NULL;
 
 			/*
 			 * Had we found incompatible expression in the arguments, treat
 			 * the whole expression as incompatible.
 			 */
 			if (!dependency_is_compatible_expression((Node *) lfirst(lc), relid,
 													 statlist, &or_expr))
 				return false;
 
 			if (*expr == NULL)
 				*expr = or_expr;
 
 			/* ensure all the expressions are the same */
 			if (!equal(or_expr, *expr))
 				return false;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..3f5683f70b5 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1284,45 +1284,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 	idxrelrec = (Form_pg_class) GETSTRUCT(ht_idxrel);
 
 	/*
 	 * Fetch the pg_am tuple of the index' access method
 	 */
 	ht_am = SearchSysCache1(AMOID, ObjectIdGetDatum(idxrelrec->relam));
 	if (!HeapTupleIsValid(ht_am))
 		elog(ERROR, "cache lookup failed for access method %u",
 			 idxrelrec->relam);
 	amrec = (Form_pg_am) GETSTRUCT(ht_am);
 
 	/* Fetch the index AM's API struct */
 	amroutine = GetIndexAmRoutine(amrec->amhandler);
 
 	/*
 	 * Get the index expressions, if any.  (NOTE: we do not use the relcache
 	 * versions of the expressions and predicate, because we want to display
 	 * non-const-folded expressions.)
 	 */
 	if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
 									 Anum_pg_index_indexprs, &isnull);
 		Assert(!isnull);
 		exprsString = TextDatumGetCString(exprsDatum);
 		indexprs = (List *) stringToNode(exprsString);
 		pfree(exprsString);
 	}
 	else
 		indexprs = NIL;
 
 	indexpr_item = list_head(indexprs);
 
 	context = deparse_context_for(get_relation_name(indrelid), indrelid);
 
 	/*
 	 * Start the index definition.  Note that the index's name should never be
 	 * schema-qualified, but the indexed rel's name may be.
 	 */
 	initStringInfo(&buf);
 
@@ -1481,45 +1480,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 		 */
 		if (showTblSpc)
 		{
 			Oid			tblspc;
 
 			tblspc = get_rel_tablespace(indexrelid);
 			if (OidIsValid(tblspc))
 			{
 				if (isConstraint)
 					appendStringInfoString(&buf, " USING INDEX");
 				appendStringInfo(&buf, " TABLESPACE %s",
 								 quote_identifier(get_tablespace_name(tblspc)));
 			}
 		}
 
 		/*
 		 * If it's a partial index, decompile and append the predicate
 		 */
 		if (!heap_attisnull(ht_idx, Anum_pg_index_indpred, NULL))
 		{
 			Node	   *node;
 			Datum		predDatum;
-			bool		isnull;
 			char	   *predString;
 
 			/* Convert text string to node tree */
 			predDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
 										Anum_pg_index_indpred, &isnull);
 			Assert(!isnull);
 			predString = TextDatumGetCString(predDatum);
 			node = (Node *) stringToNode(predString);
 			pfree(predString);
 
 			/* Deparse */
 			str = deparse_expression_pretty(node, context, false, false,
 											prettyFlags, 0);
 			if (isConstraint)
 				appendStringInfo(&buf, " WHERE (%s)", str);
 			else
 				appendStringInfo(&buf, " WHERE %s", str);
 		}
 	}
 
 	/* Clean up */
 	ReleaseSysCache(ht_idx);
@@ -1629,45 +1627,44 @@ pg_get_statisticsobj_worker(Oid statextid, bool columns_only, bool missing_ok)
 	statexttup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(statextid));
 
 	if (!HeapTupleIsValid(statexttup))
 	{
 		if (missing_ok)
 			return NULL;
 		elog(ERROR, "cache lookup failed for statistics object %u", statextid);
 	}
 
 	/* has the statistics expressions? */
 	has_exprs = !heap_attisnull(statexttup, Anum_pg_statistic_ext_stxexprs, NULL);
 
 	statextrec = (Form_pg_statistic_ext) GETSTRUCT(statexttup);
 
 	/*
 	 * Get the statistics expressions, if any.  (NOTE: we do not use the
 	 * relcache versions of the expressions, because we want to display
 	 * non-const-folded expressions.)
 	 */
 	if (has_exprs)
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(STATEXTOID, statexttup,
 									 Anum_pg_statistic_ext_stxexprs, &isnull);
 		Assert(!isnull);
 		exprsString = TextDatumGetCString(exprsDatum);
 		exprs = (List *) stringToNode(exprsString);
 		pfree(exprsString);
 	}
 	else
 		exprs = NIL;
 
 	/* count the number of columns (attributes and expressions) */
 	ncolumns = statextrec->stxkeys.dim1 + list_length(exprs);
 
 	initStringInfo(&buf);
 
 	if (!columns_only)
 	{
 		nsp = get_namespace_name_or_temp(statextrec->stxnamespace);
 		appendStringInfo(&buf, "CREATE STATISTICS %s",
 						 quote_qualified_identifier(nsp,
@@ -1925,45 +1922,44 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
 	Assert(form->partrelid == relid);
 
 	/* Must get partclass and partcollation the hard way */
 	datum = SysCacheGetAttr(PARTRELID, tuple,
 							Anum_pg_partitioned_table_partclass, &isnull);
 	Assert(!isnull);
 	partclass = (oidvector *) DatumGetPointer(datum);
 
 	datum = SysCacheGetAttr(PARTRELID, tuple,
 							Anum_pg_partitioned_table_partcollation, &isnull);
 	Assert(!isnull);
 	partcollation = (oidvector *) DatumGetPointer(datum);
 
 
 	/*
 	 * Get the expressions, if any.  (NOTE: we do not use the relcache
 	 * versions of the expressions, because we want to display
 	 * non-const-folded expressions.)
 	 */
 	if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
 									 Anum_pg_partitioned_table_partexprs, &isnull);
 		Assert(!isnull);
 		exprsString = TextDatumGetCString(exprsDatum);
 		partexprs = (List *) stringToNode(exprsString);
 
 		if (!IsA(partexprs, List))
 			elog(ERROR, "unexpected node type found in partexprs: %d",
 				 (int) nodeTag(partexprs));
 
 		pfree(exprsString);
 	}
 	else
 		partexprs = NIL;
 
 	partexpr_item = list_head(partexprs);
 	context = deparse_context_for(get_relation_name(relid), relid);
 
 	initStringInfo(&buf);
 
v3-renames.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1799,57 +1799,57 @@ heap_drop_with_catalog(Oid relid)
 	rel = relation_open(relid, AccessExclusiveLock);
 
 	/*
 	 * There can no longer be anyone *else* touching the relation, but we
 	 * might still have open queries or cursors, or pending trigger events, in
 	 * our own session.
 	 */
 	CheckTableNotInUse(rel, "DROP TABLE");
 
 	/*
 	 * This effectively deletes all rows in the table, and may be done in a
 	 * serializable transaction.  In that case we must record a rw-conflict in
 	 * to this transaction from each transaction holding a predicate lock on
 	 * the table.
 	 */
 	CheckTableForSerializableConflictIn(rel);
 
 	/*
 	 * Delete pg_foreign_table tuple first.
 	 */
 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
 	{
-		Relation	rel;
-		HeapTuple	tuple;
+		Relation	pg_foreign_table;
+		HeapTuple	foreigntuple;
 
-		rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+		pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-		tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-		if (!HeapTupleIsValid(tuple))
+		foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(foreigntuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-		CatalogTupleDelete(rel, &tuple->t_self);
+		CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
 
-		ReleaseSysCache(tuple);
-		table_close(rel, RowExclusiveLock);
+		ReleaseSysCache(foreigntuple);
+		table_close(pg_foreign_table, RowExclusiveLock);
 	}
 
 	/*
 	 * If a partitioned table, delete the pg_partitioned_table tuple.
 	 */
 	if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 		RemovePartitionKeyByRelId(relid);
 
 	/*
 	 * If the relation being dropped is the default partition itself,
 	 * invalidate its entry in pg_partitioned_table.
 	 */
 	if (relid == defaultPartOid)
 		update_default_partition_oid(parentOid, InvalidOid);
 
 	/*
 	 * Schedule unlinking of the relation's physical files at commit.
 	 */
 	if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
 		RelationDropStorage(rel);
 
 	/* ensure that stats are dropped if transaction commits */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -87,70 +87,70 @@ parse_publication_options(ParseState *pstate,
 {
 	ListCell   *lc;
 
 	*publish_given = false;
 	*publish_via_partition_root_given = false;
 
 	/* defaults */
 	pubactions->pubinsert = true;
 	pubactions->pubupdate = true;
 	pubactions->pubdelete = true;
 	pubactions->pubtruncate = true;
 	*publish_via_partition_root = false;
 
 	/* Parse options */
 	foreach(lc, options)
 	{
 		DefElem    *defel = (DefElem *) lfirst(lc);
 
 		if (strcmp(defel->defname, "publish") == 0)
 		{
 			char	   *publish;
 			List	   *publish_list;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			if (*publish_given)
 				errorConflictingDefElem(defel, pstate);
 
 			/*
 			 * If publish option was given only the explicitly listed actions
 			 * should be published.
 			 */
 			pubactions->pubinsert = false;
 			pubactions->pubupdate = false;
 			pubactions->pubdelete = false;
 			pubactions->pubtruncate = false;
 
 			*publish_given = true;
 			publish = defGetString(defel);
 
 			if (!SplitIdentifierString(publish, ',', &publish_list))
 				ereport(ERROR,
 						(errcode(ERRCODE_SYNTAX_ERROR),
 						 errmsg("invalid list syntax for \"publish\" option")));
 
 			/* Process the option list. */
-			foreach(lc, publish_list)
+			foreach(lc2, publish_list)
 			{
-				char	   *publish_opt = (char *) lfirst(lc);
+				char	   *publish_opt = (char *) lfirst(lc2);
 
 				if (strcmp(publish_opt, "insert") == 0)
 					pubactions->pubinsert = true;
 				else if (strcmp(publish_opt, "update") == 0)
 					pubactions->pubupdate = true;
 				else if (strcmp(publish_opt, "delete") == 0)
 					pubactions->pubdelete = true;
 				else if (strcmp(publish_opt, "truncate") == 0)
 					pubactions->pubtruncate = true;
 				else
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
 							 errmsg("unrecognized \"publish\" value: \"%s\"", publish_opt)));
 			}
 		}
 		else if (strcmp(defel->defname, "publish_via_partition_root") == 0)
 		{
 			if (*publish_via_partition_root_given)
 				errorConflictingDefElem(defel, pstate);
 			*publish_via_partition_root_given = true;
 			*publish_via_partition_root = defGetBoolean(defel);
 		}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index dacc989d855..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10204,45 +10204,45 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
 	foreach(cell, clone)
 	{
 		Oid			parentConstrOid = lfirst_oid(cell);
 		Form_pg_constraint constrForm;
 		Relation	pkrel;
 		HeapTuple	tuple;
 		int			numfks;
 		AttrNumber	conkey[INDEX_MAX_KEYS];
 		AttrNumber	mapped_conkey[INDEX_MAX_KEYS];
 		AttrNumber	confkey[INDEX_MAX_KEYS];
 		Oid			conpfeqop[INDEX_MAX_KEYS];
 		Oid			conppeqop[INDEX_MAX_KEYS];
 		Oid			conffeqop[INDEX_MAX_KEYS];
 		int			numfkdelsetcols;
 		AttrNumber	confdelsetcols[INDEX_MAX_KEYS];
 		Constraint *fkconstraint;
 		bool		attached;
 		Oid			indexOid;
 		Oid			constrOid;
 		ObjectAddress address,
 					referenced;
-		ListCell   *cell;
+		ListCell   *lc;
 		Oid			insertTriggerOid,
 					updateTriggerOid;
 
 		tuple = SearchSysCache1(CONSTROID, parentConstrOid);
 		if (!HeapTupleIsValid(tuple))
 			elog(ERROR, "cache lookup failed for constraint %u",
 				 parentConstrOid);
 		constrForm = (Form_pg_constraint) GETSTRUCT(tuple);
 
 		/* Don't clone constraints whose parents are being cloned */
 		if (list_member_oid(clone, constrForm->conparentid))
 		{
 			ReleaseSysCache(tuple);
 			continue;
 		}
 
 		/*
 		 * Need to prevent concurrent deletions.  If pkrel is a partitioned
 		 * relation, that means to lock all partitions.
 		 */
 		pkrel = table_open(constrForm->confrelid, ShareRowExclusiveLock);
 		if (pkrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
@@ -10257,47 +10257,47 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
 		/*
 		 * Get the "check" triggers belonging to the constraint to pass as
 		 * parent OIDs for similar triggers that will be created on the
 		 * partition in addFkRecurseReferencing().  They are also passed to
 		 * tryAttachPartitionForeignKey() below to simply assign as parents to
 		 * the partition's existing "check" triggers, that is, if the
 		 * corresponding constraints is deemed attachable to the parent
 		 * constraint.
 		 */
 		GetForeignKeyCheckTriggers(trigrel, constrForm->oid,
 								   constrForm->confrelid, constrForm->conrelid,
 								   &insertTriggerOid, &updateTriggerOid);
 
 		/*
 		 * Before creating a new constraint, see whether any existing FKs are
 		 * fit for the purpose.  If one is, attach the parent constraint to
 		 * it, and don't clone anything.  This way we avoid the expensive
 		 * verification step and don't end up with a duplicate FK, and we
 		 * don't need to recurse to partitions for this constraint.
 		 */
 		attached = false;
-		foreach(cell, partFKs)
+		foreach(lc, partFKs)
 		{
-			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
 
 			if (tryAttachPartitionForeignKey(fk,
 											 RelationGetRelid(partRel),
 											 parentConstrOid,
 											 numfks,
 											 mapped_conkey,
 											 confkey,
 											 conpfeqop,
 											 insertTriggerOid,
 											 updateTriggerOid,
 											 trigrel))
 			{
 				attached = true;
 				table_close(pkrel, NoLock);
 				break;
 			}
 		}
 		if (attached)
 		{
 			ReleaseSysCache(tuple);
 			continue;
 		}
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..f1801a160ed 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1130,77 +1130,77 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
 	}
 
 	/*
 	 * If it has a WHEN clause, add dependencies on objects mentioned in the
 	 * expression (eg, functions, as well as any columns used).
 	 */
 	if (whenRtable != NIL)
 		recordDependencyOnExpr(&myself, whenClause, whenRtable,
 							   DEPENDENCY_NORMAL);
 
 	/* Post creation hook for new trigger */
 	InvokeObjectPostCreateHookArg(TriggerRelationId, trigoid, 0,
 								  isInternal);
 
 	/*
 	 * Lastly, create the trigger on child relations, if needed.
 	 */
 	if (partition_recurse)
 	{
 		PartitionDesc partdesc = RelationGetPartitionDesc(rel, true);
 		List	   *idxs = NIL;
 		List	   *childTbls = NIL;
-		ListCell   *l;
 		int			i;
 		MemoryContext oldcxt,
 					perChildCxt;
 
 		perChildCxt = AllocSetContextCreate(CurrentMemoryContext,
 											"part trig clone",
 											ALLOCSET_SMALL_SIZES);
 
 		/*
 		 * When a trigger is being created associated with an index, we'll
 		 * need to associate the trigger in each child partition with the
 		 * corresponding index on it.
 		 */
 		if (OidIsValid(indexOid))
 		{
 			ListCell   *l;
 			List	   *idxs = NIL;
 
 			idxs = find_inheritance_children(indexOid, ShareRowExclusiveLock);
 			foreach(l, idxs)
 				childTbls = lappend_oid(childTbls,
 										IndexGetRelation(lfirst_oid(l),
 														 false));
 		}
 
 		oldcxt = MemoryContextSwitchTo(perChildCxt);
 
 		/* Iterate to create the trigger on each existing partition */
 		for (i = 0; i < partdesc->nparts; i++)
 		{
 			Oid			indexOnChild = InvalidOid;
-			ListCell   *l2;
+			ListCell   *l,
+				   *l2;
 			CreateTrigStmt *childStmt;
 			Relation	childTbl;
 			Node	   *qual;
 
 			childTbl = table_open(partdesc->oids[i], ShareRowExclusiveLock);
 
 			/* Find which of the child indexes is the one on this partition */
 			if (OidIsValid(indexOid))
 			{
 				forboth(l, idxs, l2, childTbls)
 				{
 					if (lfirst_oid(l2) == partdesc->oids[i])
 					{
 						indexOnChild = lfirst_oid(l);
 						break;
 					}
 				}
 				if (!OidIsValid(indexOnChild))
 					elog(ERROR, "failed to find index matching index \"%s\" in partition \"%s\"",
 						 get_rel_name(indexOid),
 						 get_rel_name(partdesc->oids[i]));
 			}
@@ -1707,47 +1707,47 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 								NULL, 1, &key);
 	while (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
 	{
 		Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tuple);
 		Relation	partitionRel;
 
 		if (tgform->tgparentid != parentTriggerOid)
 			continue;			/* not our trigger */
 
 		partitionRel = table_open(partitionId, NoLock);
 
 		/* Rename the trigger on this partition */
 		renametrig_internal(tgrel, partitionRel, tuple, newname, expected_name);
 
 		/* And if this relation is partitioned, recurse to its partitions */
 		if (partitionRel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 		{
 			PartitionDesc partdesc = RelationGetPartitionDesc(partitionRel,
 															  true);
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
 		table_close(partitionRel, NoLock);
 
 		/* There should be at most one matching tuple */
 		break;
 	}
 	systable_endscan(tgscan);
 }
 
 /*
  * EnableDisableTrigger()
  *
  *	Called by ALTER TABLE ENABLE/DISABLE [ REPLICA | ALWAYS ] TRIGGER
  *	to change 'tgenabled' field for the specified trigger(s)
  *
  * rel: relation to process (caller must hold suitable lock on it)
  * tgname: trigger to process, or NULL to scan all triggers
  * fires_when: new value for tgenabled field. In addition to generic
  *			   enablement/disablement, this also defines when the trigger
  *			   should be fired in session replication roles.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 933c3049016..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -3168,45 +3168,44 @@ hashagg_reset_spill_state(AggState *aggstate)
 AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
 	AggStatePerAgg peraggs;
 	AggStatePerTrans pertransstates;
 	AggStatePerGroup *pergroups;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	TupleDesc	scanDesc;
 	int			max_aggno;
 	int			max_transno;
 	int			numaggrefs;
 	int			numaggs;
 	int			numtrans;
 	int			phase;
 	int			phaseidx;
 	ListCell   *l;
 	Bitmapset  *all_grouped_cols = NULL;
 	int			numGroupingSets = 1;
 	int			numPhases;
 	int			numHashes;
-	int			i = 0;
 	int			j = 0;
 	bool		use_hashing = (node->aggstrategy == AGG_HASHED ||
 							   node->aggstrategy == AGG_MIXED);
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
 
 	/*
 	 * create state structure
 	 */
 	aggstate = makeNode(AggState);
 	aggstate->ss.ps.plan = (Plan *) node;
 	aggstate->ss.ps.state = estate;
 	aggstate->ss.ps.ExecProcNode = ExecAgg;
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
 	aggstate->numtrans = 0;
 	aggstate->aggstrategy = node->aggstrategy;
 	aggstate->aggsplit = node->aggsplit;
 	aggstate->maxsets = 0;
 	aggstate->projected_set = -1;
@@ -3259,45 +3258,45 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->numphases = numPhases;
 
 	aggstate->aggcontexts = (ExprContext **)
 		palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
 	 * Create expression contexts.  We need three or more, one for
 	 * per-input-tuple processing, one for per-output-tuple processing, one
 	 * for all the hashtables, and one for each grouping set.  The per-tuple
 	 * memory context of the per-grouping-set ExprContexts (aggcontexts)
 	 * replaces the standalone memory context formerly used to hold transition
 	 * values.  We cheat a little by using ExecAssignExprContext() to build
 	 * all of them.
 	 *
 	 * NOTE: the details of what is stored in aggcontexts and what is stored
 	 * in the regular per-query memory context are driven by a simple
 	 * decision: we want to reset the aggcontext at group boundaries (if not
 	 * hashing) and in ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
 
-	for (i = 0; i < numGroupingSets; ++i)
+	for (int i = 0; i < numGroupingSets; ++i)
 	{
 		ExecAssignExprContext(estate, &aggstate->ss.ps);
 		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
 	}
 
 	if (use_hashing)
 		aggstate->hashcontext = CreateWorkExprContext(estate);
 
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
 	 */
 	if (node->aggstrategy == AGG_HASHED)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
 	/*
@@ -3399,75 +3398,76 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		Agg		   *aggnode;
 		Sort	   *sortnode;
 
 		if (phaseidx > 0)
 		{
 			aggnode = list_nth_node(Agg, node->chain, phaseidx - 1);
 			sortnode = castNode(Sort, outerPlan(aggnode));
 		}
 		else
 		{
 			aggnode = node;
 			sortnode = NULL;
 		}
 
 		Assert(phase <= 1 || sortnode);
 
 		if (aggnode->aggstrategy == AGG_HASHED
 			|| aggnode->aggstrategy == AGG_MIXED)
 		{
 			AggStatePerPhase phasedata = &aggstate->phases[0];
 			AggStatePerHash perhash;
 			Bitmapset  *cols = NULL;
+			int			setno = phasedata->numsets++;
 
 			Assert(phase == 0);
-			i = phasedata->numsets++;
-			perhash = &aggstate->perhash[i];
+			perhash = &aggstate->perhash[setno];
 
 			/* phase 0 always points to the "real" Agg in the hash case */
 			phasedata->aggnode = node;
 			phasedata->aggstrategy = node->aggstrategy;
 
 			/* but the actual Agg node representing this hash is saved here */
 			perhash->aggnode = aggnode;
 
-			phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+			phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
 
 			for (j = 0; j < aggnode->numCols; ++j)
 				cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
-			phasedata->grouped_cols[i] = cols;
+			phasedata->grouped_cols[setno] = cols;
 
 			all_grouped_cols = bms_add_members(all_grouped_cols, cols);
 			continue;
 		}
 		else
 		{
 			AggStatePerPhase phasedata = &aggstate->phases[++phase];
 			int			num_sets;
 
 			phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
 
 			if (num_sets)
 			{
+				int			i;
 				phasedata->gset_lengths = palloc(num_sets * sizeof(int));
 				phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
 
 				i = 0;
 				foreach(l, aggnode->groupingSets)
 				{
 					int			current_length = list_length(lfirst(l));
 					Bitmapset  *cols = NULL;
 
 					/* planner forces this to be correct */
 					for (j = 0; j < current_length; ++j)
 						cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
 					phasedata->grouped_cols[i] = cols;
 					phasedata->gset_lengths[i] = current_length;
 
 					++i;
 				}
 
 				all_grouped_cols = bms_add_members(all_grouped_cols,
 												   phasedata->grouped_cols[0]);
 			}
@@ -3515,71 +3515,73 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 				/* and for all grouped columns, unless already computed */
 				if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL)
 				{
 					phasedata->eqfunctions[aggnode->numCols - 1] =
 						execTuplesMatchPrepare(scanDesc,
 											   aggnode->numCols,
 											   aggnode->grpColIdx,
 											   aggnode->grpOperators,
 											   aggnode->grpCollations,
 											   (PlanState *) aggstate);
 				}
 			}
 
 			phasedata->aggnode = aggnode;
 			phasedata->aggstrategy = aggnode->aggstrategy;
 			phasedata->sortnode = sortnode;
 		}
 	}
 
 	/*
 	 * Convert all_grouped_cols to a descending-order list.
 	 */
-	i = -1;
-	while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
-		aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	{
+		int			i = -1;
+		while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+			aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	}
 
 	/*
 	 * Set up aggregate-result storage in the output expr context, and also
 	 * allocate my private per-agg working storage
 	 */
 	econtext = aggstate->ss.ps.ps_ExprContext;
 	econtext->ecxt_aggvalues = (Datum *) palloc0(sizeof(Datum) * numaggs);
 	econtext->ecxt_aggnulls = (bool *) palloc0(sizeof(bool) * numaggs);
 
 	peraggs = (AggStatePerAgg) palloc0(sizeof(AggStatePerAggData) * numaggs);
 	pertransstates = (AggStatePerTrans) palloc0(sizeof(AggStatePerTransData) * numtrans);
 
 	aggstate->peragg = peraggs;
 	aggstate->pertrans = pertransstates;
 
 
 	aggstate->all_pergroups =
 		(AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
 									 * (numGroupingSets + numHashes));
 	pergroups = aggstate->all_pergroups;
 
 	if (node->aggstrategy != AGG_HASHED)
 	{
-		for (i = 0; i < numGroupingSets; i++)
+		for (int i = 0; i < numGroupingSets; i++)
 		{
 			pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
 													  * numaggs);
 		}
 
 		aggstate->pergroups = pergroups;
 		pergroups += numGroupingSets;
 	}
 
 	/*
 	 * Hashing can only appear in the initial phase.
 	 */
 	if (use_hashing)
 	{
 		Plan	   *outerplan = outerPlan(node);
 		uint64		totalGroups = 0;
 		int			i;
 
 		aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
 													   "HashAgg meta context",
 													   ALLOCSET_DEFAULT_SIZES);
 		aggstate->hash_spill_rslot = ExecInitExtraTupleSlot(estate, scanDesc,
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1631,54 +1631,54 @@ interpret_ident_response(const char *ident_response,
 			while (pg_isblank(*cursor))
 				cursor++;		/* skip blanks */
 			if (strcmp(response_type, "USERID") != 0)
 				return false;
 			else
 			{
 				/*
 				 * It's a USERID response.  Good.  "cursor" should be pointing
 				 * to the colon that precedes the operating system type.
 				 */
 				if (*cursor != ':')
 					return false;
 				else
 				{
 					cursor++;	/* Go over colon */
 					/* Skip over operating system field. */
 					while (*cursor != ':' && *cursor != '\r')
 						cursor++;
 					if (*cursor != ':')
 						return false;
 					else
 					{
-						int			i;	/* Index into *ident_user */
+						int			j;	/* Index into *ident_user */
 
 						cursor++;	/* Go over colon */
 						while (pg_isblank(*cursor))
 							cursor++;	/* skip blanks */
 						/* Rest of line is user name.  Copy it over. */
-						i = 0;
+						j = 0;
-						while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
+						while (*cursor != '\r' && j < IDENT_USERNAME_MAX)
-							ident_user[i++] = *cursor++;
-						ident_user[i] = '\0';
+							ident_user[j++] = *cursor++;
+						ident_user[j] = '\0';
 						return true;
 					}
 				}
 			}
 		}
 	}
 }
 
 
 /*
  *	Talk to the ident server on "remote_addr" and find out who
  *	owns the tcp connection to "local_addr"
  *	If the username is successfully retrieved, check the usermap.
  *
  *	XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if the
  *	latch was set would improve the responsiveness to timeouts/cancellations.
  */
 static int
 ident_inet(hbaPort *port)
 {
 	const SockAddr remote_addr = port->raddr;
 	const SockAddr local_addr = port->laddr;
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 75acea149c7..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2526,48 +2526,48 @@ cost_append(AppendPath *apath, PlannerInfo *root)
 	apath->path.rows = 0;
 
 	if (apath->subpaths == NIL)
 		return;
 
 	if (!apath->path.parallel_aware)
 	{
 		List	   *pathkeys = apath->path.pathkeys;
 
 		if (pathkeys == NIL)
 		{
 			Path	   *subpath = (Path *) linitial(apath->subpaths);
 
 			/*
 			 * For an unordered, non-parallel-aware Append we take the startup
 			 * cost as the startup cost of the first subpath.
 			 */
 			apath->path.startup_cost = subpath->startup_cost;
 
 			/* Compute rows and costs as sums of subplan rows and costs. */
 			foreach(l, apath->subpaths)
 			{
-				Path	   *subpath = (Path *) lfirst(l);
+				Path	   *sub = (Path *) lfirst(l);
 
-				apath->path.rows += subpath->rows;
-				apath->path.total_cost += subpath->total_cost;
+				apath->path.rows += sub->rows;
+				apath->path.total_cost += sub->total_cost;
 			}
 		}
 		else
 		{
 			/*
 			 * For an ordered, non-parallel-aware Append we take the startup
 			 * cost as the sum of the subpath startup costs.  This ensures
 			 * that we don't underestimate the startup cost when a query's
 			 * LIMIT is such that several of the children have to be run to
 			 * satisfy it.  This might be overkill --- another plausible hack
 			 * would be to take the Append's startup cost as the maximum of
 			 * the child startup costs.  But we don't want to risk believing
 			 * that an ORDER BY LIMIT query can be satisfied at small cost
 			 * when the first child has small startup cost but later ones
 			 * don't.  (If we had the ability to deal with nonlinear cost
 			 * interpolation for partial retrievals, we would not need to be
 			 * so conservative about this.)
 			 *
 			 * This case is also different from the above in that we have to
 			 * account for possibly injecting sorts into subpaths that aren't
 			 * natively ordered.
 			 */
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -286,48 +286,48 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
 		{
 			ListCell   *j;
 
 			/*
 			 * We must be able to extract a CTID condition from every
 			 * sub-clause of an OR, or we can't use it.
 			 */
 			foreach(j, ((BoolExpr *) rinfo->orclause)->args)
 			{
 				Node	   *orarg = (Node *) lfirst(j);
 				List	   *sublist;
 
 				/* OR arguments should be ANDs or sub-RestrictInfos */
 				if (is_andclause(orarg))
 				{
 					List	   *andargs = ((BoolExpr *) orarg)->args;
 
 					/* Recurse in case there are sub-ORs */
 					sublist = TidQualFromRestrictInfoList(root, andargs, rel);
 				}
 				else
 				{
-					RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+					RestrictInfo *rinfo2 = castNode(RestrictInfo, orarg);
 
-					Assert(!restriction_is_or_clause(rinfo));
-					sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+					Assert(!restriction_is_or_clause(rinfo2));
+					sublist = TidQualFromRestrictInfo(root, rinfo2, rel);
 				}
 
 				/*
 				 * If nothing found in this arm, we can't do anything with
 				 * this OR clause.
 				 */
 				if (sublist == NIL)
 				{
 					rlst = NIL; /* forget anything we had */
 					break;		/* out of loop over OR args */
 				}
 
 				/*
 				 * OK, continue constructing implicitly-OR'ed result list.
 				 */
 				rlst = list_concat(rlst, sublist);
 			}
 		}
 		else
 		{
 			/* Not an OR clause, so handle base cases */
 			rlst = TidQualFromRestrictInfo(root, rinfo, rel);
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index cf9e0a74dbf..e969f2be3fe 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1975,46 +1975,44 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
  * of rollups, and preparing annotations which will later be filled in with
  * size estimates.
  */
 static grouping_sets_data *
 preprocess_grouping_sets(PlannerInfo *root)
 {
 	Query	   *parse = root->parse;
 	List	   *sets;
 	int			maxref = 0;
 	ListCell   *lc;
 	ListCell   *lc_set;
 	grouping_sets_data *gd = palloc0(sizeof(grouping_sets_data));
 
 	parse->groupingSets = expand_grouping_sets(parse->groupingSets, parse->groupDistinct, -1);
 
 	gd->any_hashable = false;
 	gd->unhashable_refs = NULL;
 	gd->unsortable_refs = NULL;
 	gd->unsortable_sets = NIL;
 
 	if (parse->groupClause)
 	{
-		ListCell   *lc;
-
 		foreach(lc, parse->groupClause)
 		{
 			SortGroupClause *gc = lfirst_node(SortGroupClause, lc);
 			Index		ref = gc->tleSortGroupRef;
 
 			if (ref > maxref)
 				maxref = ref;
 
 			if (!gc->hashable)
 				gd->unhashable_refs = bms_add_member(gd->unhashable_refs, ref);
 
 			if (!OidIsValid(gc->sortop))
 				gd->unsortable_refs = bms_add_member(gd->unsortable_refs, ref);
 		}
 	}
 
 	/* Allocate workspace array for remapping */
 	gd->tleref_to_colnum_map = (int *) palloc((maxref + 1) * sizeof(int));
 
 	/*
 	 * If we have any unsortable sets, we must extract them before trying to
 	 * prepare rollups. Unsortable sets don't go through
@@ -3439,72 +3437,70 @@ get_number_of_groups(PlannerInfo *root,
 					 List *target_list)
 {
 	Query	   *parse = root->parse;
 	double		dNumGroups;
 
 	if (parse->groupClause)
 	{
 		List	   *groupExprs;
 
 		if (parse->groupingSets)
 		{
 			/* Add up the estimates for each grouping set */
 			ListCell   *lc;
 			ListCell   *lc2;
 
 			Assert(gd);			/* keep Coverity happy */
 
 			dNumGroups = 0;
 
 			foreach(lc, gd->rollups)
 			{
 				RollupData *rollup = lfirst_node(RollupData, lc);
-				ListCell   *lc;
+				ListCell   *lc3;
 
 				groupExprs = get_sortgrouplist_exprs(rollup->groupClause,
 													 target_list);
 
 				rollup->numGroups = 0.0;
 
-				forboth(lc, rollup->gsets, lc2, rollup->gsets_data)
+				forboth(lc3, rollup->gsets, lc2, rollup->gsets_data)
 				{
-					List	   *gset = (List *) lfirst(lc);
+					List	   *gset = (List *) lfirst(lc3);
 					GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
 					double		numGroups = estimate_num_groups(root,
 																groupExprs,
 																path_rows,
 																&gset,
 																NULL);
 
 					gs->numGroups = numGroups;
 					rollup->numGroups += numGroups;
 				}
 
 				dNumGroups += rollup->numGroups;
 			}
 
 			if (gd->hash_sets_idx)
 			{
-				ListCell   *lc;
-
 				gd->dNumHashGroups = 0;
 
 				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
 													 target_list);
 
 				forboth(lc, gd->hash_sets_idx, lc2, gd->unsortable_sets)
 				{
 					List	   *gset = (List *) lfirst(lc);
 					GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
 					double		numGroups = estimate_num_groups(root,
 																groupExprs,
 																path_rows,
 																&gset,
 																NULL);
 
 					gs->numGroups = numGroups;
 					gd->dNumHashGroups += numGroups;
 				}
 
 				dNumGroups += gd->dNumHashGroups;
 			}
 		}
@@ -5015,49 +5011,49 @@ create_ordered_paths(PlannerInfo *root,
 										 path,
 										 path->pathtarget,
 										 root->sort_pathkeys, NULL,
 										 &total_groups);
 
 			/* Add projection step if needed */
 			if (path->pathtarget != target)
 				path = apply_projection_to_path(root, ordered_rel,
 												path, target);
 
 			add_path(ordered_rel, path);
 		}
 
 		/*
 		 * Consider incremental sort with a gather merge on partial paths.
 		 *
 		 * We can also skip the entire loop when we only have a single-item
 		 * sort_pathkeys because then we can't possibly have a presorted
 		 * prefix of the list without having the list be fully sorted.
 		 */
 		if (enable_incremental_sort && list_length(root->sort_pathkeys) > 1)
 		{
-			ListCell   *lc;
+			ListCell   *lc2;
 
-			foreach(lc, input_rel->partial_pathlist)
+			foreach(lc2, input_rel->partial_pathlist)
 			{
-				Path	   *input_path = (Path *) lfirst(lc);
+				Path	   *input_path = (Path *) lfirst(lc2);
 				Path	   *sorted_path;
 				bool		is_sorted;
 				int			presorted_keys;
 				double		total_groups;
 
 				/*
 				 * We don't care if this is the cheapest partial path - we
 				 * can't simply skip it, because it may be partially sorted in
 				 * which case we want to consider adding incremental sort
 				 * (instead of full sort, which is what happens above).
 				 */
 
 				is_sorted = pathkeys_count_contained_in(root->sort_pathkeys,
 														input_path->pathkeys,
 														&presorted_keys);
 
 				/* No point in adding incremental sort on fully sorted paths. */
 				if (is_sorted)
 					continue;
 
 				if (presorted_keys == 0)
 					continue;
@@ -7588,58 +7584,58 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
 	rel->reltarget = llast_node(PathTarget, scanjoin_targets);
 
 	/*
 	 * If the relation is partitioned, recursively apply the scan/join target
 	 * to all partitions, and generate brand-new Append paths in which the
 	 * scan/join target is computed below the Append rather than above it.
 	 * Since Append is not projection-capable, that might save a separate
 	 * Result node, and it also is important for partitionwise aggregate.
 	 */
 	if (rel_is_partitioned)
 	{
 		List	   *live_children = NIL;
 		int			i;
 
 		/* Adjust each partition. */
 		i = -1;
 		while ((i = bms_next_member(rel->live_parts, i)) >= 0)
 		{
 			RelOptInfo *child_rel = rel->part_rels[i];
 			AppendRelInfo **appinfos;
 			int			nappinfos;
 			List	   *child_scanjoin_targets = NIL;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			Assert(child_rel != NULL);
 
 			/* Dummy children can be ignored. */
 			if (IS_DUMMY_REL(child_rel))
 				continue;
 
 			/* Translate scan/join targets for this child. */
 			appinfos = find_appinfos_by_relids(root, child_rel->relids,
 											   &nappinfos);
-			foreach(lc, scanjoin_targets)
+			foreach(lc2, scanjoin_targets)
 			{
-				PathTarget *target = lfirst_node(PathTarget, lc);
+				PathTarget *target = lfirst_node(PathTarget, lc2);
 
 				target = copy_pathtarget(target);
 				target->exprs = (List *)
 					adjust_appendrel_attrs(root,
 										   (Node *) target->exprs,
 										   nappinfos, appinfos);
 				child_scanjoin_targets = lappend(child_scanjoin_targets,
 												 target);
 			}
 			pfree(appinfos);
 
 			/* Recursion does the real work. */
 			apply_scanjoin_target_to_paths(root, child_rel,
 										   child_scanjoin_targets,
 										   scanjoin_targets_contain_srfs,
 										   scanjoin_target_parallel_safe,
 										   tlist_same_exprs);
 
 			/* Save non-dummy children for Append paths. */
 			if (!IS_DUMMY_REL(child_rel))
 				live_children = lappend(live_children, child_rel);
 		}
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 71052c841d7..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -639,47 +639,47 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 
 	add_path(result_rel, path);
 
 	/*
 	 * Estimate number of groups.  For now we just assume the output is unique
 	 * --- this is certainly true for the UNION case, and we want worst-case
 	 * estimates anyway.
 	 */
 	result_rel->rows = path->rows;
 
 	/*
 	 * Now consider doing the same thing using the partial paths plus Append
 	 * plus Gather.
 	 */
 	if (partial_paths_valid)
 	{
 		Path	   *ppath;
 		int			parallel_workers = 0;
 
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
-			Path	   *path = lfirst(lc);
+			Path	   *partial_path = lfirst(lc);
 
-			parallel_workers = Max(parallel_workers, path->parallel_workers);
+			parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
 		/*
 		 * If the use of parallel append is permitted, always request at least
 		 * log2(# of children) paths.  We assume it can be useful to have
 		 * extra workers in this case because they will be spread out across
 		 * the children.  The precise formula is just a guess; see
 		 * add_paths_to_append_rel.
 		 */
 		if (enable_parallel_append)
 		{
 			parallel_workers = Max(parallel_workers,
 								   pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
 			parallel_workers = Min(parallel_workers,
 								   max_parallel_workers_per_gather);
 		}
 		Assert(parallel_workers > 0);
 
 		ppath = (Path *)
 			create_append_path(root, result_rel, NIL, partial_pathlist,
 							   NIL, NULL,
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -418,93 +418,93 @@ replace_nestloop_param_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv)
  * while planning the subquery.  So we need not modify the subplan or the
  * PlannerParamItems here.  What we do need to do is add entries to
  * root->curOuterParams to signal the parent nestloop plan node that it must
  * provide these values.  This differs from replace_nestloop_param_var in
  * that the PARAM_EXEC slots to use have already been determined.
  *
  * Note that we also use root->curOuterRels as an implicit parameter for
  * sanity checks.
  */
 void
 process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 {
 	ListCell   *lc;
 
 	foreach(lc, subplan_params)
 	{
 		PlannerParamItem *pitem = lfirst_node(PlannerParamItem, lc);
 
 		if (IsA(pitem->item, Var))
 		{
 			Var		   *var = (Var *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_member(var->varno, root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(var, nlp->paramval));
 					/* Present, so nothing to do */
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
 				nlp->paramno = pitem->paramId;
 				nlp->paramval = copyObject(var);
 				root->curOuterParams = lappend(root->curOuterParams, nlp);
 			}
 		}
 		else if (IsA(pitem->item, PlaceHolderVar))
 		{
 			PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
 							   root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(phv, nlp->paramval));
 					/* Present, so nothing to do */
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
 				nlp->paramno = pitem->paramId;
 				nlp->paramval = (Var *) copyObject(phv);
 				root->curOuterParams = lappend(root->curOuterParams, nlp);
 			}
 		}
 		else
 			elog(ERROR, "unexpected type of subquery parameter");
 	}
 }
 
 /*
  * Identify any NestLoopParams that should be supplied by a NestLoop plan
  * node with the specified lefthand rels.  Remove them from the active
  * root->curOuterParams list and return them as the result list.
  */
 List *
 identify_current_nestloop_params(PlannerInfo *root, Relids leftrelids)
 {
 	List	   *result;
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -520,49 +520,49 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
 		 * likely expecting an un-tweaked function call.
 		 *
 		 * Note: the transformation changes a non-schema-qualified unnest()
 		 * function name into schema-qualified pg_catalog.unnest().  This
 		 * choice is also a bit debatable, but it seems reasonable to force
 		 * use of built-in unnest() when we make this transformation.
 		 */
 		if (IsA(fexpr, FuncCall))
 		{
 			FuncCall   *fc = (FuncCall *) fexpr;
 
 			if (list_length(fc->funcname) == 1 &&
 				strcmp(strVal(linitial(fc->funcname)), "unnest") == 0 &&
 				list_length(fc->args) > 1 &&
 				fc->agg_order == NIL &&
 				fc->agg_filter == NULL &&
 				fc->over == NULL &&
 				!fc->agg_star &&
 				!fc->agg_distinct &&
 				!fc->func_variadic &&
 				coldeflist == NIL)
 			{
-				ListCell   *lc;
+				ListCell   *lc2;
 
-				foreach(lc, fc->args)
+				foreach(lc2, fc->args)
 				{
-					Node	   *arg = (Node *) lfirst(lc);
+					Node	   *arg = (Node *) lfirst(lc2);
 					FuncCall   *newfc;
 
 					last_srf = pstate->p_last_srf;
 
 					newfc = makeFuncCall(SystemFuncName("unnest"),
 										 list_make1(arg),
 										 COERCE_EXPLICIT_CALL,
 										 fc->location);
 
 					newfexpr = transformExpr(pstate, (Node *) newfc,
 											 EXPR_KIND_FROM_FUNCTION);
 
 					/* nodeFunctionscan.c requires SRFs to be at top level */
 					if (pstate->p_last_srf != last_srf &&
 						pstate->p_last_srf != newfexpr)
 						ereport(ERROR,
 								(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 								 errmsg("set-returning functions must appear at top level of FROM"),
 								 parser_errposition(pstate,
 													exprLocation(pstate->p_last_srf))));
 
 					funcexprs = lappend(funcexprs, newfexpr);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index bf698c1fc3f..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1673,45 +1673,44 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 		 *
 		 * XXX We have to do this even when there are no expressions in
 		 * clauses, otherwise find_strongest_dependency may fail for stats
 		 * with expressions (due to lookup of negative value in bitmap). So we
 		 * need to at least filter out those dependencies. Maybe we could do
 		 * it in a cheaper way (if there are no expr clauses, we can just
 		 * discard all negative attnums without any lookups).
 		 */
 		if (unique_exprs_cnt > 0 || stat->exprs != NIL)
 		{
 			int			ndeps = 0;
 
 			for (i = 0; i < deps->ndeps; i++)
 			{
 				bool		skip = false;
 				MVDependency *dep = deps->deps[i];
 				int			j;
 
 				for (j = 0; j < dep->nattributes; j++)
 				{
 					int			idx;
 					Node	   *expr;
-					int			k;
 					AttrNumber	unique_attnum = InvalidAttrNumber;
 					AttrNumber	attnum;
 
 					/* undo the per-statistics offset */
 					attnum = dep->attributes[j];
 
 					/*
 					 * For regular attributes we can simply check if it
 					 * matches any clause. If there's no matching clause, we
 					 * can just ignore it. We need to offset the attnum
 					 * though.
 					 */
 					if (AttrNumberIsForUserDefinedAttr(attnum))
 					{
 						dep->attributes[j] = attnum + attnum_offset;
 
 						if (!bms_is_member(dep->attributes[j], clauses_attnums))
 						{
 							skip = true;
 							break;
 						}
 
@@ -1721,53 +1720,53 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 					/*
 					 * the attnum should be a valid system attnum (-1, -2,
 					 * ...)
 					 */
 					Assert(AttributeNumberIsValid(attnum));
 
 					/*
 					 * For expressions, we need to do two translations. First
 					 * we have to translate the negative attnum to index in
 					 * the list of expressions (in the statistics object).
 					 * Then we need to see if there's a matching clause. The
 					 * index of the unique expression determines the attnum
 					 * (and we offset it).
 					 */
 					idx = -(1 + attnum);
 
 					/* Is the expression index is valid? */
 					Assert((idx >= 0) && (idx < list_length(stat->exprs)));
 
 					expr = (Node *) list_nth(stat->exprs, idx);
 
 					/* try to find the expression in the unique list */
-					for (k = 0; k < unique_exprs_cnt; k++)
+					for (int m = 0; m < unique_exprs_cnt; m++)
 					{
 						/*
 						 * found a matching unique expression, use the attnum
 						 * (derived from index of the unique expression)
 						 */
-						if (equal(unique_exprs[k], expr))
+						if (equal(unique_exprs[m], expr))
 						{
-							unique_attnum = -(k + 1) + attnum_offset;
+							unique_attnum = -(m + 1) + attnum_offset;
 							break;
 						}
 					}
 
 					/*
 					 * Found no matching expression, so we can simply skip
 					 * this dependency, because there's no chance it will be
 					 * fully covered.
 					 */
 					if (unique_attnum == InvalidAttrNumber)
 					{
 						skip = true;
 						break;
 					}
 
 					/* otherwise remap it to the new attnum */
 					dep->attributes[j] = unique_attnum;
 				}
 
 				/* if found a matching dependency, keep it */
 				if (!skip)
 				{
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6b0a8652622..ba9a568389f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1068,44 +1068,61 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 					ExecSecLabelStmt(stmt);
 				break;
 			}
 
 		default:
 			/* All other statement types have event trigger support */
 			ProcessUtilitySlow(pstate, pstmt, queryString,
 							   context, params, queryEnv,
 							   dest, qc);
 			break;
 	}
 
 	free_parsestate(pstate);
 
 	/*
 	 * Make effects of commands visible, for instance so that
 	 * PreCommit_on_commit_actions() can see them (see for example bug
 	 * #15631).
 	 */
 	CommandCounterIncrement();
 }
 
+static ObjectAddress
+TryExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+					ParamListInfo params, QueryCompletion *qc)
+{
+	ObjectAddress address;
+	PG_TRY();
+	{
+		address = ExecRefreshMatView(stmt, queryString, params, qc);
+	}
+	PG_FINALLY();
+	{
+		EventTriggerUndoInhibitCommandCollection();
+	}
+	PG_END_TRY();
+	return address;
+}
+
 /*
  * The "Slow" variant of ProcessUtility should only receive statements
  * supported by the event triggers facility.  Therefore, we always
  * perform the trigger support calls if the context allows it.
  */
 static void
 ProcessUtilitySlow(ParseState *pstate,
 				   PlannedStmt *pstmt,
 				   const char *queryString,
 				   ProcessUtilityContext context,
 				   ParamListInfo params,
 				   QueryEnvironment *queryEnv,
 				   DestReceiver *dest,
 				   QueryCompletion *qc)
 {
 	Node	   *parsetree = pstmt->utilityStmt;
 	bool		isTopLevel = (context == PROCESS_UTILITY_TOPLEVEL);
 	bool		isCompleteQuery = (context != PROCESS_UTILITY_SUBCOMMAND);
 	bool		needCleanup;
 	bool		commandCollected = false;
 	ObjectAddress address;
 	ObjectAddress secondaryObject = InvalidObjectAddress;
@@ -1659,54 +1676,48 @@ ProcessUtilitySlow(ParseState *pstate,
 			case T_CreateSeqStmt:
 				address = DefineSequence(pstate, (CreateSeqStmt *) parsetree);
 				break;
 
 			case T_AlterSeqStmt:
 				address = AlterSequence(pstate, (AlterSeqStmt *) parsetree);
 				break;
 
 			case T_CreateTableAsStmt:
 				address = ExecCreateTableAs(pstate, (CreateTableAsStmt *) parsetree,
 											params, queryEnv, qc);
 				break;
 
 			case T_RefreshMatViewStmt:
 
 				/*
 				 * REFRESH CONCURRENTLY executes some DDL commands internally.
 				 * Inhibit DDL command collection here to avoid those commands
 				 * from showing up in the deparsed command queue.  The refresh
 				 * command itself is queued, which is enough.
 				 */
 				EventTriggerInhibitCommandCollection();
-				PG_TRY();
-				{
-					address = ExecRefreshMatView((RefreshMatViewStmt *) parsetree,
-												 queryString, params, qc);
-				}
-				PG_FINALLY();
-				{
-					EventTriggerUndoInhibitCommandCollection();
-				}
-				PG_END_TRY();
+
+				address = TryExecRefreshMatView((RefreshMatViewStmt *) parsetree,
+											 queryString, params, qc);
+
 				break;
 
 			case T_CreateTrigStmt:
 				address = CreateTrigger((CreateTrigStmt *) parsetree,
 										queryString, InvalidOid, InvalidOid,
 										InvalidOid, InvalidOid, InvalidOid,
 										InvalidOid, NULL, false, false);
 				break;
 
 			case T_CreatePLangStmt:
 				address = CreateProceduralLanguage((CreatePLangStmt *) parsetree);
 				break;
 
 			case T_CreateDomainStmt:
 				address = DefineDomain((CreateDomainStmt *) parsetree);
 				break;
 
 			case T_CreateConversionStmt:
 				address = CreateConversionCommand((CreateConversionStmt *) parsetree);
 				break;
 
 			case T_CreateCastStmt:
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 3026cc24311..2e67a90e516 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -174,54 +174,54 @@ varstr_levenshtein(const char *source, int slen,
 			 * total cost increases by ins_c + del_c for each move right.
 			 */
 			int			slack_d = max_d - min_theo_d;
 			int			best_column = net_inserts < 0 ? -net_inserts : 0;
 
 			stop_column = best_column + (slack_d / (ins_c + del_c)) + 1;
 			if (stop_column > m)
 				stop_column = m + 1;
 		}
 	}
 #endif
 
 	/*
 	 * In order to avoid calling pg_mblen() repeatedly on each character in s,
 	 * we cache all the lengths before starting the main loop -- but if all
 	 * the characters in both strings are single byte, then we skip this and
 	 * use a fast-path in the main loop.  If only one string contains
 	 * multi-byte characters, we still build the array, so that the fast-path
 	 * needn't deal with the case where the array hasn't been initialized.
 	 */
 	if (m != slen || n != tlen)
 	{
-		int			i;
+		int			k;
 		const char *cp = source;
 
 		s_char_len = (int *) palloc((m + 1) * sizeof(int));
-		for (i = 0; i < m; ++i)
+		for (k = 0; k < m; ++k)
 		{
-			s_char_len[i] = pg_mblen(cp);
-			cp += s_char_len[i];
+			s_char_len[k] = pg_mblen(cp);
+			cp += s_char_len[k];
 		}
-		s_char_len[i] = 0;
+		s_char_len[k] = 0;
 	}
 
 	/* One more cell for initialization column and row. */
 	++m;
 	++n;
 
 	/* Previous and current rows of notional array. */
 	prev = (int *) palloc(2 * m * sizeof(int));
 	curr = prev + m;
 
 	/*
 	 * To transform the first i characters of s into the first 0 characters of
 	 * t, we must perform i deletions.
 	 */
 	for (i = START_COLUMN; i < STOP_COLUMN; i++)
 		prev[i] = i * del_c;
 
 	/* Loop through rows of the notional array */
 	for (y = target, j = 1; j < n; j++)
 	{
 		int		   *temp;
 		const char *x = source;
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..8d7b6b58c05 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1628,51 +1628,50 @@ plpgsql_dumptree(PLpgSQL_function *func)
 					{
 						printf("                                  DEFAULT ");
 						dump_expr(var->default_val);
 						printf("\n");
 					}
 					if (var->cursor_explicit_expr != NULL)
 					{
 						if (var->cursor_explicit_argrow >= 0)
 							printf("                                  CURSOR argument row %d\n", var->cursor_explicit_argrow);
 
 						printf("                                  CURSOR IS ");
 						dump_expr(var->cursor_explicit_expr);
 						printf("\n");
 					}
 					if (var->promise != PLPGSQL_PROMISE_NONE)
 						printf("                                  PROMISE %d\n",
 							   (int) var->promise);
 				}
 				break;
 			case PLPGSQL_DTYPE_ROW:
 				{
 					PLpgSQL_row *row = (PLpgSQL_row *) d;
-					int			i;
 
 					printf("ROW %-16s fields", row->refname);
-					for (i = 0; i < row->nfields; i++)
+					for (int j = 0; j < row->nfields; j++)
 					{
-						printf(" %s=var %d", row->fieldnames[i],
-							   row->varnos[i]);
+						printf(" %s=var %d", row->fieldnames[j],
+							   row->varnos[j]);
 					}
 					printf("\n");
 				}
 				break;
 			case PLPGSQL_DTYPE_REC:
 				printf("REC %-16s typoid %u\n",
 					   ((PLpgSQL_rec *) d)->refname,
 					   ((PLpgSQL_rec *) d)->rectypeid);
 				if (((PLpgSQL_rec *) d)->isconst)
 					printf("                                  CONSTANT\n");
 				if (((PLpgSQL_rec *) d)->notnull)
 					printf("                                  NOT NULL\n");
 				if (((PLpgSQL_rec *) d)->default_val != NULL)
 				{
 					printf("                                  DEFAULT ");
 					dump_expr(((PLpgSQL_rec *) d)->default_val);
 					printf("\n");
 				}
 				break;
 			case PLPGSQL_DTYPE_RECFIELD:
 				printf("RECFIELD %-16s of REC %d\n",
 					   ((PLpgSQL_recfield *) d)->fieldname,
#20David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#19)
1 attachment(s)
Re: shadow variables - pg15 edition

On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:

Attached are half of the remainder of what I've written, ready for review.

Thanks for the patches.

I started to do some analysis of the remaining warnings and put them
in the attached spreadsheet. I put each of the remaining warnings into
a category of how I think they should be fixed.

These categories are:

1. "Rescope" (adjust scope of outer variable to move it into a deeper scope)
2. "Rename" (a variable needs to be renamed)
3. "RenameOrScope" (a variable needs to be renamed, or we need to do
something more extreme to rescope it)
4. "Repurpose" (variables have the same purpose and may as well use
the same variable)
5. "Refactor" (fix the code to make it better)
6. "Remove" (variable is not needed)

There's also:
7. "Bug?" (might be a bug)
8. "?" (I don't know)

I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd fixed
another way, and for some you'd done the rescope but put them in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.

I really think #2s should be done last. I'm not as comfortable with
the renaming and we might want to discuss tactics on that. We could
either opt to rename the shadowed or shadowing variable, or both. If
we rename the shadowing variable, then pending patches or forward
patches could use the wrong variable. If we rename the shadowed
variable then it's not impossible that backpatching could go wrong
where the new code intends to reference the outer variable using the
newly named variable, but when that's backpatched it uses the variable
with the same name in the inner scope. Renaming both would make the
problem more obvious. I'm not sure which is best. The answer may
depend on how many lines the variable is in scope for. If it's just
for a few lines then the hunk context would conflict and the committer
would likely notice the issue when resolving the conflict.

For #3, I just couldn't decide the best fix. Many of these could be
moved into an inner scope, but it would require indenting a large
amount of code, e.g. in a switch() statement's "case:" to allow
variables to be declared within the case.

I think probably #4 should be next to do (maybe after #5)

I have some ideas on how to fix the two #5s, so I'm going to go and do that now.

There's only one #6, and I'm not so sure about it yet. The value being
assigned to the variable is the current time, and I'm not sure whether
we can reuse the existing variable, as time may have moved on
sufficiently.

I'll study #7 a bit more. My eyes glazed over a bit from doing all
that analysis, so I might be mistaken about that being a bug.

For #8s. These are the PG_TRY() ones. I see you had a go at fixing
that by moving the nested PG_TRY()s to a helper function. I don't
think that's a good fix. If we were to ever consider making
-Wshadow=compatible-local in a standard build, then we'd basically be
saying that nested PG_TRYs are not allowed. I don't think that'll fly.
I'd rather find a better way to fix those. I see we can't make use of
##__LINE__ in the variable name since PG_TRY()'s friends use the
variables too and they'd be on a different line. We maybe could have
an "ident" parameter in the macro that we ##ident onto the variable
names, but that would break existing code.

The first patch removes 2ndary, "inner" declarations, where that seems
reasonably safe and consistent with existing practice (and probably what the
original authors intended or would have written).

Would you be able to write a patch for #4. I'll do #5 now. You could
do a draft patch for #2 as well, but I think it should be committed
last, if we decide it's a good move to make. It may be worth having
the discussion about if we actually want to run
-Wshadow=compatible-local as a standard build flag before we rename
anything.

David

Attachments:

shadow_analysis.odsapplication/vnd.oasis.opendocument.spreadsheet; name=shadow_analysis.odsDownload
#21Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#20)
Re: shadow variables - pg15 edition

On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:

I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd done
another way and some you'd done the rescope but just put it in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.

This fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor
pg_get_partkeydef_worker().

(Also, I'd mentioned that my fixes for those deliberately re-used the
outer-scope vars, which isn't what you did, and it's why I didn't include them
with the patch for inner-scope).

I really think #2s should be done last. I'm not as comfortable with
the renaming and we might want to discuss tactics on that. We could
either opt to rename the shadowed or shadowing variable, or both. If
we rename the shadowing variable, then pending patches or forward
patches could use the wrong variable. If we rename the shadowed
variable then it's not impossible that backpatching could go wrong
where the new code intends to reference the outer variable using the
newly named variable, but when that's backpatched it uses the variable
with the same name in the inner scope. Renaming both would make the
problem more obvious. I'm not sure which is best. The answer may
depend on how many lines the variable is in scope for. If it's just
for a few lines then the hunk context would conflict and the committer
would likely notice the issue when resolving the conflict.

Yes, the hope is to limit the change to variables that are only used a
couple of times within a few lines. It's also possible that these will break patches in
development, but that's normal for any change at all.

I'll study #7 a bit more. My eyes glazed over a bit from doing all
that analysis, so I might be mistaken about that being a bug.

I reported this last week.
/messages/by-id/20220819211824.GX26426@telsasoft.com

--
Justin

#22David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#21)
Re: shadow variables - pg15 edition

On Thu, 25 Aug 2022 at 02:00, Justin Pryzby <pryzby@telsasoft.com> wrote:

On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:

I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd done
another way and some you'd done the rescope but just put it in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.

This fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor
pg_get_partkeydef_worker().

The latter two can't be fixed in the same way as
pg_get_statisticsobj_worker(), which is why I left them alone. We can
deal with those when getting onto the next category of warnings, which
I believe should be the "Repurpose" category. If you look at the
shadow_analysis spreadsheet then you can see how I've categorised
each. I'm not pretending those are all 100% accurate; in various
cases the choice of category was subjective. My aim here is to fix as many
of the warnings as possible in the safest way possible for the
particular warning. This is why pg_get_statisticsobj_worker() wasn't
fixed in the same pass as pg_get_indexdef_worker() and
pg_get_partkeydef_worker().

David

#23David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#20)
1 attachment(s)
Re: shadow variables - pg15 edition

On Wed, 24 Aug 2022 at 22:47, David Rowley <dgrowleyml@gmail.com> wrote:

5. "Refactor" (fix the code to make it better)

I have some ideas on how to fix the two #5s, so I'm going to go and do that now.

I've attached a patch which I think improves the code in
gistRelocateBuildBuffersOnSplit() so that there's no longer a shadowed
variable. I also benchmarked this method in a tight loop and can
measure no performance change from getting the loop index this way vs
the old way.

This only fixes one of the #5s I mentioned. I ended up scrapping my
idea to fix the shadowed 'i' in get_qual_for_range() as it became too
complex. The idea was to use list_cell_number() to find out how far
we looped in the forboth() loop. It turned out that 'i' was used in
the subsequent loop in "j = i;". The fix just became too complex and I
didn't think it was worth the risk of breaking something just to get
rid of the shadowed 'i'.

David

Attachments:

shadow_refactor_fixes.patchtext/plain; charset=US-ASCII; name=shadow_refactor_fixes.patchDownload
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index eabf746018..c6c7dfe4c2 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -543,8 +543,7 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
 	GISTNodeBuffer *nodeBuffer;
 	BlockNumber blocknum;
 	IndexTuple	itup;
-	int			splitPagesCount = 0,
-				i;
+	int			splitPagesCount = 0;
 	GISTENTRY	entry[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
 	GISTNodeBuffer oldBuf;
@@ -595,11 +594,11 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
 	 * Fill relocation buffers information for node buffers of pages produced
 	 * by split.
 	 */
-	i = 0;
 	foreach(lc, splitinfo)
 	{
 		GISTPageSplitInfo *si = (GISTPageSplitInfo *) lfirst(lc);
 		GISTNodeBuffer *newNodeBuffer;
+		int				i = foreach_current_index(lc);
 
 		/* Decompress parent index tuple of node buffer page. */
 		gistDeCompressAtt(giststate, r,
@@ -618,8 +617,6 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
 
 		relocationBuffersInfos[i].nodeBuffer = newNodeBuffer;
 		relocationBuffersInfos[i].splitinfo = si;
-
-		i++;
 	}
 
 	/*
#24Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#20)
2 attachment(s)
Re: shadow variables - pg15 edition

On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:

On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:

Attached are half of the remainder of what I've written, ready for review.

Thanks for the patches.

4. "Repurpose" (variables have the same purpose and may as well use
the same variable)

Would you be able to write a patch for #4.

The first of the patches that I sent yesterday was all about "repurposed" vars
from outer scope (lc, l, isnull, save_errno), and was 70% of your list of vars
to repurpose.

Here, I've included the rest of your list.

Plus another patch for vars which I'd already written patches to repurpose, but
which aren't classified as "repurpose" on your list.

For subselect.c, you could remove some more "lc" vars and re-use the "l" var
for consistency (but I suppose you won't want that).

--
Justin

Attachments:

v4-reuse.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..a090cada400 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3017,46 +3017,45 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
 	}
 	pgstat_report_wait_end();
 
 	if (save_errno)
 	{
 		/*
 		 * If we fail to make the file, delete it to release disk space
 		 */
 		unlink(tmppath);
 
 		close(fd);
 
 		errno = save_errno;
 
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not write to file \"%s\": %m", tmppath)));
 	}
 
 	pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
 	if (pg_fsync(fd) != 0)
 	{
-		int			save_errno = errno;
-
+		save_errno = errno;
 		close(fd);
 		errno = save_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not fsync file \"%s\": %m", tmppath)));
 	}
 	pgstat_report_wait_end();
 
 	if (close(fd) != 0)
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not close file \"%s\": %m", tmppath)));
 
 	/*
 	 * Now move the segment into place with its final name.  Cope with
 	 * possibility that someone else has created the file while we were
 	 * filling ours: if so, use ours to pre-create a future log segment.
 	 */
 	installed_segno = logsegno;
 
 	/*
 	 * XXX: What should we use as max_segno? We used to use XLOGfileslop when
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..dacc989d855 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -16777,45 +16777,44 @@ PreCommit_on_commit_actions(void)
 					oids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);
 				break;
 			case ONCOMMIT_DROP:
 				oids_to_drop = lappend_oid(oids_to_drop, oc->relid);
 				break;
 		}
 	}
 
 	/*
 	 * Truncate relations before dropping so that all dependencies between
 	 * relations are removed after they are worked on.  Doing it like this
 	 * might be a waste as it is possible that a relation being truncated will
 	 * be dropped anyway due to its parent being dropped, but this makes the
 	 * code more robust because of not having to re-check that the relation
 	 * exists at truncation time.
 	 */
 	if (oids_to_truncate != NIL)
 		heap_truncate(oids_to_truncate);
 
 	if (oids_to_drop != NIL)
 	{
 		ObjectAddresses *targetObjects = new_object_addresses();
-		ListCell   *l;
 
 		foreach(l, oids_to_drop)
 		{
 			ObjectAddress object;
 
 			object.classId = RelationRelationId;
 			object.objectId = lfirst_oid(l);
 			object.objectSubId = 0;
 
 			Assert(!object_address_present(&object, targetObjects));
 
 			add_exact_object_address(&object, targetObjects);
 		}
 
 		/*
 		 * Since this is an automatic drop, rather than one directly initiated
 		 * by the user, we pass the PERFORM_DELETION_INTERNAL flag.
 		 */
 		performMultipleDeletions(targetObjects, DROP_CASCADE,
 								 PERFORM_DELETION_INTERNAL | PERFORM_DELETION_QUIETLY);
 
 #ifdef USE_ASSERT_CHECKING
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -214,46 +214,44 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 		(skip_locked ? VACOPT_SKIP_LOCKED : 0) |
 		(analyze ? VACOPT_ANALYZE : 0) |
 		(freeze ? VACOPT_FREEZE : 0) |
 		(full ? VACOPT_FULL : 0) |
 		(disable_page_skipping ? VACOPT_DISABLE_PAGE_SKIPPING : 0) |
 		(process_toast ? VACOPT_PROCESS_TOAST : 0);
 
 	/* sanity checks on options */
 	Assert(params.options & (VACOPT_VACUUM | VACOPT_ANALYZE));
 	Assert((params.options & VACOPT_VACUUM) ||
 		   !(params.options & (VACOPT_FULL | VACOPT_FREEZE)));
 
 	if ((params.options & VACOPT_FULL) && params.nworkers > 0)
 		ereport(ERROR,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("VACUUM FULL cannot be performed in parallel")));
 
 	/*
 	 * Make sure VACOPT_ANALYZE is specified if any column lists are present.
 	 */
 	if (!(params.options & VACOPT_ANALYZE))
 	{
-		ListCell   *lc;
-
 		foreach(lc, vacstmt->rels)
 		{
 			VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
 
 			if (vrel->va_cols != NIL)
 				ereport(ERROR,
 						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 						 errmsg("ANALYZE option must be specified when a column list is provided")));
 		}
 	}
 
 	/*
 	 * All freeze ages are zero if the FREEZE option is given; otherwise pass
 	 * them as -1 which means to use the default values.
 	 */
 	if (params.options & VACOPT_FREEZE)
 	{
 		params.freeze_min_age = 0;
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
 	}
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -749,45 +749,44 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 			 */
 			if (map == NULL)
 			{
 				/*
 				 * It's safe to reuse these from the partition root, as we
 				 * only process one tuple at a time (therefore we won't
 				 * overwrite needed data in slots), and the results of
 				 * projections are independent of the underlying storage.
 				 * Projections and where clauses themselves don't store state
 				 * / are independent of the underlying storage.
 				 */
 				onconfl->oc_ProjSlot =
 					rootResultRelInfo->ri_onConflict->oc_ProjSlot;
 				onconfl->oc_ProjInfo =
 					rootResultRelInfo->ri_onConflict->oc_ProjInfo;
 				onconfl->oc_WhereClause =
 					rootResultRelInfo->ri_onConflict->oc_WhereClause;
 			}
 			else
 			{
 				List	   *onconflset;
 				List	   *onconflcols;
-				bool		found_whole_row;
 
 				/*
 				 * Translate expressions in onConflictSet to account for
 				 * different attribute numbers.  For that, map partition
 				 * varattnos twice: first to catch the EXCLUDED
 				 * pseudo-relation (INNER_VAR), and second to handle the main
 				 * target relation (firstVarno).
 				 */
 				onconflset = copyObject(node->onConflictSet);
 				if (part_attmap == NULL)
 					part_attmap =
 						build_attrmap_by_name(RelationGetDescr(partrel),
 											  RelationGetDescr(firstResultRel));
 				onconflset = (List *)
 					map_variable_attnos((Node *) onconflset,
 										INNER_VAR, 0,
 										part_attmap,
 										RelationGetForm(partrel)->reltype,
 										&found_whole_row);
 				/* We ignore the value of found_whole_row. */
 				onconflset = (List *)
 					map_variable_attnos((Node *) onconflset,
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index 4b104c4d98a..8b0858e9f5f 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -2043,50 +2043,51 @@ update_grouptailpos(WindowAggState *winstate)
 static TupleTableSlot *
 ExecWindowAgg(PlanState *pstate)
 {
 	WindowAggState *winstate = castNode(WindowAggState, pstate);
 	TupleTableSlot *slot;
 	ExprContext *econtext;
 	int			i;
 	int			numfuncs;
 
 	CHECK_FOR_INTERRUPTS();
 
 	if (winstate->status == WINDOWAGG_DONE)
 		return NULL;
 
 	/*
 	 * Compute frame offset values, if any, during first call (or after a
 	 * rescan).  These are assumed to hold constant throughout the scan; if
 	 * user gives us a volatile expression, we'll only use its initial value.
 	 */
 	if (winstate->all_first)
 	{
 		int			frameOptions = winstate->frameOptions;
-		ExprContext *econtext = winstate->ss.ps.ps_ExprContext;
 		Datum		value;
 		bool		isnull;
 		int16		len;
 		bool		byval;
 
+		econtext = winstate->ss.ps.ps_ExprContext;
+
 		if (frameOptions & FRAMEOPTION_START_OFFSET)
 		{
 			Assert(winstate->startOffset != NULL);
 			value = ExecEvalExprSwitchContext(winstate->startOffset,
 											  econtext,
 											  &isnull);
 			if (isnull)
 				ereport(ERROR,
 						(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
 						 errmsg("frame starting offset must not be null")));
 			/* copy value into query-lifespan context */
 			get_typlenbyval(exprType((Node *) winstate->startOffset->expr),
 							&len, &byval);
 			winstate->startOffsetValue = datumCopy(value, byval, len);
 			if (frameOptions & (FRAMEOPTION_ROWS | FRAMEOPTION_GROUPS))
 			{
 				/* value is known to be int8 */
 				int64		offset = DatumGetInt64(value);
 
 				if (offset < 0)
 					ereport(ERROR,
 							(errcode(ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE),
diff --git a/src/backend/lib/integerset.c b/src/backend/lib/integerset.c
index 5aff292c287..41d3abdb09c 100644
--- a/src/backend/lib/integerset.c
+++ b/src/backend/lib/integerset.c
@@ -546,46 +546,44 @@ intset_update_upper(IntegerSet *intset, int level, intset_node *child,
 
 		intset_update_upper(intset, level + 1, (intset_node *) parent, child_key);
 	}
 }
 
 /*
  * Does the set contain the given value?
  */
 bool
 intset_is_member(IntegerSet *intset, uint64 x)
 {
 	intset_node *node;
 	intset_leaf_node *leaf;
 	int			level;
 	int			itemno;
 	leaf_item  *item;
 
 	/*
 	 * The value might be in the buffer of newly-added values.
 	 */
 	if (intset->num_buffered_values > 0 && x >= intset->buffered_values[0])
 	{
-		int			itemno;
-
 		itemno = intset_binsrch_uint64(x,
 									   intset->buffered_values,
 									   intset->num_buffered_values,
 									   false);
 		if (itemno >= intset->num_buffered_values)
 			return false;
 		else
 			return (intset->buffered_values[itemno] == x);
 	}
 
 	/*
 	 * Start from the root, and walk down the B-tree to find the right leaf
 	 * node.
 	 */
 	if (!intset->root)
 		return false;
 	node = intset->root;
 	for (level = intset->num_levels - 1; level > 0; level--)
 	{
 		intset_internal_node *n = (intset_internal_node *) node;
 
 		Assert(node->level == level);
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2e7330f7bc6..10cd19e6cd9 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1633,46 +1633,44 @@ interpret_ident_response(const char *ident_response,
 			while (pg_isblank(*cursor))
 				cursor++;		/* skip blanks */
 			if (strcmp(response_type, "USERID") != 0)
 				return false;
 			else
 			{
 				/*
 				 * It's a USERID response.  Good.  "cursor" should be pointing
 				 * to the colon that precedes the operating system type.
 				 */
 				if (*cursor != ':')
 					return false;
 				else
 				{
 					cursor++;	/* Go over colon */
 					/* Skip over operating system field. */
 					while (*cursor != ':' && *cursor != '\r')
 						cursor++;
 					if (*cursor != ':')
 						return false;
 					else
 					{
-						int			i;	/* Index into *ident_user */
-
 						cursor++;	/* Go over colon */
 						while (pg_isblank(*cursor))
 							cursor++;	/* skip blanks */
 						/* Rest of line is user name.  Copy it over. */
 						i = 0;
 						while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
 							ident_user[i++] = *cursor++;
 						ident_user[i] = '\0';
 						return true;
 					}
 				}
 			}
 		}
 	}
 }
 
 
 /*
  *	Talk to the ident server on "remote_addr" and find out who
  *	owns the tcp connection to "local_addr"
  *	If the username is successfully retrieved, check the usermap.
  *
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d929ce34171..df86d18a604 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -4757,45 +4757,45 @@ create_final_distinct_paths(PlannerInfo *root, RelOptInfo *input_rel,
 		 * First, if we have any adequately-presorted paths, just stick a
 		 * Unique node on those.  Then consider doing an explicit sort of the
 		 * cheapest input path and Unique'ing that.
 		 *
 		 * When we have DISTINCT ON, we must sort by the more rigorous of
 		 * DISTINCT and ORDER BY, else it won't have the desired behavior.
 		 * Also, if we do have to do an explicit sort, we might as well use
 		 * the more rigorous ordering to avoid a second sort later.  (Note
 		 * that the parser will have ensured that one clause is a prefix of
 		 * the other.)
 		 */
 		List	   *needed_pathkeys;
 
 		if (parse->hasDistinctOn &&
 			list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
 			needed_pathkeys = root->sort_pathkeys;
 		else
 			needed_pathkeys = root->distinct_pathkeys;
 
 		foreach(lc, input_rel->pathlist)
 		{
-			Path	   *path = (Path *) lfirst(lc);
+			path = (Path *) lfirst(lc);
 
 			if (pathkeys_contained_in(needed_pathkeys, path->pathkeys))
 			{
 				add_path(distinct_rel, (Path *)
 						 create_upper_unique_path(root, distinct_rel,
 												  path,
 												  list_length(root->distinct_pathkeys),
 												  numDistinctRows));
 			}
 		}
 
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
 		{
 			needed_pathkeys = root->sort_pathkeys;
 			/* Assert checks that parser didn't mess up... */
 			Assert(pathkeys_contained_in(root->distinct_pathkeys,
 										 needed_pathkeys));
 		}
 		else
 			needed_pathkeys = root->distinct_pathkeys;
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..71052c841d7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -634,45 +634,44 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 	 * For UNION ALL, we just need the Append path.  For UNION, need to add
 	 * node(s) to remove duplicates.
 	 */
 	if (!op->all)
 		path = make_union_unique(op, path, tlist, root);
 
 	add_path(result_rel, path);
 
 	/*
 	 * Estimate number of groups.  For now we just assume the output is unique
 	 * --- this is certainly true for the UNION case, and we want worst-case
 	 * estimates anyway.
 	 */
 	result_rel->rows = path->rows;
 
 	/*
 	 * Now consider doing the same thing using the partial paths plus Append
 	 * plus Gather.
 	 */
 	if (partial_paths_valid)
 	{
 		Path	   *ppath;
-		ListCell   *lc;
 		int			parallel_workers = 0;
 
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
 			Path	   *path = lfirst(lc);
 
 			parallel_workers = Max(parallel_workers, path->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
 		/*
 		 * If the use of parallel append is permitted, always request at least
 		 * log2(# of children) paths.  We assume it can be useful to have
 		 * extra workers in this case because they will be spread out across
 		 * the children.  The precise formula is just a guess; see
 		 * add_paths_to_append_rel.
 		 */
 		if (enable_parallel_append)
 		{
 			parallel_workers = Max(parallel_workers,
 								   pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..bf698c1fc3f 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1246,45 +1246,44 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
 		 * first argument, and pseudoconstant is the second one.
 		 */
 		if (!is_pseudo_constant_clause(lsecond(expr->args)))
 			return false;
 
 		clause_expr = linitial(expr->args);
 
 		/*
 		 * If it's not an "=" operator, just ignore the clause, as it's not
 		 * compatible with functional dependencies. The operator is identified
 		 * simply by looking at which function it uses to estimate
 		 * selectivity. That's a bit strange, but it's what other similar
 		 * places do.
 		 */
 		if (get_oprrest(expr->opno) != F_EQSEL)
 			return false;
 
 		/* OK to proceed with checking "var" */
 	}
 	else if (is_orclause(clause))
 	{
 		BoolExpr   *bool_expr = (BoolExpr *) clause;
-		ListCell   *lc;
 
 		/* start with no expression (we'll use the first match) */
 		*expr = NULL;
 
 		foreach(lc, bool_expr->args)
 		{
 			Node	   *or_expr = NULL;
 
 			/*
 			 * Had we found incompatible expression in the arguments, treat
 			 * the whole expression as incompatible.
 			 */
 			if (!dependency_is_compatible_expression((Node *) lfirst(lc), relid,
 													 statlist, &or_expr))
 				return false;
 
 			if (*expr == NULL)
 				*expr = or_expr;
 
 			/* ensure all the expressions are the same */
 			if (!equal(or_expr, *expr))
 				return false;
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index c0e907d4373..aad79493e86 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -3784,46 +3784,44 @@ advanceConnectionState(TState *thread, CState *st, StatsData *agg)
 						st->estatus = ESTATUS_META_COMMAND_ERROR;
 				}
 
 				/*
 				 * We're now waiting for an SQL command to complete, or
 				 * finished processing a metacommand, or need to sleep, or
 				 * something bad happened.
 				 */
 				Assert(st->state == CSTATE_WAIT_RESULT ||
 					   st->state == CSTATE_END_COMMAND ||
 					   st->state == CSTATE_SLEEP ||
 					   st->state == CSTATE_ABORTED);
 				break;
 
 				/*
 				 * non executed conditional branch
 				 */
 			case CSTATE_SKIP_COMMAND:
 				Assert(!conditional_active(st->cstack));
 				/* quickly skip commands until something to do... */
 				while (true)
 				{
-					Command    *command;
-
 					command = sql_script[st->use_file].commands[st->command];
 
 					/* cannot reach end of script in that state */
 					Assert(command != NULL);
 
 					/*
 					 * if this is conditional related, update conditional
 					 * state
 					 */
 					if (command->type == META_COMMAND &&
 						(command->meta == META_IF ||
 						 command->meta == META_ELIF ||
 						 command->meta == META_ELSE ||
 						 command->meta == META_ENDIF))
 					{
 						switch (conditional_stack_peek(st->cstack))
 						{
 							case IFSTATE_FALSE:
 								if (command->meta == META_IF ||
 									command->meta == META_ELIF)
 								{
 									/* we must evaluate the condition */
@@ -3940,46 +3938,44 @@ advanceConnectionState(TState *thread, CState *st, StatsData *agg)
 				 * instead of CSTATE_START_TX.
 				 */
 			case CSTATE_SLEEP:
 				pg_time_now_lazy(&now);
 				if (now < st->sleep_until)
 					return;		/* still sleeping, nothing to do here */
 				/* Else done sleeping. */
 				st->state = CSTATE_END_COMMAND;
 				break;
 
 				/*
 				 * End of command: record stats and proceed to next command.
 				 */
 			case CSTATE_END_COMMAND:
 
 				/*
 				 * command completed: accumulate per-command execution times
 				 * in thread-local data structure, if per-command latencies
 				 * are requested.
 				 */
 				if (report_per_command)
 				{
-					Command    *command;
-
 					pg_time_now_lazy(&now);
 
 					command = sql_script[st->use_file].commands[st->command];
 					/* XXX could use a mutex here, but we choose not to */
 					addToSimpleStats(&command->stats,
 									 PG_TIME_GET_DOUBLE(now - st->stmt_begin));
 				}
 
 				/* Go ahead with next command, to be executed or skipped */
 				st->command++;
 				st->state = conditional_active(st->cstack) ?
 					CSTATE_START_COMMAND : CSTATE_SKIP_COMMAND;
 				break;
 
 				/*
 				 * Clean up after an error.
 				 */
 			case CSTATE_ERROR:
 				{
 					TStatus		tstatus;
 
 					Assert(st->estatus != ESTATUS_NO_ERROR);
v4-reuse-more.txt (text/plain)
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index eabf7460182..77677150aff 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -615,46 +615,45 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
 		 * empty.
 		 */
 		newNodeBuffer = gistGetNodeBuffer(gfbb, giststate, BufferGetBlockNumber(si->buf), level);
 
 		relocationBuffersInfos[i].nodeBuffer = newNodeBuffer;
 		relocationBuffersInfos[i].splitinfo = si;
 
 		i++;
 	}
 
 	/*
 	 * Loop through all index tuples in the buffer of the page being split,
 	 * moving them to buffers for the new pages.  We try to move each tuple to
 	 * the page that will result in the lowest penalty for the leading column
 	 * or, in the case of a tie, the lowest penalty for the earliest column
 	 * that is not tied.
 	 *
 	 * The page searching logic is very similar to gistchoose().
 	 */
 	while (gistPopItupFromNodeBuffer(gfbb, &oldBuf, &itup))
 	{
 		float		best_penalty[INDEX_MAX_KEYS];
-		int			i,
-					which;
+		int			which;
 		IndexTuple	newtup;
 		RelocationBufferInfo *targetBufferInfo;
 
 		gistDeCompressAtt(giststate, r,
 						  itup, NULL, (OffsetNumber) 0, entry, isnull);
 
 		/* default to using first page (shouldn't matter) */
 		which = 0;
 
 		/*
 		 * best_penalty[j] is the best penalty we have seen so far for column
 		 * j, or -1 when we haven't yet examined column j.  Array entries to
 		 * the right of the first -1 are undefined.
 		 */
 		best_penalty[0] = -1;
 
 		/*
 		 * Loop over possible target pages, looking for one to move this tuple
 		 * to.
 		 */
 		for (i = 0; i < splitPagesCount; i++)
 		{
diff --git a/src/backend/access/hash/hash_xlog.c b/src/backend/access/hash/hash_xlog.c
index 2e68303cbfd..e88213c7425 100644
--- a/src/backend/access/hash/hash_xlog.c
+++ b/src/backend/access/hash/hash_xlog.c
@@ -221,45 +221,44 @@ hash_xlog_add_ovfl_page(XLogReaderState *record)
 		PageSetLSN(leftpage, lsn);
 		MarkBufferDirty(leftbuf);
 	}
 
 	if (BufferIsValid(leftbuf))
 		UnlockReleaseBuffer(leftbuf);
 	UnlockReleaseBuffer(ovflbuf);
 
 	/*
 	 * Note: in normal operation, we'd update the bitmap and meta page while
 	 * still holding lock on the overflow pages.  But during replay it's not
 	 * necessary to hold those locks, since no other index updates can be
 	 * happening concurrently.
 	 */
 	if (XLogRecHasBlockRef(record, 2))
 	{
 		Buffer		mapbuffer;
 
 		if (XLogReadBufferForRedo(record, 2, &mapbuffer) == BLK_NEEDS_REDO)
 		{
 			Page		mappage = (Page) BufferGetPage(mapbuffer);
 			uint32	   *freep = NULL;
-			char	   *data;
 			uint32	   *bitmap_page_bit;
 
 			freep = HashPageGetBitmap(mappage);
 
 			data = XLogRecGetBlockData(record, 2, &datalen);
 			bitmap_page_bit = (uint32 *) data;
 
 			SETBIT(freep, *bitmap_page_bit);
 
 			PageSetLSN(mappage, lsn);
 			MarkBufferDirty(mapbuffer);
 		}
 		if (BufferIsValid(mapbuffer))
 			UnlockReleaseBuffer(mapbuffer);
 	}
 
 	if (XLogRecHasBlockRef(record, 3))
 	{
 		Buffer		newmapbuf;
 
 		newmapbuf = XLogInitBufferForRedo(record, 3);
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index aab8d6fa4e5..3133d1e0585 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6256,45 +6256,45 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 		return multi;
 	}
 
 	/*
 	 * Do a more thorough second pass over the multi to figure out which
 	 * member XIDs actually need to be kept.  Checking the precise status of
 	 * individual members might even show that we don't need to keep anything.
 	 */
 	nnewmembers = 0;
 	newmembers = palloc(sizeof(MultiXactMember) * nmembers);
 	has_lockers = false;
 	update_xid = InvalidTransactionId;
 	update_committed = false;
 	temp_xid_out = *mxid_oldest_xid_out;	/* init for FRM_RETURN_IS_MULTI */
 
 	for (i = 0; i < nmembers; i++)
 	{
 		/*
 		 * Determine whether to keep this member or ignore it.
 		 */
 		if (ISUPDATE_from_mxstatus(members[i].status))
 		{
-			TransactionId xid = members[i].xid;
+			xid = members[i].xid;
 
 			Assert(TransactionIdIsValid(xid));
 			if (TransactionIdPrecedes(xid, relfrozenxid))
 				ereport(ERROR,
 						(errcode(ERRCODE_DATA_CORRUPTED),
 						 errmsg_internal("found update xid %u from before relfrozenxid %u",
 										 xid, relfrozenxid)));
 
 			/*
 			 * It's an update; should we keep it?  If the transaction is known
 			 * aborted or crashed then it's okay to ignore it, otherwise not.
 			 * Note that an updater older than cutoff_xid cannot possibly be
 			 * committed, because HeapTupleSatisfiesVacuum would have returned
 			 * HEAPTUPLE_DEAD and we would not be trying to freeze the tuple.
 			 *
 			 * As with all tuple visibility routines, it's critical to test
 			 * TransactionIdIsInProgress before TransactionIdDidCommit,
 			 * because of race conditions explained in detail in
 			 * heapam_visibility.c.
 			 */
 			if (TransactionIdIsCurrentTransactionId(xid) ||
 				TransactionIdIsInProgress(xid))
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index 8f7d12950e5..ec57f56adf3 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -1595,45 +1595,44 @@ mXactCachePut(MultiXactId multi, int nmembers, MultiXactMember *members)
 		debug_elog2(DEBUG2, "CachePut: initializing memory context");
 		MXactContext = AllocSetContextCreate(TopTransactionContext,
 											 "MultiXact cache context",
 											 ALLOCSET_SMALL_SIZES);
 	}
 
 	entry = (mXactCacheEnt *)
 		MemoryContextAlloc(MXactContext,
 						   offsetof(mXactCacheEnt, members) +
 						   nmembers * sizeof(MultiXactMember));
 
 	entry->multi = multi;
 	entry->nmembers = nmembers;
 	memcpy(entry->members, members, nmembers * sizeof(MultiXactMember));
 
 	/* mXactCacheGetBySet assumes the entries are sorted, so sort them */
 	qsort(entry->members, nmembers, sizeof(MultiXactMember), mxactMemberComparator);
 
 	dlist_push_head(&MXactCache, &entry->node);
 	if (MXactCacheMembers++ >= MAX_CACHE_ENTRIES)
 	{
 		dlist_node *node;
-		mXactCacheEnt *entry;
 
 		node = dlist_tail_node(&MXactCache);
 		dlist_delete(node);
 		MXactCacheMembers--;
 
 		entry = dlist_container(mXactCacheEnt, node, node);
 		debug_elog3(DEBUG2, "CachePut: pruning cached multi %u",
 					entry->multi);
 
 		pfree(entry);
 	}
 }
 
 static char *
 mxstatus_to_string(MultiXactStatus status)
 {
 	switch (status)
 	{
 		case MultiXactStatusForKeyShare:
 			return "keysh";
 		case MultiXactStatusForShare:
 			return "sh";
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index a090cada400..537845cada7 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -4701,45 +4701,44 @@ XLogInitNewTimeline(TimeLineID endTLI, XLogRecPtr endOfLog, TimeLineID newTLI)
 		/*
 		 * Make a copy of the file on the new timeline.
 		 *
 		 * Writing WAL isn't allowed yet, so there are no locking
 		 * considerations. But we should be just as tense as XLogFileInit to
 		 * avoid emplacing a bogus file.
 		 */
 		XLogFileCopy(newTLI, endLogSegNo, endTLI, endLogSegNo,
 					 XLogSegmentOffset(endOfLog, wal_segment_size));
 	}
 	else
 	{
 		/*
 		 * The switch happened at a segment boundary, so just create the next
 		 * segment on the new timeline.
 		 */
 		int			fd;
 
 		fd = XLogFileInit(startLogSegNo, newTLI);
 
 		if (close(fd) != 0)
 		{
-			char		xlogfname[MAXFNAMELEN];
 			int			save_errno = errno;
 
 			XLogFileName(xlogfname, newTLI, startLogSegNo, wal_segment_size);
 			errno = save_errno;
 			ereport(ERROR,
 					(errcode_for_file_access(),
 					 errmsg("could not close file \"%s\": %m", xlogfname)));
 		}
 	}
 
 	/*
 	 * Let's just make real sure there are not .ready or .done flags posted
 	 * for the new segment.
 	 */
 	XLogFileName(xlogfname, newTLI, startLogSegNo, wal_segment_size);
 	XLogArchiveCleanup(xlogfname);
 }
 
 /*
  * Perform cleanup actions at the conclusion of archive recovery.
  */
 static void
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index e7e37146f69..e6fcfc23b93 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -102,45 +102,44 @@ compute_return_type(TypeName *returnType, Oid languageOid,
 	if (typtup)
 	{
 		if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
 		{
 			if (languageOid == SQLlanguageId)
 				ereport(ERROR,
 						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
 						 errmsg("SQL function cannot return shell type %s",
 								TypeNameToString(returnType))));
 			else
 				ereport(NOTICE,
 						(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 						 errmsg("return type %s is only a shell",
 								TypeNameToString(returnType))));
 		}
 		rettype = typeTypeId(typtup);
 		ReleaseSysCache(typtup);
 	}
 	else
 	{
 		char	   *typnam = TypeNameToString(returnType);
 		Oid			namespaceId;
-		AclResult	aclresult;
 		char	   *typname;
 		ObjectAddress address;
 
 		/*
 		 * Only C-coded functions can be I/O functions.  We enforce this
 		 * restriction here mainly to prevent littering the catalogs with
 		 * shell types due to simple typos in user-defined function
 		 * definitions.
 		 */
 		if (languageOid != INTERNALlanguageId &&
 			languageOid != ClanguageId)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_OBJECT),
 					 errmsg("type \"%s\" does not exist", typnam)));
 
 		/* Reject if there's typmod decoration, too */
 		if (returnType->typmods != NIL)
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
 					 errmsg("type modifier cannot be specified for shell type \"%s\"",
 							typnam)));
 
@@ -1093,46 +1092,44 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
 			language = "sql";
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
 					 errmsg("no language specified")));
 	}
 
 	/* Look up the language and validate permissions */
 	languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
 	if (!HeapTupleIsValid(languageTuple))
 		ereport(ERROR,
 				(errcode(ERRCODE_UNDEFINED_OBJECT),
 				 errmsg("language \"%s\" does not exist", language),
 				 (extension_file_exists(language) ?
 				  errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
 
 	languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
 	languageOid = languageStruct->oid;
 
 	if (languageStruct->lanpltrusted)
 	{
 		/* if trusted language, need USAGE privilege */
-		AclResult	aclresult;
-
 		aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
 		if (aclresult != ACLCHECK_OK)
 			aclcheck_error(aclresult, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 	else
 	{
 		/* if untrusted language, must be superuser */
 		if (!superuser())
 			aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 
 	languageValidator = languageStruct->lanvalidator;
 
 	ReleaseSysCache(languageTuple);
 
 	/*
 	 * Only superuser is allowed to create leakproof functions because
 	 * leakproof functions can see tuples which have not yet been filtered out
 	 * by security barrier views or row-level security policies.
 	 */
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 29bc26669b0..a250a33f8cb 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2465,45 +2465,44 @@ _SPI_execute_plan(SPIPlanPtr plan, const SPIExecuteOptions *options,
 	 * there be only one query.
 	 */
 	if (options->must_return_tuples && plan->plancache_list == NIL)
 		ereport(ERROR,
 				(errcode(ERRCODE_SYNTAX_ERROR),
 				 errmsg("empty query does not return tuples")));
 
 	foreach(lc1, plan->plancache_list)
 	{
 		CachedPlanSource *plansource = (CachedPlanSource *) lfirst(lc1);
 		List	   *stmt_list;
 		ListCell   *lc2;
 
 		spicallbackarg.query = plansource->query_string;
 
 		/*
 		 * If this is a one-shot plan, we still need to do parse analysis.
 		 */
 		if (plan->oneshot)
 		{
 			RawStmt    *parsetree = plansource->raw_parse_tree;
 			const char *src = plansource->query_string;
-			List	   *stmt_list;
 
 			/*
 			 * Parameter datatypes are driven by parserSetup hook if provided,
 			 * otherwise we use the fixed parameter list.
 			 */
 			if (parsetree == NULL)
 				stmt_list = NIL;
 			else if (plan->parserSetup != NULL)
 			{
 				Assert(plan->nargs == 0);
 				stmt_list = pg_analyze_and_rewrite_withcb(parsetree,
 														  src,
 														  plan->parserSetup,
 														  plan->parserSetupArg,
 														  _SPI_current->queryEnv);
 			}
 			else
 			{
 				stmt_list = pg_analyze_and_rewrite_fixedparams(parsetree,
 															   src,
 															   plan->argtypes,
 															   plan->nargs,
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..0557e945ca7 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -2169,45 +2169,45 @@ match_clause_to_index(PlannerInfo *root,
 	 * but what if someone builds an expression index on a constant? It's not
 	 * totally unreasonable to do so with a partial index, either.)
 	 */
 	if (rinfo->pseudoconstant)
 		return;
 
 	/*
 	 * If clause can't be used as an indexqual because it must wait till after
 	 * some lower-security-level restriction clause, reject it.
 	 */
 	if (!restriction_is_securely_promotable(rinfo, index->rel))
 		return;
 
 	/* OK, check each index key column for a match */
 	for (indexcol = 0; indexcol < index->nkeycolumns; indexcol++)
 	{
 		IndexClause *iclause;
 		ListCell   *lc;
 
 		/* Ignore duplicates */
 		foreach(lc, clauseset->indexclauses[indexcol])
 		{
-			IndexClause *iclause = (IndexClause *) lfirst(lc);
+			iclause = (IndexClause *) lfirst(lc);
 
 			if (iclause->rinfo == rinfo)
 				return;
 		}
 
 		/* OK, try to match the clause to the index column */
 		iclause = match_clause_to_indexcol(root,
 										   rinfo,
 										   indexcol,
 										   index);
 		if (iclause)
 		{
 			/* Success, so record it */
 			clauseset->indexclauses[indexcol] =
 				lappend(clauseset->indexclauses[indexcol], iclause);
 			clauseset->nonempty = true;
 			return;
 		}
 	}
 }
 
 /*
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2383,45 +2383,45 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 
 				/* We must run finalize_plan on the subquery */
 				rel = find_base_rel(root, sscan->scan.scanrelid);
 				subquery_params = rel->subroot->outer_params;
 				if (gather_param >= 0)
 					subquery_params = bms_add_member(bms_copy(subquery_params),
 													 gather_param);
 				finalize_plan(rel->subroot, sscan->subplan, gather_param,
 							  subquery_params, NULL);
 
 				/* Now we can add its extParams to the parent's params */
 				context.paramids = bms_add_members(context.paramids,
 												   sscan->subplan->extParam);
 				/* We need scan_params too, though */
 				context.paramids = bms_add_members(context.paramids,
 												   scan_params);
 			}
 			break;
 
 		case T_FunctionScan:
 			{
 				FunctionScan *fscan = (FunctionScan *) plan;
 				ListCell   *lc;
 
 				/*
 				 * Call finalize_primnode independently on each function
 				 * expression, so that we can record which params are
 				 * referenced in each, in order to decide which need
 				 * re-evaluating during rescan.
 				 */
 				foreach(lc, fscan->functions)
 				{
 					RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
 					finalize_primnode_context funccontext;
 
 					funccontext = context;
 					funccontext.paramids = NULL;
 
 					finalize_primnode(rtfunc->funcexpr, &funccontext);
 
 					/* remember results for execution */
 					rtfunc->funcparams = funccontext.paramids;
 
 					/* add the function's params to the overall set */
 					context.paramids = bms_add_members(context.paramids,
@@ -2491,158 +2491,148 @@ finalize_plan(PlannerInfo *root, Plan *plan,
 		case T_NamedTuplestoreScan:
 			context.paramids = bms_add_members(context.paramids, scan_params);
 			break;
 
 		case T_ForeignScan:
 			{
 				ForeignScan *fscan = (ForeignScan *) plan;
 
 				finalize_primnode((Node *) fscan->fdw_exprs,
 								  &context);
 				finalize_primnode((Node *) fscan->fdw_recheck_quals,
 								  &context);
 
 				/* We assume fdw_scan_tlist cannot contain Params */
 				context.paramids = bms_add_members(context.paramids,
 												   scan_params);
 			}
 			break;
 
 		case T_CustomScan:
 			{
 				CustomScan *cscan = (CustomScan *) plan;
 				ListCell   *lc;
 
 				finalize_primnode((Node *) cscan->custom_exprs,
 								  &context);
 				/* We assume custom_scan_tlist cannot contain Params */
 				context.paramids =
 					bms_add_members(context.paramids, scan_params);
 
 				/* child nodes if any */
 				foreach(lc, cscan->custom_plans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(lc),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_ModifyTable:
 			{
 				ModifyTable *mtplan = (ModifyTable *) plan;
 
 				/* Force descendant scan nodes to reference epqParam */
 				locally_added_param = mtplan->epqParam;
 				valid_params = bms_add_member(bms_copy(valid_params),
 											  locally_added_param);
 				scan_params = bms_add_member(bms_copy(scan_params),
 											 locally_added_param);
 				finalize_primnode((Node *) mtplan->returningLists,
 								  &context);
 				finalize_primnode((Node *) mtplan->onConflictSet,
 								  &context);
 				finalize_primnode((Node *) mtplan->onConflictWhere,
 								  &context);
 				/* exclRelTlist contains only Vars, doesn't need examination */
 			}
 			break;
 
 		case T_Append:
 			{
-				ListCell   *l;
-
 				foreach(l, ((Append *) plan)->appendplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_MergeAppend:
 			{
-				ListCell   *l;
-
 				foreach(l, ((MergeAppend *) plan)->mergeplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_BitmapAnd:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapAnd *) plan)->bitmapplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_BitmapOr:
 			{
-				ListCell   *l;
-
 				foreach(l, ((BitmapOr *) plan)->bitmapplans)
 				{
 					context.paramids =
 						bms_add_members(context.paramids,
 										finalize_plan(root,
 													  (Plan *) lfirst(l),
 													  gather_param,
 													  valid_params,
 													  scan_params));
 				}
 			}
 			break;
 
 		case T_NestLoop:
 			{
-				ListCell   *l;
-
 				finalize_primnode((Node *) ((Join *) plan)->joinqual,
 								  &context);
 				/* collect set of params that will be passed to right child */
 				foreach(l, ((NestLoop *) plan)->nestParams)
 				{
 					NestLoopParam *nlp = (NestLoopParam *) lfirst(l);
 
 					nestloop_params = bms_add_member(nestloop_params,
 													 nlp->paramno);
 				}
 			}
 			break;
 
 		case T_MergeJoin:
 			finalize_primnode((Node *) ((Join *) plan)->joinqual,
 							  &context);
 			finalize_primnode((Node *) ((MergeJoin *) plan)->mergeclauses,
 							  &context);
 			break;
 
 		case T_HashJoin:
 			finalize_primnode((Node *) ((Join *) plan)->joinqual,
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 091d6e886b6..2720a2508cb 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -4300,46 +4300,45 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
 	int			i,
 				j;
 	PartitionRangeDatum *ldatum,
 			   *udatum;
 	PartitionKey key = RelationGetPartitionKey(parent);
 	Expr	   *keyCol;
 	Const	   *lower_val,
 			   *upper_val;
 	List	   *lower_or_arms,
 			   *upper_or_arms;
 	int			num_or_arms,
 				current_or_arm;
 	ListCell   *lower_or_start_datum,
 			   *upper_or_start_datum;
 	bool		need_next_lower_arm,
 				need_next_upper_arm;
 
 	if (spec->is_default)
 	{
 		List	   *or_expr_args = NIL;
 		PartitionDesc pdesc = RelationGetPartitionDesc(parent, false);
 		Oid		   *inhoids = pdesc->oids;
-		int			nparts = pdesc->nparts,
-					i;
+		int			nparts = pdesc->nparts;
 
 		for (i = 0; i < nparts; i++)
 		{
 			Oid			inhrelid = inhoids[i];
 			HeapTuple	tuple;
 			Datum		datum;
 			bool		isnull;
 			PartitionBoundSpec *bspec;
 
 			tuple = SearchSysCache1(RELOID, inhrelid);
 			if (!HeapTupleIsValid(tuple))
 				elog(ERROR, "cache lookup failed for relation %u", inhrelid);
 
 			datum = SysCacheGetAttr(RELOID, tuple,
 									Anum_pg_class_relpartbound,
 									&isnull);
 			if (isnull)
 				elog(ERROR, "null relpartbound for relation %u", inhrelid);
 
 			bspec = (PartitionBoundSpec *)
 				stringToNode(TextDatumGetCString(datum));
 			if (!IsA(bspec, PartitionBoundSpec))
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 89cf9f9389c..8ac78a6cf38 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2301,45 +2301,44 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
 						 * previous tuple's toast chunks.
 						 */
 						Assert(change->data.tp.clear_toast_afterwards);
 						ReorderBufferToastReset(rb, txn);
 
 						/* We don't need this record anymore. */
 						ReorderBufferReturnChange(rb, specinsert, true);
 						specinsert = NULL;
 					}
 					break;
 
 				case REORDER_BUFFER_CHANGE_TRUNCATE:
 					{
 						int			i;
 						int			nrelids = change->data.truncate.nrelids;
 						int			nrelations = 0;
 						Relation   *relations;
 
 						relations = palloc0(nrelids * sizeof(Relation));
 						for (i = 0; i < nrelids; i++)
 						{
 							Oid			relid = change->data.truncate.relids[i];
-							Relation	relation;
 
 							relation = RelationIdGetRelation(relid);
 
 							if (!RelationIsValid(relation))
 								elog(ERROR, "could not open relation with OID %u", relid);
 
 							if (!RelationIsLogicallyLogged(relation))
 								continue;
 
 							relations[nrelations++] = relation;
 						}
 
 						/* Apply the truncate. */
 						ReorderBufferApplyTruncate(rb, txn, nrelations,
 												   relations, change,
 												   streaming);
 
 						for (i = 0; i < nrelations; i++)
 							RelationClose(relations[i]);
 
 						break;
 					}
diff --git a/src/backend/rewrite/rowsecurity.c b/src/backend/rewrite/rowsecurity.c
index a233dd47585..b2a72374306 100644
--- a/src/backend/rewrite/rowsecurity.c
+++ b/src/backend/rewrite/rowsecurity.c
@@ -805,45 +805,44 @@ add_with_check_options(Relation rel,
 		wco->polname = NULL;
 		wco->cascaded = false;
 
 		if (list_length(permissive_quals) == 1)
 			wco->qual = (Node *) linitial(permissive_quals);
 		else
 			wco->qual = (Node *) makeBoolExpr(OR_EXPR, permissive_quals, -1);
 
 		ChangeVarNodes(wco->qual, 1, rt_index, 0);
 
 		*withCheckOptions = list_append_unique(*withCheckOptions, wco);
 
 		/*
 		 * Now add WithCheckOptions for each of the restrictive policy clauses
 		 * (which will be combined together using AND).  We use a separate
 		 * WithCheckOption for each restrictive policy to allow the policy
 		 * name to be included in error reports if the policy is violated.
 		 */
 		foreach(item, restrictive_policies)
 		{
 			RowSecurityPolicy *policy = (RowSecurityPolicy *) lfirst(item);
 			Expr	   *qual = QUAL_FOR_WCO(policy);
-			WithCheckOption *wco;
 
 			if (qual != NULL)
 			{
 				qual = copyObject(qual);
 				ChangeVarNodes((Node *) qual, 1, rt_index, 0);
 
 				wco = makeNode(WithCheckOption);
 				wco->kind = kind;
 				wco->relname = pstrdup(RelationGetRelationName(rel));
 				wco->polname = pstrdup(policy->policy_name);
 				wco->qual = (Node *) qual;
 				wco->cascaded = false;
 
 				*withCheckOptions = list_append_unique(*withCheckOptions, wco);
 				*hasSubLinks |= policy->hassublinks;
 			}
 		}
 	}
 	else
 	{
 		/*
 		 * If there were no policy clauses to check new data, add a single
diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c
index 1190b8000bc..71a6053b6a0 100644
--- a/src/backend/utils/adt/rangetypes_spgist.c
+++ b/src/backend/utils/adt/rangetypes_spgist.c
@@ -674,73 +674,71 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS)
 			if (minLower)
 			{
 				/*
 				 * If the centroid's lower bound is less than or equal to the
 				 * minimum lower bound, anything in the 3rd and 4th quadrants
 				 * will have an even smaller lower bound, and thus can't
 				 * match.
 				 */
 				if (range_cmp_bounds(typcache, &centroidLower, minLower) <= 0)
 					which &= (1 << 1) | (1 << 2) | (1 << 5);
 			}
 			if (maxLower)
 			{
 				/*
 				 * If the centroid's lower bound is greater than the maximum
 				 * lower bound, anything in the 1st and 2nd quadrants will
 				 * also have a greater than or equal lower bound, and thus
 				 * can't match. If the centroid's lower bound is equal to the
 				 * maximum lower bound, we can still exclude the 1st and 2nd
 				 * quadrants if we're looking for a value strictly greater
 				 * than the maximum.
 				 */
-				int			cmp;
 
 				cmp = range_cmp_bounds(typcache, &centroidLower, maxLower);
 				if (cmp > 0 || (!inclusive && cmp == 0))
 					which &= (1 << 3) | (1 << 4) | (1 << 5);
 			}
 			if (minUpper)
 			{
 				/*
 				 * If the centroid's upper bound is less than or equal to the
 				 * minimum upper bound, anything in the 2nd and 3rd quadrants
 				 * will have an even smaller upper bound, and thus can't
 				 * match.
 				 */
 				if (range_cmp_bounds(typcache, &centroidUpper, minUpper) <= 0)
 					which &= (1 << 1) | (1 << 4) | (1 << 5);
 			}
 			if (maxUpper)
 			{
 				/*
 				 * If the centroid's upper bound is greater than the maximum
 				 * upper bound, anything in the 1st and 4th quadrants will
 				 * also have a greater than or equal upper bound, and thus
 				 * can't match. If the centroid's upper bound is equal to the
 				 * maximum upper bound, we can still exclude the 1st and 4th
 				 * quadrants if we're looking for a value strictly greater
 				 * than the maximum.
 				 */
-				int			cmp;
 
 				cmp = range_cmp_bounds(typcache, &centroidUpper, maxUpper);
 				if (cmp > 0 || (!inclusive && cmp == 0))
 					which &= (1 << 2) | (1 << 3) | (1 << 5);
 			}
 
 			if (which == 0)
 				break;			/* no need to consider remaining conditions */
 		}
 	}
 
 	/* We must descend into the quadrant(s) identified by 'which' */
 	out->nodeNumbers = (int *) palloc(sizeof(int) * in->nNodes);
 	if (needPrevious)
 		out->traversalValues = (void **) palloc(sizeof(void *) * in->nNodes);
 	out->nNodes = 0;
 
 	/*
 	 * Elements of traversalValues should be allocated in
 	 * traversalMemoryContext
 	 */
 	oldCtx = MemoryContextSwitchTo(in->traversalMemoryContext);
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8280711f7ef..9959f6910e9 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1284,45 +1284,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 	idxrelrec = (Form_pg_class) GETSTRUCT(ht_idxrel);
 
 	/*
 	 * Fetch the pg_am tuple of the index' access method
 	 */
 	ht_am = SearchSysCache1(AMOID, ObjectIdGetDatum(idxrelrec->relam));
 	if (!HeapTupleIsValid(ht_am))
 		elog(ERROR, "cache lookup failed for access method %u",
 			 idxrelrec->relam);
 	amrec = (Form_pg_am) GETSTRUCT(ht_am);
 
 	/* Fetch the index AM's API struct */
 	amroutine = GetIndexAmRoutine(amrec->amhandler);
 
 	/*
 	 * Get the index expressions, if any.  (NOTE: we do not use the relcache
 	 * versions of the expressions and predicate, because we want to display
 	 * non-const-folded expressions.)
 	 */
 	if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
 									 Anum_pg_index_indexprs, &isnull);
 		Assert(!isnull);
 		exprsString = TextDatumGetCString(exprsDatum);
 		indexprs = (List *) stringToNode(exprsString);
 		pfree(exprsString);
 	}
 	else
 		indexprs = NIL;
 
 	indexpr_item = list_head(indexprs);
 
 	context = deparse_context_for(get_relation_name(indrelid), indrelid);
 
 	/*
 	 * Start the index definition.  Note that the index's name should never be
 	 * schema-qualified, but the indexed rel's name may be.
 	 */
 	initStringInfo(&buf);
 
@@ -1481,45 +1480,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 		 */
 		if (showTblSpc)
 		{
 			Oid			tblspc;
 
 			tblspc = get_rel_tablespace(indexrelid);
 			if (OidIsValid(tblspc))
 			{
 				if (isConstraint)
 					appendStringInfoString(&buf, " USING INDEX");
 				appendStringInfo(&buf, " TABLESPACE %s",
 								 quote_identifier(get_tablespace_name(tblspc)));
 			}
 		}
 
 		/*
 		 * If it's a partial index, decompile and append the predicate
 		 */
 		if (!heap_attisnull(ht_idx, Anum_pg_index_indpred, NULL))
 		{
 			Node	   *node;
 			Datum		predDatum;
-			bool		isnull;
 			char	   *predString;
 
 			/* Convert text string to node tree */
 			predDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
 										Anum_pg_index_indpred, &isnull);
 			Assert(!isnull);
 			predString = TextDatumGetCString(predDatum);
 			node = (Node *) stringToNode(predString);
 			pfree(predString);
 
 			/* Deparse */
 			str = deparse_expression_pretty(node, context, false, false,
 											prettyFlags, 0);
 			if (isConstraint)
 				appendStringInfo(&buf, " WHERE (%s)", str);
 			else
 				appendStringInfo(&buf, " WHERE %s", str);
 		}
 	}
 
 	/* Clean up */
 	ReleaseSysCache(ht_idx);
@@ -1926,45 +1924,44 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
 	Assert(form->partrelid == relid);
 
 	/* Must get partclass and partcollation the hard way */
 	datum = SysCacheGetAttr(PARTRELID, tuple,
 							Anum_pg_partitioned_table_partclass, &isnull);
 	Assert(!isnull);
 	partclass = (oidvector *) DatumGetPointer(datum);
 
 	datum = SysCacheGetAttr(PARTRELID, tuple,
 							Anum_pg_partitioned_table_partcollation, &isnull);
 	Assert(!isnull);
 	partcollation = (oidvector *) DatumGetPointer(datum);
 
 
 	/*
 	 * Get the expressions, if any.  (NOTE: we do not use the relcache
 	 * versions of the expressions, because we want to display
 	 * non-const-folded expressions.)
 	 */
 	if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
 	{
 		Datum		exprsDatum;
-		bool		isnull;
 		char	   *exprsString;
 
 		exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
 									 Anum_pg_partitioned_table_partexprs, &isnull);
 		Assert(!isnull);
 		exprsString = TextDatumGetCString(exprsDatum);
 		partexprs = (List *) stringToNode(exprsString);
 
 		if (!IsA(partexprs, List))
 			elog(ERROR, "unexpected node type found in partexprs: %d",
 				 (int) nodeTag(partexprs));
 
 		pfree(exprsString);
 	}
 	else
 		partexprs = NIL;
 
 	partexpr_item = list_head(partexprs);
 	context = deparse_context_for(get_relation_name(relid), relid);
 
 	initStringInfo(&buf);
 
#25David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#24)
1 attachment(s)
Re: shadow variables - pg15 edition

On Thu, 25 Aug 2022 at 14:08, Justin Pryzby <pryzby@telsasoft.com> wrote:

Here, I've included the rest of your list.

OK, I've gone through v3-remove-var-declarations.txt, v4-reuse.txt, and
v4-reuse-more.txt and committed most of what you had, removing a few
that I thought should be renames instead.

I also added some additional ones after reprocessing the RenameOrScope
category from the spreadsheet.

With some minor adjustments to a few of yours, I pushed what I came up
with.

David

Attachments:

shadow_analysis.odsapplication/vnd.oasis.opendocument.spreadsheet; name=shadow_analysis.odsDownload
#26David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#23)
Re: shadow variables - pg15 edition

On Thu, 25 Aug 2022 at 13:46, David Rowley <dgrowleyml@gmail.com> wrote:

I've attached a patch which I think improves the code in
gistRelocateBuildBuffersOnSplit() so that there's no longer a shadowed
variable. I also benchmarked this method in a tight loop and can
measure no performance change from getting the loop index this way vs
the old way.

I've now pushed this patch too.

David

#27Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#20)
1 attachment(s)
Re: shadow variables - pg15 edition

On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:

I really think #2s should be done last. I'm not as comfortable with
the renaming and we might want to discuss tactics on that. We could
either opt to rename the shadowed or shadowing variable, or both. If
we rename the shadowing variable, then pending patches or forward
patches could use the wrong variable. If we rename the shadowed
variable then it's not impossible that backpatching could go wrong
where the new code intends to reference the outer variable using the
newly named variable, but when that's backpatched it uses the variable
with the same name in the inner scope. Renaming both would make the
problem more obvious.

The most *likely* outcome of renaming the *outer* variable is that
*every* cherry-pick involving that variable fails to compile,
which is an *obvious* failure (good) but also kind of annoying if it
would have worked fine without the rename. I think most of the renames
should be applied to the inner var, because it's of narrower scope and
more likely to cause a conflict (good) rather than to appear to apply
cleanly but then misbehave. But it seems reasonable to consider
renaming both if the inner scope is longer than a handful of lines.

Would you be able to write a patch for #4. I'll do #5 now. You could
do a draft patch for #2 as well, but I think it should be committed
last, if we decide it's a good move to make. It may be worth having
the discussion about if we actually want to run
-Wshadow=compatible-local as a standard build flag before we rename
anything.

I'm afraid the discussion about default flags would distract from fixing
the individual warnings, which themselves preclude use of the flag by
individual developers, or the buildfarm, even as a local setting.

It can't be enabled until *all* the shadows are gone, due to -Werror on
the buildfarm and cirrusci. Unless perhaps we used -Wno-error=shadow.
I suppose we're only talking about enabling it for gcc?

The biggest benefit is if we fix *all* the local shadow vars, since that
allows someone to make use of the option and thereby avoid future such
issues. Enabling the option could conceivably avoid issues when
cherry-picking into a back branch - if an inner var is re-introduced
during conflict resolution, then a new warning would be issued, and
hopefully the developer would look more closely.

Would you check if any of these changes are good enough?

--
Justin

Attachments:

v5.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5887166061a..8a06b73948d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6256,45 +6256,45 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 		return multi;
 	}
 
 	/*
 	 * Do a more thorough second pass over the multi to figure out which
 	 * member XIDs actually need to be kept.  Checking the precise status of
 	 * individual members might even show that we don't need to keep anything.
 	 */
 	nnewmembers = 0;
 	newmembers = palloc(sizeof(MultiXactMember) * nmembers);
 	has_lockers = false;
 	update_xid = InvalidTransactionId;
 	update_committed = false;
 	temp_xid_out = *mxid_oldest_xid_out;	/* init for FRM_RETURN_IS_MULTI */
 
 	for (i = 0; i < nmembers; i++)
 	{
 		/*
 		 * Determine whether to keep this member or ignore it.
 		 */
 		if (ISUPDATE_from_mxstatus(members[i].status))
 		{
-			TransactionId xid = members[i].xid;
+			xid = members[i].xid;
 
 			Assert(TransactionIdIsValid(xid));
 			if (TransactionIdPrecedes(xid, relfrozenxid))
 				ereport(ERROR,
 						(errcode(ERRCODE_DATA_CORRUPTED),
 						 errmsg_internal("found update xid %u from before relfrozenxid %u",
 										 xid, relfrozenxid)));
 
 			/*
 			 * It's an update; should we keep it?  If the transaction is known
 			 * aborted or crashed then it's okay to ignore it, otherwise not.
 			 * Note that an updater older than cutoff_xid cannot possibly be
 			 * committed, because HeapTupleSatisfiesVacuum would have returned
 			 * HEAPTUPLE_DEAD and we would not be trying to freeze the tuple.
 			 *
 			 * As with all tuple visibility routines, it's critical to test
 			 * TransactionIdIsInProgress before TransactionIdDidCommit,
 			 * because of race conditions explained in detail in
 			 * heapam_visibility.c.
 			 */
 			if (TransactionIdIsCurrentTransactionId(xid) ||
 				TransactionIdIsInProgress(xid))
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1799,57 +1799,57 @@ heap_drop_with_catalog(Oid relid)
 	rel = relation_open(relid, AccessExclusiveLock);
 
 	/*
 	 * There can no longer be anyone *else* touching the relation, but we
 	 * might still have open queries or cursors, or pending trigger events, in
 	 * our own session.
 	 */
 	CheckTableNotInUse(rel, "DROP TABLE");
 
 	/*
 	 * This effectively deletes all rows in the table, and may be done in a
 	 * serializable transaction.  In that case we must record a rw-conflict in
 	 * to this transaction from each transaction holding a predicate lock on
 	 * the table.
 	 */
 	CheckTableForSerializableConflictIn(rel);
 
 	/*
 	 * Delete pg_foreign_table tuple first.
 	 */
 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
 	{
-		Relation	rel;
-		HeapTuple	tuple;
+		Relation	pg_foreign_table;
+		HeapTuple	foreigntuple;
 
-		rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+		pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-		tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-		if (!HeapTupleIsValid(tuple))
+		foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(foreigntuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-		CatalogTupleDelete(rel, &tuple->t_self);
+		CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
 
-		ReleaseSysCache(tuple);
-		table_close(rel, RowExclusiveLock);
+		ReleaseSysCache(foreigntuple);
+		table_close(pg_foreign_table, RowExclusiveLock);
 	}
 
 	/*
 	 * If a partitioned table, delete the pg_partitioned_table tuple.
 	 */
 	if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 		RemovePartitionKeyByRelId(relid);
 
 	/*
 	 * If the relation being dropped is the default partition itself,
 	 * invalidate its entry in pg_partitioned_table.
 	 */
 	if (relid == defaultPartOid)
 		update_default_partition_oid(parentOid, InvalidOid);
 
 	/*
 	 * Schedule unlinking of the relation's physical files at commit.
 	 */
 	if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
 		RelationDropStorage(rel);
 
 	/* ensure that stats are dropped if transaction commits */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -87,70 +87,70 @@ parse_publication_options(ParseState *pstate,
 {
 	ListCell   *lc;
 
 	*publish_given = false;
 	*publish_via_partition_root_given = false;
 
 	/* defaults */
 	pubactions->pubinsert = true;
 	pubactions->pubupdate = true;
 	pubactions->pubdelete = true;
 	pubactions->pubtruncate = true;
 	*publish_via_partition_root = false;
 
 	/* Parse options */
 	foreach(lc, options)
 	{
 		DefElem    *defel = (DefElem *) lfirst(lc);
 
 		if (strcmp(defel->defname, "publish") == 0)
 		{
 			char	   *publish;
 			List	   *publish_list;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			if (*publish_given)
 				errorConflictingDefElem(defel, pstate);
 
 			/*
 			 * If publish option was given only the explicitly listed actions
 			 * should be published.
 			 */
 			pubactions->pubinsert = false;
 			pubactions->pubupdate = false;
 			pubactions->pubdelete = false;
 			pubactions->pubtruncate = false;
 
 			*publish_given = true;
 			publish = defGetString(defel);
 
 			if (!SplitIdentifierString(publish, ',', &publish_list))
 				ereport(ERROR,
 						(errcode(ERRCODE_SYNTAX_ERROR),
 						 errmsg("invalid list syntax for \"publish\" option")));
 
 			/* Process the option list. */
-			foreach(lc, publish_list)
+			foreach(lc2, publish_list)
 			{
-				char	   *publish_opt = (char *) lfirst(lc);
+				char	   *publish_opt = (char *) lfirst(lc2);
 
 				if (strcmp(publish_opt, "insert") == 0)
 					pubactions->pubinsert = true;
 				else if (strcmp(publish_opt, "update") == 0)
 					pubactions->pubupdate = true;
 				else if (strcmp(publish_opt, "delete") == 0)
 					pubactions->pubdelete = true;
 				else if (strcmp(publish_opt, "truncate") == 0)
 					pubactions->pubtruncate = true;
 				else
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
 							 errmsg("unrecognized \"publish\" value: \"%s\"", publish_opt)));
 			}
 		}
 		else if (strcmp(defel->defname, "publish_via_partition_root") == 0)
 		{
 			if (*publish_via_partition_root_given)
 				errorConflictingDefElem(defel, pstate);
 			*publish_via_partition_root_given = true;
 			*publish_via_partition_root = defGetBoolean(defel);
 		}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index dacc989d855..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10204,45 +10204,45 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
 	foreach(cell, clone)
 	{
 		Oid			parentConstrOid = lfirst_oid(cell);
 		Form_pg_constraint constrForm;
 		Relation	pkrel;
 		HeapTuple	tuple;
 		int			numfks;
 		AttrNumber	conkey[INDEX_MAX_KEYS];
 		AttrNumber	mapped_conkey[INDEX_MAX_KEYS];
 		AttrNumber	confkey[INDEX_MAX_KEYS];
 		Oid			conpfeqop[INDEX_MAX_KEYS];
 		Oid			conppeqop[INDEX_MAX_KEYS];
 		Oid			conffeqop[INDEX_MAX_KEYS];
 		int			numfkdelsetcols;
 		AttrNumber	confdelsetcols[INDEX_MAX_KEYS];
 		Constraint *fkconstraint;
 		bool		attached;
 		Oid			indexOid;
 		Oid			constrOid;
 		ObjectAddress address,
 					referenced;
-		ListCell   *cell;
+		ListCell   *lc;
 		Oid			insertTriggerOid,
 					updateTriggerOid;
 
 		tuple = SearchSysCache1(CONSTROID, parentConstrOid);
 		if (!HeapTupleIsValid(tuple))
 			elog(ERROR, "cache lookup failed for constraint %u",
 				 parentConstrOid);
 		constrForm = (Form_pg_constraint) GETSTRUCT(tuple);
 
 		/* Don't clone constraints whose parents are being cloned */
 		if (list_member_oid(clone, constrForm->conparentid))
 		{
 			ReleaseSysCache(tuple);
 			continue;
 		}
 
 		/*
 		 * Need to prevent concurrent deletions.  If pkrel is a partitioned
 		 * relation, that means to lock all partitions.
 		 */
 		pkrel = table_open(constrForm->confrelid, ShareRowExclusiveLock);
 		if (pkrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
@@ -10257,47 +10257,47 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
 		/*
 		 * Get the "check" triggers belonging to the constraint to pass as
 		 * parent OIDs for similar triggers that will be created on the
 		 * partition in addFkRecurseReferencing().  They are also passed to
 		 * tryAttachPartitionForeignKey() below to simply assign as parents to
 		 * the partition's existing "check" triggers, that is, if the
 		 * corresponding constraints is deemed attachable to the parent
 		 * constraint.
 		 */
 		GetForeignKeyCheckTriggers(trigrel, constrForm->oid,
 								   constrForm->confrelid, constrForm->conrelid,
 								   &insertTriggerOid, &updateTriggerOid);
 
 		/*
 		 * Before creating a new constraint, see whether any existing FKs are
 		 * fit for the purpose.  If one is, attach the parent constraint to
 		 * it, and don't clone anything.  This way we avoid the expensive
 		 * verification step and don't end up with a duplicate FK, and we
 		 * don't need to recurse to partitions for this constraint.
 		 */
 		attached = false;
-		foreach(cell, partFKs)
+		foreach(lc, partFKs)
 		{
-			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
 
 			if (tryAttachPartitionForeignKey(fk,
 											 RelationGetRelid(partRel),
 											 parentConstrOid,
 											 numfks,
 											 mapped_conkey,
 											 confkey,
 											 conpfeqop,
 											 insertTriggerOid,
 											 updateTriggerOid,
 											 trigrel))
 			{
 				attached = true;
 				table_close(pkrel, NoLock);
 				break;
 			}
 		}
 		if (attached)
 		{
 			ReleaseSysCache(tuple);
 			continue;
 		}
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 7661e004a93..b0a9e7d7664 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1707,47 +1707,47 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 								NULL, 1, &key);
 	while (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
 	{
 		Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tuple);
 		Relation	partitionRel;
 
 		if (tgform->tgparentid != parentTriggerOid)
 			continue;			/* not our trigger */
 
 		partitionRel = table_open(partitionId, NoLock);
 
 		/* Rename the trigger on this partition */
 		renametrig_internal(tgrel, partitionRel, tuple, newname, expected_name);
 
 		/* And if this relation is partitioned, recurse to its partitions */
 		if (partitionRel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 		{
 			PartitionDesc partdesc = RelationGetPartitionDesc(partitionRel,
 															  true);
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
 		table_close(partitionRel, NoLock);
 
 		/* There should be at most one matching tuple */
 		break;
 	}
 	systable_endscan(tgscan);
 }
 
 /*
  * EnableDisableTrigger()
  *
  *	Called by ALTER TABLE ENABLE/DISABLE [ REPLICA | ALWAYS ] TRIGGER
  *	to change 'tgenabled' field for the specified trigger(s)
  *
  * rel: relation to process (caller must hold suitable lock on it)
  * tgname: trigger to process, or NULL to scan all triggers
  * fires_when: new value for tgenabled field. In addition to generic
  *			   enablement/disablement, this also defines when the trigger
  *			   should be fired in session replication roles.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 933c3049016..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -3168,45 +3168,44 @@ hashagg_reset_spill_state(AggState *aggstate)
 AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
 	AggState   *aggstate;
 	AggStatePerAgg peraggs;
 	AggStatePerTrans pertransstates;
 	AggStatePerGroup *pergroups;
 	Plan	   *outerPlan;
 	ExprContext *econtext;
 	TupleDesc	scanDesc;
 	int			max_aggno;
 	int			max_transno;
 	int			numaggrefs;
 	int			numaggs;
 	int			numtrans;
 	int			phase;
 	int			phaseidx;
 	ListCell   *l;
 	Bitmapset  *all_grouped_cols = NULL;
 	int			numGroupingSets = 1;
 	int			numPhases;
 	int			numHashes;
-	int			i = 0;
 	int			j = 0;
 	bool		use_hashing = (node->aggstrategy == AGG_HASHED ||
 							   node->aggstrategy == AGG_MIXED);
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
 
 	/*
 	 * create state structure
 	 */
 	aggstate = makeNode(AggState);
 	aggstate->ss.ps.plan = (Plan *) node;
 	aggstate->ss.ps.state = estate;
 	aggstate->ss.ps.ExecProcNode = ExecAgg;
 
 	aggstate->aggs = NIL;
 	aggstate->numaggs = 0;
 	aggstate->numtrans = 0;
 	aggstate->aggstrategy = node->aggstrategy;
 	aggstate->aggsplit = node->aggsplit;
 	aggstate->maxsets = 0;
 	aggstate->projected_set = -1;
@@ -3259,45 +3258,45 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->numphases = numPhases;
 
 	aggstate->aggcontexts = (ExprContext **)
 		palloc0(sizeof(ExprContext *) * numGroupingSets);
 
 	/*
 	 * Create expression contexts.  We need three or more, one for
 	 * per-input-tuple processing, one for per-output-tuple processing, one
 	 * for all the hashtables, and one for each grouping set.  The per-tuple
 	 * memory context of the per-grouping-set ExprContexts (aggcontexts)
 	 * replaces the standalone memory context formerly used to hold transition
 	 * values.  We cheat a little by using ExecAssignExprContext() to build
 	 * all of them.
 	 *
 	 * NOTE: the details of what is stored in aggcontexts and what is stored
 	 * in the regular per-query memory context are driven by a simple
 	 * decision: we want to reset the aggcontext at group boundaries (if not
 	 * hashing) and in ExecReScanAgg to recover no-longer-wanted space.
 	 */
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 	aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
 
-	for (i = 0; i < numGroupingSets; ++i)
+	for (int i = 0; i < numGroupingSets; ++i)
 	{
 		ExecAssignExprContext(estate, &aggstate->ss.ps);
 		aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
 	}
 
 	if (use_hashing)
 		aggstate->hashcontext = CreateWorkExprContext(estate);
 
 	ExecAssignExprContext(estate, &aggstate->ss.ps);
 
 	/*
 	 * Initialize child nodes.
 	 *
 	 * If we are doing a hashed aggregation then the child plan does not need
 	 * to handle REWIND efficiently; see ExecReScanAgg.
 	 */
 	if (node->aggstrategy == AGG_HASHED)
 		eflags &= ~EXEC_FLAG_REWIND;
 	outerPlan = outerPlan(node);
 	outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
 	/*
@@ -3399,75 +3398,76 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		Agg		   *aggnode;
 		Sort	   *sortnode;
 
 		if (phaseidx > 0)
 		{
 			aggnode = list_nth_node(Agg, node->chain, phaseidx - 1);
 			sortnode = castNode(Sort, outerPlan(aggnode));
 		}
 		else
 		{
 			aggnode = node;
 			sortnode = NULL;
 		}
 
 		Assert(phase <= 1 || sortnode);
 
 		if (aggnode->aggstrategy == AGG_HASHED
 			|| aggnode->aggstrategy == AGG_MIXED)
 		{
 			AggStatePerPhase phasedata = &aggstate->phases[0];
 			AggStatePerHash perhash;
 			Bitmapset  *cols = NULL;
+			int			setno = phasedata->numsets++;
 
 			Assert(phase == 0);
-			i = phasedata->numsets++;
-			perhash = &aggstate->perhash[i];
+			perhash = &aggstate->perhash[setno];
 
 			/* phase 0 always points to the "real" Agg in the hash case */
 			phasedata->aggnode = node;
 			phasedata->aggstrategy = node->aggstrategy;
 
 			/* but the actual Agg node representing this hash is saved here */
 			perhash->aggnode = aggnode;
 
-			phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+			phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
 
 			for (j = 0; j < aggnode->numCols; ++j)
 				cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
-			phasedata->grouped_cols[i] = cols;
+			phasedata->grouped_cols[setno] = cols;
 
 			all_grouped_cols = bms_add_members(all_grouped_cols, cols);
 			continue;
 		}
 		else
 		{
 			AggStatePerPhase phasedata = &aggstate->phases[++phase];
 			int			num_sets;
 
 			phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
 
 			if (num_sets)
 			{
+				int i;
 				phasedata->gset_lengths = palloc(num_sets * sizeof(int));
 				phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
 
 				i = 0;
 				foreach(l, aggnode->groupingSets)
 				{
 					int			current_length = list_length(lfirst(l));
 					Bitmapset  *cols = NULL;
 
 					/* planner forces this to be correct */
 					for (j = 0; j < current_length; ++j)
 						cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
 					phasedata->grouped_cols[i] = cols;
 					phasedata->gset_lengths[i] = current_length;
 
 					++i;
 				}
 
 				all_grouped_cols = bms_add_members(all_grouped_cols,
 												   phasedata->grouped_cols[0]);
 			}
@@ -3515,71 +3515,73 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 				/* and for all grouped columns, unless already computed */
 				if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL)
 				{
 					phasedata->eqfunctions[aggnode->numCols - 1] =
 						execTuplesMatchPrepare(scanDesc,
 											   aggnode->numCols,
 											   aggnode->grpColIdx,
 											   aggnode->grpOperators,
 											   aggnode->grpCollations,
 											   (PlanState *) aggstate);
 				}
 			}
 
 			phasedata->aggnode = aggnode;
 			phasedata->aggstrategy = aggnode->aggstrategy;
 			phasedata->sortnode = sortnode;
 		}
 	}
 
 	/*
 	 * Convert all_grouped_cols to a descending-order list.
 	 */
-	i = -1;
-	while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
-		aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	{
+		int i = -1;
+		while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+			aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+	}
 
 	/*
 	 * Set up aggregate-result storage in the output expr context, and also
 	 * allocate my private per-agg working storage
 	 */
 	econtext = aggstate->ss.ps.ps_ExprContext;
 	econtext->ecxt_aggvalues = (Datum *) palloc0(sizeof(Datum) * numaggs);
 	econtext->ecxt_aggnulls = (bool *) palloc0(sizeof(bool) * numaggs);
 
 	peraggs = (AggStatePerAgg) palloc0(sizeof(AggStatePerAggData) * numaggs);
 	pertransstates = (AggStatePerTrans) palloc0(sizeof(AggStatePerTransData) * numtrans);
 
 	aggstate->peragg = peraggs;
 	aggstate->pertrans = pertransstates;
 
 
 	aggstate->all_pergroups =
 		(AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
 									 * (numGroupingSets + numHashes));
 	pergroups = aggstate->all_pergroups;
 
 	if (node->aggstrategy != AGG_HASHED)
 	{
-		for (i = 0; i < numGroupingSets; i++)
+		for (int i = 0; i < numGroupingSets; i++)
 		{
 			pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
 													  * numaggs);
 		}
 
 		aggstate->pergroups = pergroups;
 		pergroups += numGroupingSets;
 	}
 
 	/*
 	 * Hashing can only appear in the initial phase.
 	 */
 	if (use_hashing)
 	{
 		Plan	   *outerplan = outerPlan(node);
 		uint64		totalGroups = 0;
 		int			i;
 
 		aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
 													   "HashAgg meta context",
 													   ALLOCSET_DEFAULT_SIZES);
 		aggstate->hash_spill_rslot = ExecInitExtraTupleSlot(estate, scanDesc,
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 29bc26669b0..a250a33f8cb 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2465,45 +2465,44 @@ _SPI_execute_plan(SPIPlanPtr plan, const SPIExecuteOptions *options,
 	 * there be only one query.
 	 */
 	if (options->must_return_tuples && plan->plancache_list == NIL)
 		ereport(ERROR,
 				(errcode(ERRCODE_SYNTAX_ERROR),
 				 errmsg("empty query does not return tuples")));
 
 	foreach(lc1, plan->plancache_list)
 	{
 		CachedPlanSource *plansource = (CachedPlanSource *) lfirst(lc1);
 		List	   *stmt_list;
 		ListCell   *lc2;
 
 		spicallbackarg.query = plansource->query_string;
 
 		/*
 		 * If this is a one-shot plan, we still need to do parse analysis.
 		 */
 		if (plan->oneshot)
 		{
 			RawStmt    *parsetree = plansource->raw_parse_tree;
 			const char *src = plansource->query_string;
-			List	   *stmt_list;
 
 			/*
 			 * Parameter datatypes are driven by parserSetup hook if provided,
 			 * otherwise we use the fixed parameter list.
 			 */
 			if (parsetree == NULL)
 				stmt_list = NIL;
 			else if (plan->parserSetup != NULL)
 			{
 				Assert(plan->nargs == 0);
 				stmt_list = pg_analyze_and_rewrite_withcb(parsetree,
 														  src,
 														  plan->parserSetup,
 														  plan->parserSetupArg,
 														  _SPI_current->queryEnv);
 			}
 			else
 			{
 				stmt_list = pg_analyze_and_rewrite_fixedparams(parsetree,
 															   src,
 															   plan->argtypes,
 															   plan->nargs,
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 75acea149c7..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2526,48 +2526,48 @@ cost_append(AppendPath *apath, PlannerInfo *root)
 	apath->path.rows = 0;
 
 	if (apath->subpaths == NIL)
 		return;
 
 	if (!apath->path.parallel_aware)
 	{
 		List	   *pathkeys = apath->path.pathkeys;
 
 		if (pathkeys == NIL)
 		{
 			Path	   *subpath = (Path *) linitial(apath->subpaths);
 
 			/*
 			 * For an unordered, non-parallel-aware Append we take the startup
 			 * cost as the startup cost of the first subpath.
 			 */
 			apath->path.startup_cost = subpath->startup_cost;
 
 			/* Compute rows and costs as sums of subplan rows and costs. */
 			foreach(l, apath->subpaths)
 			{
-				Path	   *subpath = (Path *) lfirst(l);
+				Path	   *sub = (Path *) lfirst(l);
 
-				apath->path.rows += subpath->rows;
-				apath->path.total_cost += subpath->total_cost;
+				apath->path.rows += sub->rows;
+				apath->path.total_cost += sub->total_cost;
 			}
 		}
 		else
 		{
 			/*
 			 * For an ordered, non-parallel-aware Append we take the startup
 			 * cost as the sum of the subpath startup costs.  This ensures
 			 * that we don't underestimate the startup cost when a query's
 			 * LIMIT is such that several of the children have to be run to
 			 * satisfy it.  This might be overkill --- another plausible hack
 			 * would be to take the Append's startup cost as the maximum of
 			 * the child startup costs.  But we don't want to risk believing
 			 * that an ORDER BY LIMIT query can be satisfied at small cost
 			 * when the first child has small startup cost but later ones
 			 * don't.  (If we had the ability to deal with nonlinear cost
 			 * interpolation for partial retrievals, we would not need to be
 			 * so conservative about this.)
 			 *
 			 * This case is also different from the above in that we have to
 			 * account for possibly injecting sorts into subpaths that aren't
 			 * natively ordered.
 			 */
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -286,48 +286,48 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
 		{
 			ListCell   *j;
 
 			/*
 			 * We must be able to extract a CTID condition from every
 			 * sub-clause of an OR, or we can't use it.
 			 */
 			foreach(j, ((BoolExpr *) rinfo->orclause)->args)
 			{
 				Node	   *orarg = (Node *) lfirst(j);
 				List	   *sublist;
 
 				/* OR arguments should be ANDs or sub-RestrictInfos */
 				if (is_andclause(orarg))
 				{
 					List	   *andargs = ((BoolExpr *) orarg)->args;
 
 					/* Recurse in case there are sub-ORs */
 					sublist = TidQualFromRestrictInfoList(root, andargs, rel);
 				}
 				else
 				{
-					RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+					RestrictInfo *list = castNode(RestrictInfo, orarg);
 
-					Assert(!restriction_is_or_clause(rinfo));
-					sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+					Assert(!restriction_is_or_clause(list));
+					sublist = TidQualFromRestrictInfo(root, list, rel);
 				}
 
 				/*
 				 * If nothing found in this arm, we can't do anything with
 				 * this OR clause.
 				 */
 				if (sublist == NIL)
 				{
 					rlst = NIL; /* forget anything we had */
 					break;		/* out of loop over OR args */
 				}
 
 				/*
 				 * OK, continue constructing implicitly-OR'ed result list.
 				 */
 				rlst = list_concat(rlst, sublist);
 			}
 		}
 		else
 		{
 			/* Not an OR clause, so handle base cases */
 			rlst = TidQualFromRestrictInfo(root, rinfo, rel);
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 71052c841d7..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -639,47 +639,47 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 
 	add_path(result_rel, path);
 
 	/*
 	 * Estimate number of groups.  For now we just assume the output is unique
 	 * --- this is certainly true for the UNION case, and we want worst-case
 	 * estimates anyway.
 	 */
 	result_rel->rows = path->rows;
 
 	/*
 	 * Now consider doing the same thing using the partial paths plus Append
 	 * plus Gather.
 	 */
 	if (partial_paths_valid)
 	{
 		Path	   *ppath;
 		int			parallel_workers = 0;
 
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
-			Path	   *path = lfirst(lc);
+			Path	   *partial_path = lfirst(lc);
 
-			parallel_workers = Max(parallel_workers, path->parallel_workers);
+			parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
 		/*
 		 * If the use of parallel append is permitted, always request at least
 		 * log2(# of children) paths.  We assume it can be useful to have
 		 * extra workers in this case because they will be spread out across
 		 * the children.  The precise formula is just a guess; see
 		 * add_paths_to_append_rel.
 		 */
 		if (enable_parallel_append)
 		{
 			parallel_workers = Max(parallel_workers,
 								   pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
 			parallel_workers = Min(parallel_workers,
 								   max_parallel_workers_per_gather);
 		}
 		Assert(parallel_workers > 0);
 
 		ppath = (Path *)
 			create_append_path(root, result_rel, NIL, partial_pathlist,
 							   NIL, NULL,
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -418,93 +418,93 @@ replace_nestloop_param_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv)
  * while planning the subquery.  So we need not modify the subplan or the
  * PlannerParamItems here.  What we do need to do is add entries to
  * root->curOuterParams to signal the parent nestloop plan node that it must
  * provide these values.  This differs from replace_nestloop_param_var in
  * that the PARAM_EXEC slots to use have already been determined.
  *
  * Note that we also use root->curOuterRels as an implicit parameter for
  * sanity checks.
  */
 void
 process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 {
 	ListCell   *lc;
 
 	foreach(lc, subplan_params)
 	{
 		PlannerParamItem *pitem = lfirst_node(PlannerParamItem, lc);
 
 		if (IsA(pitem->item, Var))
 		{
 			Var		   *var = (Var *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_member(var->varno, root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(var, nlp->paramval));
 					/* Present, so nothing to do */
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
 				nlp->paramno = pitem->paramId;
 				nlp->paramval = copyObject(var);
 				root->curOuterParams = lappend(root->curOuterParams, nlp);
 			}
 		}
 		else if (IsA(pitem->item, PlaceHolderVar))
 		{
 			PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
 							   root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(phv, nlp->paramval));
 					/* Present, so nothing to do */
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
 				nlp->paramno = pitem->paramId;
 				nlp->paramval = (Var *) copyObject(phv);
 				root->curOuterParams = lappend(root->curOuterParams, nlp);
 			}
 		}
 		else
 			elog(ERROR, "unexpected type of subquery parameter");
 	}
 }
 
 /*
  * Identify any NestLoopParams that should be supplied by a NestLoop plan
  * node with the specified lefthand rels.  Remove them from the active
  * root->curOuterParams list and return them as the result list.
  */
 List *
 identify_current_nestloop_params(PlannerInfo *root, Relids leftrelids)
 {
 	List	   *result;
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -520,49 +520,49 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
 		 * likely expecting an un-tweaked function call.
 		 *
 		 * Note: the transformation changes a non-schema-qualified unnest()
 		 * function name into schema-qualified pg_catalog.unnest().  This
 		 * choice is also a bit debatable, but it seems reasonable to force
 		 * use of built-in unnest() when we make this transformation.
 		 */
 		if (IsA(fexpr, FuncCall))
 		{
 			FuncCall   *fc = (FuncCall *) fexpr;
 
 			if (list_length(fc->funcname) == 1 &&
 				strcmp(strVal(linitial(fc->funcname)), "unnest") == 0 &&
 				list_length(fc->args) > 1 &&
 				fc->agg_order == NIL &&
 				fc->agg_filter == NULL &&
 				fc->over == NULL &&
 				!fc->agg_star &&
 				!fc->agg_distinct &&
 				!fc->func_variadic &&
 				coldeflist == NIL)
 			{
-				ListCell   *lc;
+				ListCell   *lc2;
 
-				foreach(lc, fc->args)
+				foreach(lc2, fc->args)
 				{
-					Node	   *arg = (Node *) lfirst(lc);
+					Node	   *arg = (Node *) lfirst(lc2);
 					FuncCall   *newfc;
 
 					last_srf = pstate->p_last_srf;
 
 					newfc = makeFuncCall(SystemFuncName("unnest"),
 										 list_make1(arg),
 										 COERCE_EXPLICIT_CALL,
 										 fc->location);
 
 					newfexpr = transformExpr(pstate, (Node *) newfc,
 											 EXPR_KIND_FROM_FUNCTION);
 
 					/* nodeFunctionscan.c requires SRFs to be at top level */
 					if (pstate->p_last_srf != last_srf &&
 						pstate->p_last_srf != newfexpr)
 						ereport(ERROR,
 								(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 								 errmsg("set-returning functions must appear at top level of FROM"),
 								 parser_errposition(pstate,
 													exprLocation(pstate->p_last_srf))));
 
 					funcexprs = lappend(funcexprs, newfexpr);
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 091d6e886b6..2720a2508cb 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -4300,46 +4300,45 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
 	int			i,
 				j;
 	PartitionRangeDatum *ldatum,
 			   *udatum;
 	PartitionKey key = RelationGetPartitionKey(parent);
 	Expr	   *keyCol;
 	Const	   *lower_val,
 			   *upper_val;
 	List	   *lower_or_arms,
 			   *upper_or_arms;
 	int			num_or_arms,
 				current_or_arm;
 	ListCell   *lower_or_start_datum,
 			   *upper_or_start_datum;
 	bool		need_next_lower_arm,
 				need_next_upper_arm;
 
 	if (spec->is_default)
 	{
 		List	   *or_expr_args = NIL;
 		PartitionDesc pdesc = RelationGetPartitionDesc(parent, false);
 		Oid		   *inhoids = pdesc->oids;
-		int			nparts = pdesc->nparts,
-					i;
+		int			nparts = pdesc->nparts;
 
 		for (i = 0; i < nparts; i++)
 		{
 			Oid			inhrelid = inhoids[i];
 			HeapTuple	tuple;
 			Datum		datum;
 			bool		isnull;
 			PartitionBoundSpec *bspec;
 
 			tuple = SearchSysCache1(RELOID, inhrelid);
 			if (!HeapTupleIsValid(tuple))
 				elog(ERROR, "cache lookup failed for relation %u", inhrelid);
 
 			datum = SysCacheGetAttr(RELOID, tuple,
 									Anum_pg_class_relpartbound,
 									&isnull);
 			if (isnull)
 				elog(ERROR, "null relpartbound for relation %u", inhrelid);
 
 			bspec = (PartitionBoundSpec *)
 				stringToNode(TextDatumGetCString(datum));
 			if (!IsA(bspec, PartitionBoundSpec))
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index bf9fe5b7aaf..91b300f4dba 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -2270,49 +2270,48 @@ match_clause_to_partition_key(GeneratePruningStepsContext *context,
 			 */
 			if (arrexpr->multidims)
 				return PARTCLAUSE_UNSUPPORTED;
 
 			/*
 			 * Otherwise, we can just use the list of element values.
 			 */
 			elem_exprs = arrexpr->elements;
 		}
 		else
 		{
 			/* Give up on any other clause types. */
 			return PARTCLAUSE_UNSUPPORTED;
 		}
 
 		/*
 		 * Now generate a list of clauses, one for each array element, of the
 		 * form leftop saop_op elem_expr
 		 */
 		elem_clauses = NIL;
 		foreach(lc1, elem_exprs)
 		{
-			Expr	   *rightop = (Expr *) lfirst(lc1),
-					   *elem_clause;
+			Expr	   *elem_clause;
 
 			elem_clause = make_opclause(saop_op, BOOLOID, false,
-										leftop, rightop,
+										leftop, lfirst(lc1),
 										InvalidOid, saop_coll);
 			elem_clauses = lappend(elem_clauses, elem_clause);
 		}
 
 		/*
 		 * If we have an ANY clause and multiple elements, now turn the list
 		 * of clauses into an OR expression.
 		 */
 		if (saop->useOr && list_length(elem_clauses) > 1)
 			elem_clauses = list_make1(makeBoolExpr(OR_EXPR, elem_clauses, -1));
 
 		/* Finally, generate steps */
 		*clause_steps = gen_partprune_steps_internal(context, elem_clauses);
 		if (context->contradictory)
 			return PARTCLAUSE_MATCH_CONTRADICT;
 		else if (*clause_steps == NIL)
 			return PARTCLAUSE_UNSUPPORTED;	/* step generation failed */
 		return PARTCLAUSE_MATCH_STEPS;
 	}
 	else if (IsA(clause, NullTest))
 	{
 		NullTest   *nulltest = (NullTest *) clause;
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 89cf9f9389c..8ac78a6cf38 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2301,45 +2301,44 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
 						 * previous tuple's toast chunks.
 						 */
 						Assert(change->data.tp.clear_toast_afterwards);
 						ReorderBufferToastReset(rb, txn);
 
 						/* We don't need this record anymore. */
 						ReorderBufferReturnChange(rb, specinsert, true);
 						specinsert = NULL;
 					}
 					break;
 
 				case REORDER_BUFFER_CHANGE_TRUNCATE:
 					{
 						int			i;
 						int			nrelids = change->data.truncate.nrelids;
 						int			nrelations = 0;
 						Relation   *relations;
 
 						relations = palloc0(nrelids * sizeof(Relation));
 						for (i = 0; i < nrelids; i++)
 						{
 							Oid			relid = change->data.truncate.relids[i];
-							Relation	relation;
 
 							relation = RelationIdGetRelation(relid);
 
 							if (!RelationIsValid(relation))
 								elog(ERROR, "could not open relation with OID %u", relid);
 
 							if (!RelationIsLogicallyLogged(relation))
 								continue;
 
 							relations[nrelations++] = relation;
 						}
 
 						/* Apply the truncate. */
 						ReorderBufferApplyTruncate(rb, txn, nrelations,
 												   relations, change,
 												   streaming);
 
 						for (i = 0; i < nrelations; i++)
 							RelationClose(relations[i]);
 
 						break;
 					}
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index bf698c1fc3f..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1673,45 +1673,44 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 		 *
 		 * XXX We have to do this even when there are no expressions in
 		 * clauses, otherwise find_strongest_dependency may fail for stats
 		 * with expressions (due to lookup of negative value in bitmap). So we
 		 * need to at least filter out those dependencies. Maybe we could do
 		 * it in a cheaper way (if there are no expr clauses, we can just
 		 * discard all negative attnums without any lookups).
 		 */
 		if (unique_exprs_cnt > 0 || stat->exprs != NIL)
 		{
 			int			ndeps = 0;
 
 			for (i = 0; i < deps->ndeps; i++)
 			{
 				bool		skip = false;
 				MVDependency *dep = deps->deps[i];
 				int			j;
 
 				for (j = 0; j < dep->nattributes; j++)
 				{
 					int			idx;
 					Node	   *expr;
-					int			k;
 					AttrNumber	unique_attnum = InvalidAttrNumber;
 					AttrNumber	attnum;
 
 					/* undo the per-statistics offset */
 					attnum = dep->attributes[j];
 
 					/*
 					 * For regular attributes we can simply check if it
 					 * matches any clause. If there's no matching clause, we
 					 * can just ignore it. We need to offset the attnum
 					 * though.
 					 */
 					if (AttrNumberIsForUserDefinedAttr(attnum))
 					{
 						dep->attributes[j] = attnum + attnum_offset;
 
 						if (!bms_is_member(dep->attributes[j], clauses_attnums))
 						{
 							skip = true;
 							break;
 						}
 
@@ -1721,53 +1720,53 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 					/*
 					 * the attnum should be a valid system attnum (-1, -2,
 					 * ...)
 					 */
 					Assert(AttributeNumberIsValid(attnum));
 
 					/*
 					 * For expressions, we need to do two translations. First
 					 * we have to translate the negative attnum to index in
 					 * the list of expressions (in the statistics object).
 					 * Then we need to see if there's a matching clause. The
 					 * index of the unique expression determines the attnum
 					 * (and we offset it).
 					 */
 					idx = -(1 + attnum);
 
 					/* Is the expression index is valid? */
 					Assert((idx >= 0) && (idx < list_length(stat->exprs)));
 
 					expr = (Node *) list_nth(stat->exprs, idx);
 
 					/* try to find the expression in the unique list */
-					for (k = 0; k < unique_exprs_cnt; k++)
+					for (int m = 0; m < unique_exprs_cnt; m++)
 					{
 						/*
 						 * found a matching unique expression, use the attnum
 						 * (derived from index of the unique expression)
 						 */
-						if (equal(unique_exprs[k], expr))
+						if (equal(unique_exprs[m], expr))
 						{
-							unique_attnum = -(k + 1) + attnum_offset;
+							unique_attnum = -(m + 1) + attnum_offset;
 							break;
 						}
 					}
 
 					/*
 					 * Found no matching expression, so we can simply skip
 					 * this dependency, because there's no chance it will be
 					 * fully covered.
 					 */
 					if (unique_attnum == InvalidAttrNumber)
 					{
 						skip = true;
 						break;
 					}
 
 					/* otherwise remap it to the new attnum */
 					dep->attributes[j] = unique_attnum;
 				}
 
 				/* if found a matching dependency, keep it */
 				if (!skip)
 				{
diff --git a/src/backend/utils/adt/numutils.c b/src/backend/utils/adt/numutils.c
index cc3f95d3990..834ec0b5882 100644
--- a/src/backend/utils/adt/numutils.c
+++ b/src/backend/utils/adt/numutils.c
@@ -429,48 +429,48 @@ pg_ltoa(int32 value, char *a)
  * same.  Caller must ensure that a points to at least MAXINT8LEN bytes.
  */
 int
 pg_ulltoa_n(uint64 value, char *a)
 {
 	int			olength,
 				i = 0;
 	uint32		value2;
 
 	/* Degenerate case */
 	if (value == 0)
 	{
 		*a = '0';
 		return 1;
 	}
 
 	olength = decimalLength64(value);
 
 	/* Compute the result string. */
 	while (value >= 100000000)
 	{
 		const uint64 q = value / 100000000;
-		uint32		value2 = (uint32) (value - 100000000 * q);
+		uint32		value3 = (uint32) (value - 100000000 * q);
 
-		const uint32 c = value2 % 10000;
-		const uint32 d = value2 / 10000;
+		const uint32 c = value3 % 10000;
+		const uint32 d = value3 / 10000;
 		const uint32 c0 = (c % 100) << 1;
 		const uint32 c1 = (c / 100) << 1;
 		const uint32 d0 = (d % 100) << 1;
 		const uint32 d1 = (d / 100) << 1;
 
 		char	   *pos = a + olength - i;
 
 		value = q;
 
 		memcpy(pos - 2, DIGIT_TABLE + c0, 2);
 		memcpy(pos - 4, DIGIT_TABLE + c1, 2);
 		memcpy(pos - 6, DIGIT_TABLE + d0, 2);
 		memcpy(pos - 8, DIGIT_TABLE + d1, 2);
 		i += 8;
 	}
 
 	/* Switch to 32-bit for speed */
 	value2 = (uint32) value;
 
 	if (value2 >= 10000)
 	{
 		const uint32 c = value2 - 10000 * (value2 / 10000);
diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
index 109dc8023e1..a45c3f9d48a 100644
--- a/src/backend/utils/adt/partitionfuncs.c
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -219,29 +219,29 @@ pg_partition_ancestors(PG_FUNCTION_ARGS)
 
 		funcctx = SRF_FIRSTCALL_INIT();
 
 		if (!check_rel_can_be_partition(relid))
 			SRF_RETURN_DONE(funcctx);
 
 		oldcxt = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
 
 		ancestors = get_partition_ancestors(relid);
 		ancestors = lcons_oid(relid, ancestors);
 
 		/* The only state we need is the ancestors list */
 		funcctx->user_fctx = (void *) ancestors;
 
 		MemoryContextSwitchTo(oldcxt);
 	}
 
 	funcctx = SRF_PERCALL_SETUP();
 	ancestors = (List *) funcctx->user_fctx;
 
 	if (funcctx->call_cntr < list_length(ancestors))
 	{
-		Oid			relid = list_nth_oid(ancestors, funcctx->call_cntr);
+		Oid			nextrel = list_nth_oid(ancestors, funcctx->call_cntr);
 
-		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(nextrel));
 	}
 
 	SRF_RETURN_DONE(funcctx);
 }
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 9959f6910e9..ad2d1a3e4ec 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -8091,47 +8091,45 @@ get_parameter(Param *param, deparse_context *context)
 	if (param->paramkind == PARAM_EXTERN && context->namespaces != NIL)
 	{
 		dpns = llast(context->namespaces);
 		if (dpns->argnames &&
 			param->paramid > 0 &&
 			param->paramid <= dpns->numargs)
 		{
 			char	   *argname = dpns->argnames[param->paramid - 1];
 
 			if (argname)
 			{
 				bool		should_qualify = false;
 				ListCell   *lc;
 
 				/*
 				 * Qualify the parameter name if there are any other deparse
 				 * namespaces with range tables.  This avoids qualifying in
 				 * trivial cases like "RETURN a + b", but makes it safe in all
 				 * other cases.
 				 */
 				foreach(lc, context->namespaces)
 				{
-					deparse_namespace *dpns = lfirst(lc);
-
-					if (dpns->rtable_names != NIL)
+					if (((deparse_namespace *) lfirst(lc))->rtable_names != NIL)
 					{
 						should_qualify = true;
 						break;
 					}
 				}
 				if (should_qualify)
 				{
 					appendStringInfoString(context->buf, quote_identifier(dpns->funcname));
 					appendStringInfoChar(context->buf, '.');
 				}
 
 				appendStringInfoString(context->buf, quote_identifier(argname));
 				return;
 			}
 		}
 	}
 
 	/*
 	 * Not PARAM_EXEC, or couldn't find referent: just print $N.
 	 */
 	appendStringInfo(context->buf, "$%d", param->paramid);
 }
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..8d7b6b58c05 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1628,51 +1628,50 @@ plpgsql_dumptree(PLpgSQL_function *func)
 					{
 						printf("                                  DEFAULT ");
 						dump_expr(var->default_val);
 						printf("\n");
 					}
 					if (var->cursor_explicit_expr != NULL)
 					{
 						if (var->cursor_explicit_argrow >= 0)
 							printf("                                  CURSOR argument row %d\n", var->cursor_explicit_argrow);
 
 						printf("                                  CURSOR IS ");
 						dump_expr(var->cursor_explicit_expr);
 						printf("\n");
 					}
 					if (var->promise != PLPGSQL_PROMISE_NONE)
 						printf("                                  PROMISE %d\n",
 							   (int) var->promise);
 				}
 				break;
 			case PLPGSQL_DTYPE_ROW:
 				{
 					PLpgSQL_row *row = (PLpgSQL_row *) d;
-					int			i;
 
 					printf("ROW %-16s fields", row->refname);
-					for (i = 0; i < row->nfields; i++)
+					for (int j = 0; j < row->nfields; j++)
 					{
-						printf(" %s=var %d", row->fieldnames[i],
-							   row->varnos[i]);
+						printf(" %s=var %d", row->fieldnames[j],
+							   row->varnos[j]);
 					}
 					printf("\n");
 				}
 				break;
 			case PLPGSQL_DTYPE_REC:
 				printf("REC %-16s typoid %u\n",
 					   ((PLpgSQL_rec *) d)->refname,
 					   ((PLpgSQL_rec *) d)->rectypeid);
 				if (((PLpgSQL_rec *) d)->isconst)
 					printf("                                  CONSTANT\n");
 				if (((PLpgSQL_rec *) d)->notnull)
 					printf("                                  NOT NULL\n");
 				if (((PLpgSQL_rec *) d)->default_val != NULL)
 				{
 					printf("                                  DEFAULT ");
 					dump_expr(((PLpgSQL_rec *) d)->default_val);
 					printf("\n");
 				}
 				break;
 			case PLPGSQL_DTYPE_RECFIELD:
 				printf("RECFIELD %-16s of REC %d\n",
 					   ((PLpgSQL_recfield *) d)->fieldname,
#28David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#27)
1 attachment(s)
Re: shadow variables - pg15 edition

On Tue, 30 Aug 2022 at 17:44, Justin Pryzby <pryzby@telsasoft.com> wrote:

Would you check if any of these changes are good enough?

I looked through v5.txt and modified it so that the fixes for the shadow
warnings are more closely aligned with the spreadsheet I created.

I also fixed some additional warnings which leaves just 5 warnings. Namely:

../../../src/include/utils/elog.h:317:29: warning: declaration of
‘_save_exception_stack’ shadows a previous local
../../../src/include/utils/elog.h:318:39: warning: declaration of
‘_save_context_stack’ shadows a previous local
../../../src/include/utils/elog.h:319:28: warning: declaration of
‘_local_sigjmp_buf’ shadows a previous local
../../../src/include/utils/elog.h:320:22: warning: declaration of
‘_do_rethrow’ shadows a previous local
pgbench.c:7509:40: warning: declaration of ‘now’ shadows a previous local

The first 4 of those are due to a nested PG_TRY(). For the final one, I
just ran out of inspiration on what to rename the variable to.

If there are no objections then I'll push this in the next day or 2.

David

Attachments:

rename_shadowed_vars.patchtext/plain; charset=US-ASCII; name=rename_shadowed_vars.patchDownload
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index ed16f93acc..9a0bcf6698 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -3059,16 +3059,16 @@ brin_minmax_multi_summary_out(PG_FUNCTION_ARGS)
 		char	   *a,
 				   *b;
 		text	   *c;
-		StringInfoData str;
+		StringInfoData buf;
 
-		initStringInfo(&str);
+		initStringInfo(&buf);
 
 		a = OutputFunctionCall(&fmgrinfo, ranges_deserialized->values[idx++]);
 		b = OutputFunctionCall(&fmgrinfo, ranges_deserialized->values[idx++]);
 
-		appendStringInfo(&str, "%s ... %s", a, b);
+		appendStringInfo(&buf, "%s ... %s", a, b);
 
-		c = cstring_to_text_with_len(str.data, str.len);
+		c = cstring_to_text_with_len(buf.data, buf.len);
 
 		astate_values = accumArrayResult(astate_values,
 										 PointerGetDatum(c),
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index fc85ba99ac..553500cec0 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -397,7 +397,7 @@ restartScanEntry:
 		{
 			BlockNumber rootPostingTree = GinGetPostingTree(itup);
 			GinBtreeStack *stack;
-			Page		page;
+			Page		entrypage;
 			ItemPointerData minItem;
 
 			/*
@@ -428,13 +428,13 @@ restartScanEntry:
 			 */
 			IncrBufferRefCount(entry->buffer);
 
-			page = BufferGetPage(entry->buffer);
+			entrypage = BufferGetPage(entry->buffer);
 
 			/*
 			 * Load the first page into memory.
 			 */
 			ItemPointerSetMin(&minItem);
-			entry->list = GinDataLeafPageGetItems(page, &entry->nlist, minItem);
+			entry->list = GinDataLeafPageGetItems(entrypage, &entry->nlist, minItem);
 
 			entry->predictNumberResult = stack->predictNumber * entry->nlist;
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 75b214824d..bd4d85041d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6283,14 +6283,14 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 		 */
 		if (ISUPDATE_from_mxstatus(members[i].status))
 		{
-			TransactionId xid = members[i].xid;
+			TransactionId txid = members[i].xid;
 
-			Assert(TransactionIdIsValid(xid));
-			if (TransactionIdPrecedes(xid, relfrozenxid))
+			Assert(TransactionIdIsValid(txid));
+			if (TransactionIdPrecedes(txid, relfrozenxid))
 				ereport(ERROR,
 						(errcode(ERRCODE_DATA_CORRUPTED),
 						 errmsg_internal("found update xid %u from before relfrozenxid %u",
-										 xid, relfrozenxid)));
+										 txid, relfrozenxid)));
 
 			/*
 			 * It's an update; should we keep it?  If the transaction is known
@@ -6304,13 +6304,13 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 			 * because of race conditions explained in detail in
 			 * heapam_visibility.c.
 			 */
-			if (TransactionIdIsCurrentTransactionId(xid) ||
-				TransactionIdIsInProgress(xid))
+			if (TransactionIdIsCurrentTransactionId(txid) ||
+				TransactionIdIsInProgress(txid))
 			{
 				Assert(!TransactionIdIsValid(update_xid));
-				update_xid = xid;
+				update_xid = txid;
 			}
-			else if (TransactionIdDidCommit(xid))
+			else if (TransactionIdDidCommit(txid))
 			{
 				/*
 				 * The transaction committed, so we can tell caller to set
@@ -6319,7 +6319,7 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 				 */
 				Assert(!TransactionIdIsValid(update_xid));
 				update_committed = true;
-				update_xid = xid;
+				update_xid = txid;
 			}
 			else
 			{
diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c
index 3d9088a704..a7dfcfb4da 100644
--- a/src/backend/access/transam/clog.c
+++ b/src/backend/access/transam/clog.c
@@ -516,23 +516,23 @@ TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
 	/* Walk the list and update the status of all XIDs. */
 	while (nextidx != INVALID_PGPROCNO)
 	{
-		PGPROC	   *proc = &ProcGlobal->allProcs[nextidx];
+		PGPROC	   *nextproc = &ProcGlobal->allProcs[nextidx];
 
 		/*
 		 * Transactions with more than THRESHOLD_SUBTRANS_CLOG_OPT sub-XIDs
 		 * should not use group XID status update mechanism.
 		 */
-		Assert(proc->subxidStatus.count <= THRESHOLD_SUBTRANS_CLOG_OPT);
+		Assert(nextproc->subxidStatus.count <= THRESHOLD_SUBTRANS_CLOG_OPT);
 
-		TransactionIdSetPageStatusInternal(proc->clogGroupMemberXid,
-										   proc->subxidStatus.count,
-										   proc->subxids.xids,
-										   proc->clogGroupMemberXidStatus,
-										   proc->clogGroupMemberLsn,
-										   proc->clogGroupMemberPage);
+		TransactionIdSetPageStatusInternal(nextproc->clogGroupMemberXid,
+										   nextproc->subxidStatus.count,
+										   nextproc->subxids.xids,
+										   nextproc->clogGroupMemberXidStatus,
+										   nextproc->clogGroupMemberLsn,
+										   nextproc->clogGroupMemberPage);
 
 		/* Move to next proc in list. */
-		nextidx = pg_atomic_read_u32(&proc->clogGroupNext);
+		nextidx = pg_atomic_read_u32(&nextproc->clogGroupNext);
 	}
 
 	/* We're done with the lock now. */
@@ -545,18 +545,18 @@ TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
 	 */
 	while (wakeidx != INVALID_PGPROCNO)
 	{
-		PGPROC	   *proc = &ProcGlobal->allProcs[wakeidx];
+		PGPROC	   *wakeproc = &ProcGlobal->allProcs[wakeidx];
 
-		wakeidx = pg_atomic_read_u32(&proc->clogGroupNext);
-		pg_atomic_write_u32(&proc->clogGroupNext, INVALID_PGPROCNO);
+		wakeidx = pg_atomic_read_u32(&wakeproc->clogGroupNext);
+		pg_atomic_write_u32(&wakeproc->clogGroupNext, INVALID_PGPROCNO);
 
 		/* ensure all previous writes are visible before follower continues. */
 		pg_write_barrier();
 
-		proc->clogGroupMember = false;
+		wakeproc->clogGroupMember = false;
 
-		if (proc != MyProc)
-			PGSemaphoreUnlock(proc->sem);
+		if (wakeproc != MyProc)
+			PGSemaphoreUnlock(wakeproc->sem);
 	}
 
 	return true;
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index e252ad7421..74fb529380 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -275,12 +275,12 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
 	PG_ENSURE_ERROR_CLEANUP(do_pg_abort_backup, BoolGetDatum(false));
 	{
 		ListCell   *lc;
-		tablespaceinfo *ti;
+		tablespaceinfo *newti;
 
 		/* Add a node for the base directory at the end */
-		ti = palloc0(sizeof(tablespaceinfo));
-		ti->size = -1;
-		state.tablespaces = lappend(state.tablespaces, ti);
+		newti = palloc0(sizeof(tablespaceinfo));
+		newti->size = -1;
+		state.tablespaces = lappend(state.tablespaces, newti);
 
 		/*
 		 * Calculate the total backup size by summing up the size of each
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9a80ccdccd..5b49cc5a09 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
 	 */
 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
 	{
-		Relation	rel;
-		HeapTuple	tuple;
+		Relation	ftrel;
+		HeapTuple	fttuple;
 
-		rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+		ftrel = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-		tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-		if (!HeapTupleIsValid(tuple))
+		fttuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(fttuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-		CatalogTupleDelete(rel, &tuple->t_self);
+		CatalogTupleDelete(ftrel, &fttuple->t_self);
 
-		ReleaseSysCache(tuple);
-		table_close(rel, RowExclusiveLock);
+		ReleaseSysCache(fttuple);
+		table_close(ftrel, RowExclusiveLock);
 	}
 
 	/*
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index a7022824d8..92aed2ff8b 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -1151,10 +1151,8 @@ FuncnameGetCandidates(List *names, int nargs, List *argnames,
 		if (argnumbers)
 		{
 			/* Re-order the argument types into call's logical order */
-			int			i;
-
-			for (i = 0; i < pronargs; i++)
-				newResult->args[i] = proargtypes[argnumbers[i]];
+			for (int j = 0; j < pronargs; j++)
+				newResult->args[j] = proargtypes[argnumbers[j]];
 		}
 		else
 		{
@@ -1163,12 +1161,10 @@ FuncnameGetCandidates(List *names, int nargs, List *argnames,
 		}
 		if (variadic)
 		{
-			int			i;
-
 			newResult->nvargs = effective_nargs - pronargs + 1;
 			/* Expand variadic argument into N copies of element type */
-			for (i = pronargs - 1; i < effective_nargs; i++)
-				newResult->args[i] = va_elem_type;
+			for (int j = pronargs - 1; j < effective_nargs; j++)
+				newResult->args[j] = va_elem_type;
 		}
 		else
 			newResult->nvargs = 0;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8514ebfe91..a8b75eb1be 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -107,7 +107,7 @@ parse_publication_options(ParseState *pstate,
 		{
 			char	   *publish;
 			List	   *publish_list;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			if (*publish_given)
 				errorConflictingDefElem(defel, pstate);
@@ -131,9 +131,9 @@ parse_publication_options(ParseState *pstate,
 								"publish")));
 
 			/* Process the option list. */
-			foreach(lc, publish_list)
+			foreach(lc2, publish_list)
 			{
-				char	   *publish_opt = (char *) lfirst(lc);
+				char	   *publish_opt = (char *) lfirst(lc2);
 
 				if (strcmp(publish_opt, "insert") == 0)
 					pubactions->pubinsert = true;
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 7d8a75d23c..1f774ac065 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 		Oid			constrOid;
 		ObjectAddress address,
 					referenced;
-		ListCell   *cell;
+		ListCell   *lc;
 		Oid			insertTriggerOid,
 					updateTriggerOid;
 
@@ -10276,9 +10276,9 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 		 * don't need to recurse to partitions for this constraint.
 		 */
 		attached = false;
-		foreach(cell, partFKs)
+		foreach(lc, partFKs)
 		{
-			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+			ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
 
 			if (tryAttachPartitionForeignKey(fk,
 											 RelationGetRelid(partRel),
@@ -10877,7 +10877,7 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
 		{
 			Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
 			Form_pg_trigger copy_tg;
-			HeapTuple	copyTuple;
+			HeapTuple	tgCopyTuple;
 
 			/*
 			 * Remember OIDs of other relation(s) involved in FK constraint.
@@ -10901,16 +10901,16 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
 				tgform->tgfoid != F_RI_FKEY_CHECK_UPD)
 				continue;
 
-			copyTuple = heap_copytuple(tgtuple);
-			copy_tg = (Form_pg_trigger) GETSTRUCT(copyTuple);
+			tgCopyTuple = heap_copytuple(tgtuple);
+			copy_tg = (Form_pg_trigger) GETSTRUCT(tgCopyTuple);
 
 			copy_tg->tgdeferrable = cmdcon->deferrable;
 			copy_tg->tginitdeferred = cmdcon->initdeferred;
-			CatalogTupleUpdate(tgrel, &copyTuple->t_self, copyTuple);
+			CatalogTupleUpdate(tgrel, &tgCopyTuple->t_self, tgCopyTuple);
 
 			InvokeObjectPostAlterHook(TriggerRelationId, tgform->oid, 0);
 
-			heap_freetuple(copyTuple);
+			heap_freetuple(tgCopyTuple);
 		}
 
 		systable_endscan(tgscan);
@@ -18083,14 +18083,14 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
 		if (!found)
 		{
 			IndexStmt  *stmt;
-			Oid			constraintOid;
+			Oid			conOid;
 
 			stmt = generateClonedIndexStmt(NULL,
 										   idxRel, attmap,
-										   &constraintOid);
+										   &conOid);
 			DefineIndex(RelationGetRelid(attachrel), stmt, InvalidOid,
 						RelationGetRelid(idxRel),
-						constraintOid,
+						conOid,
 						true, false, false, false, false);
 		}
 
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 0fcf090f22..182e6161e0 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1694,9 +1694,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
 
 			for (int i = 0; i < partdesc->nparts; i++)
 			{
-				Oid			partitionId = partdesc->oids[i];
+				Oid			partoid = partdesc->oids[i];
 
-				renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+				renametrig_partition(tgrel, partoid, tgform->oid, newname,
 									 NameStr(tgform->tgname));
 			}
 		}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index fe74e49814..373bcf6188 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -3483,8 +3483,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 			 */
 			if (aggnode->aggstrategy == AGG_SORTED)
 			{
-				int			i = 0;
-
 				Assert(aggnode->numCols > 0);
 
 				/*
@@ -3495,9 +3493,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 					(ExprState **) palloc0(aggnode->numCols * sizeof(ExprState *));
 
 				/* for each grouping set */
-				for (i = 0; i < phasedata->numsets; i++)
+				for (int k = 0; k < phasedata->numsets; k++)
 				{
-					int			length = phasedata->gset_lengths[i];
+					int			length = phasedata->gset_lengths[k];
 
 					if (phasedata->eqfunctions[length - 1] != NULL)
 						continue;
@@ -3576,7 +3574,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	{
 		Plan	   *outerplan = outerPlan(node);
 		uint64		totalGroups = 0;
-		int			i;
 
 		aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
 													   "HashAgg meta context",
@@ -3599,8 +3596,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		 * when there is more than one grouping set, but should still be
 		 * reasonable.
 		 */
-		for (i = 0; i < aggstate->num_hashes; i++)
-			totalGroups += aggstate->perhash[i].aggnode->numGroups;
+		for (int k = 0; k < aggstate->num_hashes; k++)
+			totalGroups += aggstate->perhash[k].aggnode->numGroups;
 
 		hash_agg_set_limits(aggstate->hashentrysize, totalGroups, 0,
 							&aggstate->hash_mem_limit,
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 29bc26669b..fd5796f1b9 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2484,35 +2484,35 @@ _SPI_execute_plan(SPIPlanPtr plan, const SPIExecuteOptions *options,
 		{
 			RawStmt    *parsetree = plansource->raw_parse_tree;
 			const char *src = plansource->query_string;
-			List	   *stmt_list;
+			List	   *querytree_list;
 
 			/*
 			 * Parameter datatypes are driven by parserSetup hook if provided,
 			 * otherwise we use the fixed parameter list.
 			 */
 			if (parsetree == NULL)
-				stmt_list = NIL;
+				querytree_list = NIL;
 			else if (plan->parserSetup != NULL)
 			{
 				Assert(plan->nargs == 0);
-				stmt_list = pg_analyze_and_rewrite_withcb(parsetree,
-														  src,
-														  plan->parserSetup,
-														  plan->parserSetupArg,
-														  _SPI_current->queryEnv);
+				querytree_list = pg_analyze_and_rewrite_withcb(parsetree,
+															   src,
+															   plan->parserSetup,
+															   plan->parserSetupArg,
+															   _SPI_current->queryEnv);
 			}
 			else
 			{
-				stmt_list = pg_analyze_and_rewrite_fixedparams(parsetree,
-															   src,
-															   plan->argtypes,
-															   plan->nargs,
-															   _SPI_current->queryEnv);
+				querytree_list = pg_analyze_and_rewrite_fixedparams(parsetree,
+																	src,
+																	plan->argtypes,
+																	plan->nargs,
+																	_SPI_current->queryEnv);
 			}
 
 			/* Finish filling in the CachedPlanSource */
 			CompleteCachedPlan(plansource,
-							   stmt_list,
+							   querytree_list,
 							   NULL,
 							   plan->argtypes,
 							   plan->nargs,
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 5ef29eea69..4c6b1d1f55 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2217,13 +2217,13 @@ cost_append(AppendPath *apath)
 
 		if (pathkeys == NIL)
 		{
-			Path	   *subpath = (Path *) linitial(apath->subpaths);
+			Path	   *firstsubpath = (Path *) linitial(apath->subpaths);
 
 			/*
 			 * For an unordered, non-parallel-aware Append we take the startup
 			 * cost as the startup cost of the first subpath.
 			 */
-			apath->path.startup_cost = subpath->startup_cost;
+			apath->path.startup_cost = firstsubpath->startup_cost;
 
 			/* Compute rows and costs as sums of subplan rows and costs. */
 			foreach(l, apath->subpaths)
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 63a8eef45c..c31fcc917d 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -1303,11 +1303,11 @@ generate_bitmap_or_paths(PlannerInfo *root, RelOptInfo *rel,
 			}
 			else
 			{
-				RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+				RestrictInfo *ri = castNode(RestrictInfo, orarg);
 				List	   *orargs;
 
-				Assert(!restriction_is_or_clause(rinfo));
-				orargs = list_make1(rinfo);
+				Assert(!restriction_is_or_clause(ri));
+				orargs = list_make1(ri);
 
 				indlist = build_paths_for_OR(root, rel,
 											 orargs,
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b4..c4e035b049 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -305,10 +305,10 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
 				}
 				else
 				{
-					RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+					RestrictInfo *ri = castNode(RestrictInfo, orarg);
 
-					Assert(!restriction_is_or_clause(rinfo));
-					sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+					Assert(!restriction_is_or_clause(ri));
+					sublist = TidQualFromRestrictInfo(root, ri, rel);
 				}
 
 				/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 9c5836683c..5d0fd6e072 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3449,7 +3449,6 @@ get_number_of_groups(PlannerInfo *root,
 		{
 			/* Add up the estimates for each grouping set */
 			ListCell   *lc;
-			ListCell   *lc2;
 
 			Assert(gd);			/* keep Coverity happy */
 
@@ -3458,17 +3457,18 @@ get_number_of_groups(PlannerInfo *root,
 			foreach(lc, gd->rollups)
 			{
 				RollupData *rollup = lfirst_node(RollupData, lc);
-				ListCell   *lc;
+				ListCell   *lc2;
+				ListCell   *lc3;
 
 				groupExprs = get_sortgrouplist_exprs(rollup->groupClause,
 													 target_list);
 
 				rollup->numGroups = 0.0;
 
-				forboth(lc, rollup->gsets, lc2, rollup->gsets_data)
+				forboth(lc2, rollup->gsets, lc3, rollup->gsets_data)
 				{
-					List	   *gset = (List *) lfirst(lc);
-					GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
+					List	   *gset = (List *) lfirst(lc2);
+					GroupingSetData *gs = lfirst_node(GroupingSetData, lc3);
 					double		numGroups = estimate_num_groups(root,
 																groupExprs,
 																path_rows,
@@ -3484,6 +3484,8 @@ get_number_of_groups(PlannerInfo *root,
 
 			if (gd->hash_sets_idx)
 			{
+				ListCell   *lc2;
+
 				gd->dNumHashGroups = 0;
 
 				groupExprs = get_sortgrouplist_exprs(parse->groupClause,
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 71052c841d..f6046768fb 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -658,9 +658,10 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 		/* Find the highest number of workers requested for any subpath. */
 		foreach(lc, partial_pathlist)
 		{
-			Path	   *path = lfirst(lc);
+			Path	   *subpath = lfirst(lc);
 
-			parallel_workers = Max(parallel_workers, path->parallel_workers);
+			parallel_workers = Max(parallel_workers,
+								   subpath->parallel_workers);
 		}
 		Assert(parallel_workers > 0);
 
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index bf3a7cae60..7fb32a0710 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4463,16 +4463,16 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
 	if (!isNull)
 	{
 		Node	   *n;
-		List	   *querytree_list;
+		List	   *query_list;
 
 		n = stringToNode(TextDatumGetCString(tmp));
 		if (IsA(n, List))
-			querytree_list = linitial_node(List, castNode(List, n));
+			query_list = linitial_node(List, castNode(List, n));
 		else
-			querytree_list = list_make1(n);
-		if (list_length(querytree_list) != 1)
+			query_list = list_make1(n);
+		if (list_length(query_list) != 1)
 			goto fail;
-		querytree = linitial(querytree_list);
+		querytree = linitial(query_list);
 
 		/*
 		 * Because we'll insist below that the querytree have an empty rtable
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf515..933460989b 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -437,16 +437,16 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 		{
 			Var		   *var = (Var *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_member(var->varno, root->curOuterRels))
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(var, nlp->paramval));
@@ -454,7 +454,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
@@ -467,7 +467,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 		{
 			PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
 			NestLoopParam *nlp;
-			ListCell   *lc;
+			ListCell   *lc2;
 
 			/* If not from a nestloop outer rel, complain */
 			if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
@@ -475,9 +475,9 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 				elog(ERROR, "non-LATERAL parameter required by subquery");
 
 			/* Is this param already listed in root->curOuterParams? */
-			foreach(lc, root->curOuterParams)
+			foreach(lc2, root->curOuterParams)
 			{
-				nlp = (NestLoopParam *) lfirst(lc);
+				nlp = (NestLoopParam *) lfirst(lc2);
 				if (nlp->paramno == pitem->paramId)
 				{
 					Assert(equal(phv, nlp->paramval));
@@ -485,7 +485,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 					break;
 				}
 			}
-			if (lc == NULL)
+			if (lc2 == NULL)
 			{
 				/* No, so add it */
 				nlp = makeNode(NestLoopParam);
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 202a38f813..c2b5474f5f 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -539,11 +539,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
 				!fc->func_variadic &&
 				coldeflist == NIL)
 			{
-				ListCell   *lc;
+				ListCell   *lc2;
 
-				foreach(lc, fc->args)
+				foreach(lc2, fc->args)
 				{
-					Node	   *arg = (Node *) lfirst(lc);
+					Node	   *arg = (Node *) lfirst(lc2);
 					FuncCall   *newfc;
 
 					last_srf = pstate->p_last_srf;
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 7f74ed212f..a49e97a225 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -4321,11 +4321,11 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
 		PartitionDesc pdesc = RelationGetPartitionDesc(parent, false);
 		Oid		   *inhoids = pdesc->oids;
 		int			nparts = pdesc->nparts,
-					i;
+					k;
 
-		for (i = 0; i < nparts; i++)
+		for (k = 0; k < nparts; k++)
 		{
-			Oid			inhrelid = inhoids[i];
+			Oid			inhrelid = inhoids[k];
 			HeapTuple	tuple;
 			Datum		datum;
 			bool		isnull;
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index 5ab4612367..6188bf69cb 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -2289,11 +2289,10 @@ match_clause_to_partition_key(GeneratePruningStepsContext *context,
 		elem_clauses = NIL;
 		foreach(lc1, elem_exprs)
 		{
-			Expr	   *rightop = (Expr *) lfirst(lc1),
-					   *elem_clause;
+			Expr	   *elem_clause;
 
 			elem_clause = make_opclause(saop_op, BOOLOID, false,
-										leftop, rightop,
+										leftop, lfirst(lc1),
 										InvalidOid, saop_coll);
 			elem_clauses = lappend(elem_clauses, elem_clause);
 		}
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 03d9c9c86a..6dff9915a5 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2320,17 +2320,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
 						for (i = 0; i < nrelids; i++)
 						{
 							Oid			relid = change->data.truncate.relids[i];
-							Relation	relation;
+							Relation	rel;
 
-							relation = RelationIdGetRelation(relid);
+							rel = RelationIdGetRelation(relid);
 
-							if (!RelationIsValid(relation))
+							if (!RelationIsValid(rel))
 								elog(ERROR, "could not open relation with OID %u", relid);
 
-							if (!RelationIsLogicallyLogged(relation))
+							if (!RelationIsLogicallyLogged(rel))
 								continue;
 
-							relations[nrelations++] = relation;
+							relations[nrelations++] = rel;
 						}
 
 						/* Apply the truncate. */
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3767466ef3..6cbb67c92a 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -180,7 +180,7 @@ WalReceiverMain(void)
 	bool		first_stream;
 	WalRcvData *walrcv = WalRcv;
 	TimestampTz last_recv_timestamp;
-	TimestampTz now;
+	TimestampTz starttime;
 	bool		ping_sent;
 	char	   *err;
 	char	   *sender_host = NULL;
@@ -192,7 +192,7 @@ WalReceiverMain(void)
 	 */
 	Assert(walrcv != NULL);
 
-	now = GetCurrentTimestamp();
+	starttime = GetCurrentTimestamp();
 
 	/*
 	 * Mark walreceiver as running in shared memory.
@@ -248,7 +248,7 @@ WalReceiverMain(void)
 
 	/* Initialise to a sanish value */
 	walrcv->lastMsgSendTime =
-		walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = now;
+		walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = starttime;
 
 	/* Report the latch to use to awaken this process */
 	walrcv->latch = &MyProc->procLatch;
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index bf698c1fc3..744bc512b6 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1692,7 +1692,6 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 				{
 					int			idx;
 					Node	   *expr;
-					int			k;
 					AttrNumber	unique_attnum = InvalidAttrNumber;
 					AttrNumber	attnum;
 
@@ -1740,15 +1739,15 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
 					expr = (Node *) list_nth(stat->exprs, idx);
 
 					/* try to find the expression in the unique list */
-					for (k = 0; k < unique_exprs_cnt; k++)
+					for (int m = 0; m < unique_exprs_cnt; m++)
 					{
 						/*
 						 * found a matching unique expression, use the attnum
 						 * (derived from index of the unique expression)
 						 */
-						if (equal(unique_exprs[k], expr))
+						if (equal(unique_exprs[m], expr))
 						{
-							unique_attnum = -(k + 1) + attnum_offset;
+							unique_attnum = -(m + 1) + attnum_offset;
 							break;
 						}
 					}
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 5f5803f681..3d1049cf75 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -3922,7 +3922,7 @@ GetSingleProcBlockerStatusData(PGPROC *blocked_proc, BlockedProcsData *data)
 	SHM_QUEUE  *procLocks;
 	PROCLOCK   *proclock;
 	PROC_QUEUE *waitQueue;
-	PGPROC	   *proc;
+	PGPROC	   *queued_proc;
 	int			queue_size;
 	int			i;
 
@@ -3989,13 +3989,13 @@ GetSingleProcBlockerStatusData(PGPROC *blocked_proc, BlockedProcsData *data)
 	}
 
 	/* Collect PIDs from the lock's wait queue, stopping at blocked_proc */
-	proc = (PGPROC *) waitQueue->links.next;
+	queued_proc = (PGPROC *) waitQueue->links.next;
 	for (i = 0; i < queue_size; i++)
 	{
-		if (proc == blocked_proc)
+		if (queued_proc == blocked_proc)
 			break;
-		data->waiter_pids[data->npids++] = proc->pid;
-		proc = (PGPROC *) proc->links.next;
+		data->waiter_pids[data->npids++] = queued_proc->pid;
+		queued_proc = (PGPROC *) queued_proc->links.next;
 	}
 
 	bproc->num_locks = data->nlocks - bproc->first_lock;
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 37aaab1338..13fa07b0ff 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -1450,7 +1450,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 			int			usecs;
 			long		msecs;
 			SHM_QUEUE  *procLocks;
-			PROCLOCK   *proclock;
+			PROCLOCK   *curproclock;
 			bool		first_holder = true,
 						first_waiter = true;
 			int			lockHoldersNum = 0;
@@ -1480,44 +1480,45 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 			LWLockAcquire(partitionLock, LW_SHARED);
 
 			procLocks = &(lock->procLocks);
-			proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
-												 offsetof(PROCLOCK, lockLink));
+			curproclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
+													offsetof(PROCLOCK, lockLink));
 
-			while (proclock)
+			while (curproclock)
 			{
 				/*
-				 * we are a waiter if myProc->waitProcLock == proclock; we are
-				 * a holder if it is NULL or something different
+				 * we are a waiter if myProc->waitProcLock == curproclock; we
+				 * are a holder if it is NULL or something different
 				 */
-				if (proclock->tag.myProc->waitProcLock == proclock)
+				if (curproclock->tag.myProc->waitProcLock == curproclock)
 				{
 					if (first_waiter)
 					{
 						appendStringInfo(&lock_waiters_sbuf, "%d",
-										 proclock->tag.myProc->pid);
+										 curproclock->tag.myProc->pid);
 						first_waiter = false;
 					}
 					else
 						appendStringInfo(&lock_waiters_sbuf, ", %d",
-										 proclock->tag.myProc->pid);
+										 curproclock->tag.myProc->pid);
 				}
 				else
 				{
 					if (first_holder)
 					{
 						appendStringInfo(&lock_holders_sbuf, "%d",
-										 proclock->tag.myProc->pid);
+										 curproclock->tag.myProc->pid);
 						first_holder = false;
 					}
 					else
 						appendStringInfo(&lock_holders_sbuf, ", %d",
-										 proclock->tag.myProc->pid);
+										 curproclock->tag.myProc->pid);
 
 					lockHoldersNum++;
 				}
 
-				proclock = (PROCLOCK *) SHMQueueNext(procLocks, &proclock->lockLink,
-													 offsetof(PROCLOCK, lockLink));
+				curproclock = (PROCLOCK *) SHMQueueNext(procLocks,
+														&curproclock->lockLink,
+														offsetof(PROCLOCK, lockLink));
 			}
 
 			LWLockRelease(partitionLock);
diff --git a/src/backend/tsearch/ts_typanalyze.c b/src/backend/tsearch/ts_typanalyze.c
index e2d2ec18c9..187d9f16b1 100644
--- a/src/backend/tsearch/ts_typanalyze.c
+++ b/src/backend/tsearch/ts_typanalyze.c
@@ -405,12 +405,12 @@ compute_tsvector_stats(VacAttrStats *stats,
 			 */
 			for (i = 0; i < num_mcelem; i++)
 			{
-				TrackItem  *item = sort_table[i];
+				TrackItem  *titem = sort_table[i];
 
 				mcelem_values[i] =
-					PointerGetDatum(cstring_to_text_with_len(item->key.lexeme,
-															 item->key.length));
-				mcelem_freqs[i] = (double) item->frequency / (double) nonnull_cnt;
+					PointerGetDatum(cstring_to_text_with_len(titem->key.lexeme,
+															 titem->key.length));
+				mcelem_freqs[i] = (double) titem->frequency / (double) nonnull_cnt;
 			}
 			mcelem_freqs[i++] = (double) minfreq / (double) nonnull_cnt;
 			mcelem_freqs[i] = (double) maxfreq / (double) nonnull_cnt;
diff --git a/src/backend/utils/adt/array_typanalyze.c b/src/backend/utils/adt/array_typanalyze.c
index 2360c680ac..68f845bdee 100644
--- a/src/backend/utils/adt/array_typanalyze.c
+++ b/src/backend/utils/adt/array_typanalyze.c
@@ -541,12 +541,12 @@ compute_array_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
 			 */
 			for (i = 0; i < num_mcelem; i++)
 			{
-				TrackItem  *item = sort_table[i];
+				TrackItem  *titem = sort_table[i];
 
-				mcelem_values[i] = datumCopy(item->key,
+				mcelem_values[i] = datumCopy(titem->key,
 											 extra_data->typbyval,
 											 extra_data->typlen);
-				mcelem_freqs[i] = (double) item->frequency /
+				mcelem_freqs[i] = (double) titem->frequency /
 					(double) nonnull_cnt;
 			}
 			mcelem_freqs[i++] = (double) minfreq / (double) nonnull_cnt;
diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c
index 350039cc86..7848deeea9 100644
--- a/src/backend/utils/adt/datetime.c
+++ b/src/backend/utils/adt/datetime.c
@@ -1019,17 +1019,17 @@ DecodeDateTime(char **field, int *ftype, int nf,
 				if (ptype == DTK_JULIAN)
 				{
 					char	   *cp;
-					int			val;
+					int			jday;
 
 					if (tzp == NULL)
 						return DTERR_BAD_FORMAT;
 
 					errno = 0;
-					val = strtoint(field[i], &cp, 10);
+					jday = strtoint(field[i], &cp, 10);
-					if (errno == ERANGE || val < 0)
+					if (errno == ERANGE || jday < 0)
 						return DTERR_FIELD_OVERFLOW;
 
-					j2date(val, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
+					j2date(jday, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
 					isjulian = true;
 
 					/* Get the time zone from the end of the string */
@@ -1181,10 +1181,10 @@ DecodeDateTime(char **field, int *ftype, int nf,
 				if (ptype != 0)
 				{
 					char	   *cp;
-					int			val;
+					int			value;
 
 					errno = 0;
-					val = strtoint(field[i], &cp, 10);
+					value = strtoint(field[i], &cp, 10);
 					if (errno == ERANGE)
 						return DTERR_FIELD_OVERFLOW;
 
@@ -1209,7 +1209,7 @@ DecodeDateTime(char **field, int *ftype, int nf,
 					switch (ptype)
 					{
 						case DTK_YEAR:
-							tm->tm_year = val;
+							tm->tm_year = value;
 							tmask = DTK_M(YEAR);
 							break;
 
@@ -1222,33 +1222,33 @@ DecodeDateTime(char **field, int *ftype, int nf,
 							if ((fmask & DTK_M(MONTH)) != 0 &&
 								(fmask & DTK_M(HOUR)) != 0)
 							{
-								tm->tm_min = val;
+								tm->tm_min = value;
 								tmask = DTK_M(MINUTE);
 							}
 							else
 							{
-								tm->tm_mon = val;
+								tm->tm_mon = value;
 								tmask = DTK_M(MONTH);
 							}
 							break;
 
 						case DTK_DAY:
-							tm->tm_mday = val;
+							tm->tm_mday = value;
 							tmask = DTK_M(DAY);
 							break;
 
 						case DTK_HOUR:
-							tm->tm_hour = val;
+							tm->tm_hour = value;
 							tmask = DTK_M(HOUR);
 							break;
 
 						case DTK_MINUTE:
-							tm->tm_min = val;
+							tm->tm_min = value;
 							tmask = DTK_M(MINUTE);
 							break;
 
 						case DTK_SECOND:
-							tm->tm_sec = val;
+							tm->tm_sec = value;
 							tmask = DTK_M(SECOND);
 							if (*cp == '.')
 							{
@@ -1268,10 +1268,10 @@ DecodeDateTime(char **field, int *ftype, int nf,
 
 						case DTK_JULIAN:
 							/* previous field was a label for "julian date" */
-							if (val < 0)
+							if (value < 0)
 								return DTERR_FIELD_OVERFLOW;
 							tmask = DTK_DATE_M;
-							j2date(val, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
+							j2date(value, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
 							isjulian = true;
 
 							/* fractional Julian Day? */
@@ -2066,7 +2066,7 @@ DecodeTimeOnly(char **field, int *ftype, int nf,
 				if (ptype != 0)
 				{
 					char	   *cp;
-					int			val;
+					int			value;
 
 					/* Only accept a date under limited circumstances */
 					switch (ptype)
@@ -2082,7 +2082,7 @@ DecodeTimeOnly(char **field, int *ftype, int nf,
 					}
 
 					errno = 0;
-					val = strtoint(field[i], &cp, 10);
+					value = strtoint(field[i], &cp, 10);
 					if (errno == ERANGE)
 						return DTERR_FIELD_OVERFLOW;
 
@@ -2107,7 +2107,7 @@ DecodeTimeOnly(char **field, int *ftype, int nf,
 					switch (ptype)
 					{
 						case DTK_YEAR:
-							tm->tm_year = val;
+							tm->tm_year = value;
 							tmask = DTK_M(YEAR);
 							break;
 
@@ -2120,33 +2120,33 @@ DecodeTimeOnly(char **field, int *ftype, int nf,
 							if ((fmask & DTK_M(MONTH)) != 0 &&
 								(fmask & DTK_M(HOUR)) != 0)
 							{
-								tm->tm_min = val;
+								tm->tm_min = value;
 								tmask = DTK_M(MINUTE);
 							}
 							else
 							{
-								tm->tm_mon = val;
+								tm->tm_mon = value;
 								tmask = DTK_M(MONTH);
 							}
 							break;
 
 						case DTK_DAY:
-							tm->tm_mday = val;
+							tm->tm_mday = value;
 							tmask = DTK_M(DAY);
 							break;
 
 						case DTK_HOUR:
-							tm->tm_hour = val;
+							tm->tm_hour = value;
 							tmask = DTK_M(HOUR);
 							break;
 
 						case DTK_MINUTE:
-							tm->tm_min = val;
+							tm->tm_min = value;
 							tmask = DTK_M(MINUTE);
 							break;
 
 						case DTK_SECOND:
-							tm->tm_sec = val;
+							tm->tm_sec = value;
 							tmask = DTK_M(SECOND);
 							if (*cp == '.')
 							{
@@ -2166,10 +2166,10 @@ DecodeTimeOnly(char **field, int *ftype, int nf,
 
 						case DTK_JULIAN:
 							/* previous field was a label for "julian date" */
-							if (val < 0)
+							if (value < 0)
 								return DTERR_FIELD_OVERFLOW;
 							tmask = DTK_DATE_M;
-							j2date(val, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
+							j2date(value, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
 							isjulian = true;
 
 							if (*cp == '.')
diff --git a/src/backend/utils/adt/numutils.c b/src/backend/utils/adt/numutils.c
index cc3f95d399..834ec0b588 100644
--- a/src/backend/utils/adt/numutils.c
+++ b/src/backend/utils/adt/numutils.c
@@ -448,10 +448,10 @@ pg_ulltoa_n(uint64 value, char *a)
 	while (value >= 100000000)
 	{
 		const uint64 q = value / 100000000;
-		uint32		value2 = (uint32) (value - 100000000 * q);
+		uint32		value3 = (uint32) (value - 100000000 * q);
 
-		const uint32 c = value2 % 10000;
-		const uint32 d = value2 / 10000;
+		const uint32 c = value3 % 10000;
+		const uint32 d = value3 / 10000;
 		const uint32 c0 = (c % 100) << 1;
 		const uint32 c1 = (c / 100) << 1;
 		const uint32 d0 = (d % 100) << 1;
diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
index 109dc8023e..96b5ae52d2 100644
--- a/src/backend/utils/adt/partitionfuncs.c
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -238,9 +238,9 @@ pg_partition_ancestors(PG_FUNCTION_ARGS)
 
 	if (funcctx->call_cntr < list_length(ancestors))
 	{
-		Oid			relid = list_nth_oid(ancestors, funcctx->call_cntr);
+		Oid			resultrel = list_nth_oid(ancestors, funcctx->call_cntr);
 
-		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(resultrel));
 	}
 
 	SRF_RETURN_DONE(funcctx);
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index c418403537..855d48372a 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -8099,9 +8099,9 @@ get_parameter(Param *param, deparse_context *context)
 				 */
 				foreach(lc, context->namespaces)
 				{
-					deparse_namespace *dpns = lfirst(lc);
+					deparse_namespace *depns = lfirst(lc);
 
-					if (dpns->rtable_names != NIL)
+					if (depns->rtable_names != NIL)
 					{
 						should_qualify = true;
 						break;
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index a141146e70..ab613dd49e 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3552,27 +3552,27 @@ do_connect(enum trivalue reuse_previous_specification,
 			param_is_newly_set(PQhost(o_conn), PQhost(pset.db)) ||
 			param_is_newly_set(PQport(o_conn), PQport(pset.db)))
 		{
-			char	   *host = PQhost(pset.db);
+			char	   *connhost = PQhost(pset.db);
 			char	   *hostaddr = PQhostaddr(pset.db);
 
-			if (is_unixsock_path(host))
+			if (is_unixsock_path(connhost))
 			{
-				/* hostaddr overrides host */
+				/* hostaddr overrides connhost */
 				if (hostaddr && *hostaddr)
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on address \"%s\" at port \"%s\".\n"),
 						   PQdb(pset.db), PQuser(pset.db), hostaddr, PQport(pset.db));
 				else
 					printf(_("You are now connected to database \"%s\" as user \"%s\" via socket in \"%s\" at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), connhost, PQport(pset.db));
 			}
 			else
 			{
-				if (hostaddr && *hostaddr && strcmp(host, hostaddr) != 0)
+				if (hostaddr && *hostaddr && strcmp(connhost, hostaddr) != 0)
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" (address \"%s\") at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, hostaddr, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), connhost, hostaddr, PQport(pset.db));
 				else
 					printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" at port \"%s\".\n"),
-						   PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+						   PQdb(pset.db), PQuser(pset.db), connhost, PQport(pset.db));
 			}
 		}
 		else
diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h
index 4a3d0ec2c5..329687c1a5 100644
--- a/src/include/lib/simplehash.h
+++ b/src/include/lib/simplehash.h
@@ -546,13 +546,13 @@ SH_GROW(SH_TYPE * tb, uint64 newsize)
 		if (oldentry->status == SH_STATUS_IN_USE)
 		{
 			uint32		hash;
-			uint32		startelem;
+			uint32		startelem2;
 			uint32		curelem;
 			SH_ELEMENT_TYPE *newentry;
 
 			hash = SH_ENTRY_HASH(tb, oldentry);
-			startelem = SH_INITIAL_BUCKET(tb, hash);
-			curelem = startelem;
+			startelem2 = SH_INITIAL_BUCKET(tb, hash);
+			curelem = startelem2;
 
 			/* find empty element to put data into */
 			while (true)
@@ -564,7 +564,7 @@ SH_GROW(SH_TYPE * tb, uint64 newsize)
 					break;
 				}
 
-				curelem = SH_NEXT(tb, curelem, startelem);
+				curelem = SH_NEXT(tb, curelem, startelem2);
 			}
 
 			/* copy entry to new slot */
diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c
index bd94bd4e6c..641851983d 100644
--- a/src/interfaces/ecpg/ecpglib/execute.c
+++ b/src/interfaces/ecpg/ecpglib/execute.c
@@ -367,10 +367,10 @@ ecpg_store_result(const PGresult *results, int act_field,
 						/* check strlen for each tuple */
 						for (act_tuple = 0; act_tuple < ntuples; act_tuple++)
 						{
-							int			len = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
+							int			slen = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
 
-							if (len > var->varcharsize)
-								var->varcharsize = len;
+							if (slen > var->varcharsize)
+								var->varcharsize = slen;
 						}
 						var->offset *= var->varcharsize;
 						len = var->offset * ntuples;
diff --git a/src/interfaces/ecpg/ecpglib/misc.c b/src/interfaces/ecpg/ecpglib/misc.c
index 1eef1ec044..7f75e18733 100644
--- a/src/interfaces/ecpg/ecpglib/misc.c
+++ b/src/interfaces/ecpg/ecpglib/misc.c
@@ -558,7 +558,7 @@ ECPGset_var(int number, void *pointer, int lineno)
 	ptr = (struct var_list *) calloc(1L, sizeof(struct var_list));
 	if (!ptr)
 	{
-		struct sqlca_t *sqlca = ECPGget_sqlca();
+		sqlca = ECPGget_sqlca();
 
 		if (sqlca == NULL)
 		{
diff --git a/src/interfaces/ecpg/pgtypeslib/dt_common.c b/src/interfaces/ecpg/pgtypeslib/dt_common.c
index e0fae3d5f1..99bdc94d6d 100644
--- a/src/interfaces/ecpg/pgtypeslib/dt_common.c
+++ b/src/interfaces/ecpg/pgtypeslib/dt_common.c
@@ -1820,16 +1820,16 @@ DecodeDateTime(char **field, int *ftype, int nf,
 				if (ptype == DTK_JULIAN)
 				{
 					char	   *cp;
-					int			val;
+					int			jday;
 
 					if (tzp == NULL)
 						return -1;
 
-					val = strtoint(field[i], &cp, 10);
+					jday = strtoint(field[i], &cp, 10);
 					if (*cp != '-')
 						return -1;
 
-					j2date(val, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
+					j2date(jday, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
 					/* Get the time zone from the end of the string */
 					if (DecodeTimezone(cp, tzp) != 0)
 						return -1;
@@ -1958,9 +1958,9 @@ DecodeDateTime(char **field, int *ftype, int nf,
 				if (ptype != 0)
 				{
 					char	   *cp;
-					int			val;
+					int			value;
 
-					val = strtoint(field[i], &cp, 10);
+					value = strtoint(field[i], &cp, 10);
 
 					/*
 					 * only a few kinds are allowed to have an embedded
@@ -1983,7 +1983,7 @@ DecodeDateTime(char **field, int *ftype, int nf,
 					switch (ptype)
 					{
 						case DTK_YEAR:
-							tm->tm_year = val;
+							tm->tm_year = value;
 							tmask = DTK_M(YEAR);
 							break;
 
@@ -1996,33 +1996,33 @@ DecodeDateTime(char **field, int *ftype, int nf,
 							if ((fmask & DTK_M(MONTH)) != 0 &&
 								(fmask & DTK_M(HOUR)) != 0)
 							{
-								tm->tm_min = val;
+								tm->tm_min = value;
 								tmask = DTK_M(MINUTE);
 							}
 							else
 							{
-								tm->tm_mon = val;
+								tm->tm_mon = value;
 								tmask = DTK_M(MONTH);
 							}
 							break;
 
 						case DTK_DAY:
-							tm->tm_mday = val;
+							tm->tm_mday = value;
 							tmask = DTK_M(DAY);
 							break;
 
 						case DTK_HOUR:
-							tm->tm_hour = val;
+							tm->tm_hour = value;
 							tmask = DTK_M(HOUR);
 							break;
 
 						case DTK_MINUTE:
-							tm->tm_min = val;
+							tm->tm_min = value;
 							tmask = DTK_M(MINUTE);
 							break;
 
 						case DTK_SECOND:
-							tm->tm_sec = val;
+							tm->tm_sec = value;
 							tmask = DTK_M(SECOND);
 							if (*cp == '.')
 							{
@@ -2046,7 +2046,7 @@ DecodeDateTime(char **field, int *ftype, int nf,
 							 * previous field was a label for "julian date"?
 							 ***/
 							tmask = DTK_DATE_M;
-							j2date(val, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
+							j2date(value, &tm->tm_year, &tm->tm_mon, &tm->tm_mday);
 							/* fractional Julian Day? */
 							if (*cp == '.')
 							{
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06b..8d7b6b58c0 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1647,13 +1647,12 @@ plpgsql_dumptree(PLpgSQL_function *func)
 			case PLPGSQL_DTYPE_ROW:
 				{
 					PLpgSQL_row *row = (PLpgSQL_row *) d;
-					int			i;
 
 					printf("ROW %-16s fields", row->refname);
-					for (i = 0; i < row->nfields; i++)
+					for (int j = 0; j < row->nfields; j++)
 					{
-						printf(" %s=var %d", row->fieldnames[i],
-							   row->varnos[i]);
+						printf(" %s=var %d", row->fieldnames[j],
+							   row->varnos[j]);
 					}
 					printf("\n");
 				}
#29 Justin Pryzby
pryzby@telsasoft.com
In reply to: David Rowley (#28)
Re: shadow variables - pg15 edition

On Tue, Oct 04, 2022 at 02:27:09PM +1300, David Rowley wrote:

On Tue, 30 Aug 2022 at 17:44, Justin Pryzby <pryzby@telsasoft.com> wrote:

Would you check if any of these changes are good enough ?

I looked through v5.txt and modified it so that the fixes for the shadow
warnings are more closely aligned with the spreadsheet I created.

Thanks

diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c
index 350039cc86..7848deeea9 100644
--- a/src/backend/utils/adt/datetime.c
+++ b/src/backend/utils/adt/datetime.c
@@ -1019,17 +1019,17 @@ DecodeDateTime(char **field, int *ftype, int nf,
if (ptype == DTK_JULIAN)
{
char	   *cp;
-					int			val;
+					int			jday;

if (tzp == NULL)
return DTERR_BAD_FORMAT;

errno = 0;
-					val = strtoint(field[i], &cp, 10);
+					jday = strtoint(field[i], &cp, 10);
if (errno == ERANGE || val < 0)
return DTERR_FIELD_OVERFLOW;

Here, you forgot to change "val < 0".

I tried to see how to make that fail (differently) but can't yet see how to
pass a negative julian date.

--
Justin

#30 David Rowley
dgrowleyml@gmail.com
In reply to: Justin Pryzby (#29)
Re: shadow variables - pg15 edition

On Tue, 4 Oct 2022 at 15:30, Justin Pryzby <pryzby@telsasoft.com> wrote:

Here, you forgot to change "val < 0".

Thanks. I made another review pass of each change to ensure I didn't
miss any others. There were no other issues, so I pushed the adjusted
patch.

5 warnings remain. 4 of these are for PG_TRY() and co.

David

#31 David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#30)
1 attachment(s)
Re: shadow variables - pg15 edition

On Wed, 5 Oct 2022 at 21:05, David Rowley <dgrowleyml@gmail.com> wrote:

5 warnings remain. 4 of these are for PG_TRY() and co.

I've attached a draft patch for a method I was considering to fix the
warnings we're getting from the nested PG_TRY() statement in
utility.c.

The C preprocessor does not allow name overloading in macros, but of
course, it does allow variable argument macros with ... so I just
used that and added ##__VA_ARGS__ to each variable. I think that
should work ok providing callers only supply 0 or 1 arguments to the
macro, and of course, make that parameter value the same for each set
of macros used in the PG_TRY() statement.

The good thing about the optional argument is that we don't need to
touch any existing users of PG_TRY(). The attached just modifies the
innermost PG_TRY() in the only nested PG_TRY() we have in the tree in
utility.c.

The only warning remaining after applying the attached is the "now"
warning in pgbench.c:7509. I'd considered changing this to "thenow"
which translates to "right now" in the part of Scotland that I'm from.
I also considered "nownow", which is used in South Africa [1].
Anyway, I'm not really being serious, but I didn't come up with
anything better than "now2". It's just I didn't like that as it sort
of implies there are multiple definitions of "now" and I struggle with
that... maybe I'm just thinking too much in terms of Newtonian
Relativity...

David

[1]: https://www.goodthingsguy.com/fun/now-now-just-now/

Attachments:

add_parameter_to_pg_try.patch (text/plain; charset=US-ASCII)
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index aa00815787..247d0816ad 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1678,16 +1678,16 @@ ProcessUtilitySlow(ParseState *pstate,
 				 * command itself is queued, which is enough.
 				 */
 				EventTriggerInhibitCommandCollection();
-				PG_TRY();
+				PG_TRY(2);
 				{
 					address = ExecRefreshMatView((RefreshMatViewStmt *) parsetree,
 												 queryString, params, qc);
 				}
-				PG_FINALLY();
+				PG_FINALLY(2);
 				{
 					EventTriggerUndoInhibitCommandCollection();
 				}
-				PG_END_TRY();
+				PG_END_TRY(2);
 				break;
 
 			case T_CreateTrigStmt:
diff --git a/src/include/utils/elog.h b/src/include/utils/elog.h
index 4dd9658a3c..c79d6cfba3 100644
--- a/src/include/utils/elog.h
+++ b/src/include/utils/elog.h
@@ -310,39 +310,48 @@ extern PGDLLIMPORT ErrorContextCallback *error_context_stack;
  * pedantry; we have seen bugs from compilers improperly optimizing code
  * away when such a variable was not marked.  Beware that gcc's -Wclobbered
  * warnings are just about entirely useless for catching such oversights.
+ *
+ * Each of these macros accepts an optional argument which can be specified
+ * to apply a suffix to the variables declared within the macros.  This suffix
+ * can be used to avoid the compiler emitting warnings about shadowed
+ * variables when compiling with -Wshadow in situations where nested PG_TRY()
+ * statements are required.  The optional suffix may contain any character
+ * that's allowed in a variable name.  The suffix, if specified must be the
+ * same within each set of PG_TRY() / PG_CATCH() / PG_FINALLY() / PG_END_TRY()
+ * statements.
  *----------
  */
-#define PG_TRY()  \
+#define PG_TRY(...)  \
 	do { \
-		sigjmp_buf *_save_exception_stack = PG_exception_stack; \
-		ErrorContextCallback *_save_context_stack = error_context_stack; \
-		sigjmp_buf _local_sigjmp_buf; \
-		bool _do_rethrow = false; \
-		if (sigsetjmp(_local_sigjmp_buf, 0) == 0) \
+		sigjmp_buf *_save_exception_stack##__VA_ARGS__ = PG_exception_stack; \
+		ErrorContextCallback *_save_context_stack##__VA_ARGS__ = error_context_stack; \
+		sigjmp_buf _local_sigjmp_buf##__VA_ARGS__; \
+		bool _do_rethrow##__VA_ARGS__ = false; \
+		if (sigsetjmp(_local_sigjmp_buf##__VA_ARGS__, 0) == 0) \
 		{ \
-			PG_exception_stack = &_local_sigjmp_buf
+			PG_exception_stack = &_local_sigjmp_buf##__VA_ARGS__
 
-#define PG_CATCH()	\
+#define PG_CATCH(...)	\
 		} \
 		else \
 		{ \
-			PG_exception_stack = _save_exception_stack; \
-			error_context_stack = _save_context_stack
+			PG_exception_stack = _save_exception_stack##__VA_ARGS__; \
+			error_context_stack = _save_context_stack##__VA_ARGS__
 
-#define PG_FINALLY() \
+#define PG_FINALLY(...) \
 		} \
 		else \
-			_do_rethrow = true; \
+			_do_rethrow##__VA_ARGS__ = true; \
 		{ \
-			PG_exception_stack = _save_exception_stack; \
-			error_context_stack = _save_context_stack
+			PG_exception_stack = _save_exception_stack##__VA_ARGS__; \
+			error_context_stack = _save_context_stack##__VA_ARGS__
 
-#define PG_END_TRY()  \
+#define PG_END_TRY(...)  \
 		} \
-		if (_do_rethrow) \
+		if (_do_rethrow##__VA_ARGS__) \
 				PG_RE_THROW(); \
-		PG_exception_stack = _save_exception_stack; \
-		error_context_stack = _save_context_stack; \
+		PG_exception_stack = _save_exception_stack##__VA_ARGS__; \
+		error_context_stack = _save_context_stack##__VA_ARGS__; \
 	} while (0)
 
 /*
#32 Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: David Rowley (#31)
Re: shadow variables - pg15 edition

On 2022-Oct-05, David Rowley wrote:

The only warning remaining after applying the attached is the "now"
warning in pgbench.c:7509. I'd considered changing this to "thenow"
which translates to "right now" in the part of Scotland that I'm from.
I also considered "nownow", which is used in South Africa [1].
Anyway, I'm not really being serious, but I didn't come up with
anything better than "now2". It's just I didn't like that as it sort
of implies there are multiple definitions of "now" and I struggle with
that... maybe I'm just thinking too much in terms of Newtonian
Relativity...

:-D

A simpler idea might be to just remove the inner declaration, and have
that block set the outer var. There's no damage, since the block is
going to end and not access the previous value anymore.

diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index aa1a3541fe..91a067859b 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -7506,7 +7506,7 @@ threadRun(void *arg)
 		/* progress report is made by thread 0 for all threads */
 		if (progress && thread->tid == 0)
 		{
-			pg_time_usec_t now = pg_time_now();
+			now = pg_time_now();	/* not lazy; clobbers outer value */

if (now >= next_report)
{

The "now now" reference reminded me of "ahorita"
https://doorwaytomexico.com/paulina/ahorita-meaning-examples/
which is a source of misunderstandings across borders in South America ...

--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"The important things in the world are problems with society that we don't
understand at all. The machines will become more complicated but they won't
be more complicated than the societies that run them." (Freeman Dyson)

#33 Tom Lane
tgl@sss.pgh.pa.us
In reply to: David Rowley (#31)
Re: shadow variables - pg15 edition

David Rowley <dgrowleyml@gmail.com> writes:

I've attached a draft patch for a method I was considering to fix the
warnings we're getting from the nested PG_TRY() statement in
utility.c.

+1

The only warning remaining after applying the attached is the "now"
warning in pgbench.c:7509. I'd considered changing this to "thenow"
which translates to "right now" in the part of Scotland that I'm from.
I also considered "nownow", which is used in South Africa [1].
Anyway, I'm not really being serious, but I didn't come up with
anything better than "now2".

Yeah, "now2" seems as reasonable as anything.

regards, tom lane

#34 David Rowley
dgrowleyml@gmail.com
In reply to: Tom Lane (#33)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 03:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:

David Rowley <dgrowleyml@gmail.com> writes:

I've attached a draft patch for a method I was considering to fix the
warnings we're getting from the nested PG_TRY() statement in
utility.c.

+1

Pushed.

The only warning remaining after applying the attached is the "now"
warning in pgbench.c:7509. I'd considered changing this to "thenow"
which translates to "right now" in the part of Scotland that I'm from.
I also considered "nownow", which is used in South Africa [1].
Anyway, I'm not really being serious, but I didn't come up with
anything better than "now2".

Yeah, "now2" seems as reasonable as anything.

Also pushed. (Thanks for saving me on that one.)

David

#35 Andres Freund
andres@anarazel.de
In reply to: David Rowley (#34)
Re: shadow variables - pg15 edition

Hi,

On 2022-10-06 10:21:41 +1300, David Rowley wrote:

Also pushed. (Thanks for saving me on that one.)

Your commit message said the last shadowed variable. But building with
-Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at
the end). Looks like it "only" fixed it for src/, without optional
dependencies like gssapi and python.

I think we should add -Wshadow=compatible-local to our set of warning flags
after fixing those.

[237/1827 42 12%] Compiling C object src/interfaces/libpq/libpq.a.p/fe-secure-gssapi.c.o
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c: In function ‘pg_GSS_write’:
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c:138:41: warning: declaration of ‘ret’ shadows a previous local [-Wshadow=compatible-local]
138 | ssize_t ret;
| ^~~
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c:92:25: note: shadowed declaration is here
92 | ssize_t ret = -1;
| ^~~

[1283/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_cursorobject.c.o
In file included from ../../../../home/andres/src/postgresql/src/include/postgres.h:48,
from ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_cursorobject.c:7:
../../../../home/andres/src/postgresql/src/pl/plpython/plpy_cursorobject.c: In function ‘PLy_cursor_plan’:
../../../../home/andres/src/postgresql/src/include/utils/elog.h:325:29: warning: declaration of ‘_save_exception_stack’ shadows a previous local [-Wshadow=compatible-local]
325 | sigjmp_buf *_save_exception_stack##__VA_ARGS__ = PG_exception_stack; \
| ^~~~~~~~~~~~~~~~~~~~~
...

[1289/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_exec.c.o
../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c: In function ‘PLy_exec_trigger’:
../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:378:46: warning: declaration of ‘tdata’ shadows a previous local [-Wshadow=compatible-local]
378 | TriggerData *tdata = (TriggerData *) fcinfo->context;
| ^~~~~
../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:310:22: note: shadowed declaration is here
310 | TriggerData *tdata;
| ^~~~~

[1291/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_spi.c.o
In file included from ../../../../home/andres/src/postgresql/src/include/postgres.h:48,
from ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_spi.c:7:
../../../../home/andres/src/postgresql/src/pl/plpython/plpy_spi.c: In function ‘PLy_spi_execute_plan’:
../../../../home/andres/src/postgresql/src/include/utils/elog.h:325:29: warning: declaration of ‘_save_exception_stack’ shadows a previous local [-Wshadow=compatible-local]
325 | sigjmp_buf *_save_exception_stack##__VA_ARGS__ = PG_exception_stack; \
| ^~~~~~~~~~~~~~~~~~~~~

[1344/1827 42 73%] Compiling C object contrib/bloom/bloom.so.p/blinsert.c.o
../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c: In function ‘blinsert’:
../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c:235:33: warning: declaration of ‘page’ shadows a previous local [-Wshadow=compatible-local]
235 | Page page;
| ^~~~
../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c:210:25: note: shadowed declaration is here
210 | Page page,
| ^~~~

[1415/1827 42 77%] Compiling C object contrib/file_fdw/file_fdw.so.p/file_fdw.c.o
../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c: In function ‘get_file_fdw_attribute_options’:
../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c:453:29: warning: declaration of ‘options’ shadows a previous local [-Wshadow=compatible-local]
453 | List *options;
| ^~~~~~~
../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c:443:21: note: shadowed declaration is here
443 | List *options = NIL;
| ^~~~~~~

[1441/1827 42 78%] Compiling C object contrib/hstore/hstore.so.p/hstore_io.c.o
In file included from ../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:12:
../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c: In function ‘hstorePairs’:
../../../../home/andres/src/postgresql/contrib/hstore/hstore.h:131:21: warning: declaration of ‘buflen’ shadows a parameter [-Wshadow=compatible-local]
131 | int buflen = (ptr_) - (buf_); \
| ^~~~~~
../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:411:9: note: in expansion of macro ‘HS_FINALIZE’
411 | HS_FINALIZE(out, pcount, buf, ptr);
| ^~~~~~~~~~~
../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:388:47: note: shadowed declaration is here
388 | hstorePairs(Pairs *pairs, int32 pcount, int32 buflen)
| ~~~~~~^~~~~~

[1564/1827 42 85%] Compiling C object contrib/postgres_fdw/postgres_fdw.so.p/deparse.c.o
../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c: In function ‘foreign_expr_walker’:
../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c:946:53: warning: declaration of ‘lc’ shadows a previous local [-Wshadow=compatible-local]
946 | ListCell *lc;
| ^~
../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c:904:45: note: shadowed declaration is here
904 | ListCell *lc;
| ^~

[1575/1827 38 86%] Compiling C object src/test/modules/test_integerset/test_integerset.so.p/test_integerset.c.o
../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c: In function ‘test_huge_distances’:
../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c:588:33: warning: declaration of ‘x’ shadows a previous local [-Wshadow=compatible-local]
588 | uint64 x = values[i];
| ^
../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c:526:25: note: shadowed declaration is here
526 | uint64 x;
| ^

[1633/1827 3 89%] Compiling C object contrib/postgres_fdw/postgres_fdw.so.p/postgres_fdw.c.o
../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c: In function ‘postgresGetForeignPlan’:
../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c:1344:37: warning: declaration of ‘lc’ shadows a previous local [-Wshadow=compatible-local]
1344 | ListCell *lc;
| ^~
../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c:1238:21: note: shadowed declaration is here
1238 | ListCell *lc;
| ^~

Greetings,

Andres Freund

#36 David Rowley
dgrowleyml@gmail.com
In reply to: Alvaro Herrera (#32)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 02:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:

A simpler idea might be to just remove the inner declaration, and have
that block set the outer var. There's no damage, since the block is
going to end and not access the previous value anymore.

diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index aa1a3541fe..91a067859b 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -7506,7 +7506,7 @@ threadRun(void *arg)
/* progress report is made by thread 0 for all threads */
if (progress && thread->tid == 0)
{
-                       pg_time_usec_t now = pg_time_now();
+                       now = pg_time_now();    /* not lazy; clobbers outer value */

I didn't want to do it that way because all this code is in a while
loop and the outer "now" will be reused after it's set by the code
above. It's not really immediately obvious to me what repercussions
that would have, but it didn't seem worth taking any risks.

David

#37 David Rowley
dgrowleyml@gmail.com
In reply to: Andres Freund (#35)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 10:40, Andres Freund <andres@anarazel.de> wrote:

Your commit message said the last shadowed variable. But building with
-Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at
the end). Looks like it "only" fixed it for src/, without optional
dependencies like gssapi and python.

Well, that's embarrassing. You're right. I only fixed the ones I saw
from running make in the base directory of the tree. I'll set about
fixing these nownow.

David

#38 David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#37)
1 attachment(s)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 11:50, David Rowley <dgrowleyml@gmail.com> wrote:

On Thu, 6 Oct 2022 at 10:40, Andres Freund <andres@anarazel.de> wrote:

Your commit message said the last shadowed variable. But building with
-Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at
the end). Looks like it "only" fixed it for src/, without optional
dependencies like gssapi and python.

Well, that's embarrassing. You're right. I only fixed the ones I saw
from running make in the base directory of the tree. I'll set about
fixing these nownow.

Here's a patch which (I think) fixes the ones I missed.

David

Attachments:

final_shadow_cleanup.patch (text/plain; charset=US-ASCII)
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index e64291e049..dd26d6ac29 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -232,8 +232,6 @@ blinsert(Relation index, Datum *values, bool *isnull,
 
 	if (metaData->nEnd > metaData->nStart)
 	{
-		Page		page;
-
 		blkno = metaData->notFullPage[metaData->nStart];
 		Assert(blkno != InvalidBlockNumber);
 
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index de0b9a109c..67821cd25b 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -450,15 +450,15 @@ get_file_fdw_attribute_options(Oid relid)
 	for (attnum = 1; attnum <= natts; attnum++)
 	{
 		Form_pg_attribute attr = TupleDescAttr(tupleDesc, attnum - 1);
-		List	   *options;
+		List	   *column_options;
 		ListCell   *lc;
 
 		/* Skip dropped attributes. */
 		if (attr->attisdropped)
 			continue;
 
-		options = GetForeignColumnOptions(relid, attnum);
-		foreach(lc, options)
+		column_options = GetForeignColumnOptions(relid, attnum);
+		foreach(lc, column_options)
 		{
 			DefElem    *def = (DefElem *) lfirst(lc);
 
@@ -480,7 +480,7 @@ get_file_fdw_attribute_options(Oid relid)
 					fncolumns = lappend(fncolumns, makeString(attname));
 				}
 			}
-			/* maybe in future handle other options here */
+			/* maybe in future handle other column options here */
 		}
 	}
 
diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h
index 4713e6ea7a..897af244a4 100644
--- a/contrib/hstore/hstore.h
+++ b/contrib/hstore/hstore.h
@@ -128,15 +128,15 @@ typedef struct
 /* finalize a newly-constructed hstore */
 #define HS_FINALIZE(hsp_,count_,buf_,ptr_)							\
 	do {															\
-		int buflen = (ptr_) - (buf_);								\
+		int _buflen = (ptr_) - (buf_);								\
 		if ((count_))												\
 			ARRPTR(hsp_)[0].entry |= HENTRY_ISFIRST;				\
 		if ((count_) != HS_COUNT((hsp_)))							\
 		{															\
 			HS_SETCOUNT((hsp_),(count_));							\
-			memmove(STRPTR(hsp_), (buf_), buflen);					\
+			memmove(STRPTR(hsp_), (buf_), _buflen);					\
 		}															\
-		SET_VARSIZE((hsp_), CALCDATASIZE((count_), buflen));		\
+		SET_VARSIZE((hsp_), CALCDATASIZE((count_), _buflen));		\
 	} while (0)
 
 /* ensure the varlena size of an existing hstore is correct */
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 09f37fb77a..9524765650 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -943,8 +943,6 @@ foreign_expr_walker(Node *node,
 				 */
 				if (agg->aggorder)
 				{
-					ListCell   *lc;
-
 					foreach(lc, agg->aggorder)
 					{
 						SortGroupClause *srt = (SortGroupClause *) lfirst(lc);
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index dd858aba03..8d013f5b1a 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1341,8 +1341,6 @@ postgresGetForeignPlan(PlannerInfo *root,
 		 */
 		if (outer_plan)
 		{
-			ListCell   *lc;
-
 			/*
 			 * Right now, we only consider grouping and aggregation beyond
 			 * joins. Queries involving aggregates or grouping do not require
@@ -6272,10 +6270,10 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 				 */
 				foreach(l, aggvars)
 				{
-					Expr	   *expr = (Expr *) lfirst(l);
+					Expr	   *aggref = (Expr *) lfirst(l);
 
-					if (IsA(expr, Aggref))
-						tlist = add_to_flat_tlist(tlist, list_make1(expr));
+					if (IsA(aggref, Aggref))
+						tlist = add_to_flat_tlist(tlist, list_make1(aggref));
 				}
 			}
 		}
@@ -6289,8 +6287,6 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 	 */
 	if (havingQual)
 	{
-		ListCell   *lc;
-
 		foreach(lc, (List *) havingQual)
 		{
 			Expr	   *expr = (Expr *) lfirst(lc);
@@ -6324,7 +6320,6 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 	if (fpinfo->local_conds)
 	{
 		List	   *aggvars = NIL;
-		ListCell   *lc;
 
 		foreach(lc, fpinfo->local_conds)
 		{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 6ea52ed866..dee0982eba 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
 		 */
 		if (PqGSSSendLength)
 		{
-			ssize_t		ret;
+			ssize_t		retval;
 			ssize_t		amount = PqGSSSendLength - PqGSSSendNext;
 
-			ret = pqsecure_raw_write(conn, PqGSSSendBuffer + PqGSSSendNext, amount);
-			if (ret <= 0)
+			retval = pqsecure_raw_write(conn, PqGSSSendBuffer + PqGSSSendNext, amount);
+			if (retval <= 0)
 			{
 				/*
 				 * Report any previously-sent data; if there was none, reflect
@@ -149,16 +149,16 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
 				 */
 				if (bytes_sent)
 					return bytes_sent;
-				return ret;
+				return retval;
 			}
 
 			/*
 			 * Check if this was a partial write, and if so, move forward that
 			 * far in our buffer and try again.
 			 */
-			if (ret != amount)
+			if (retval != amount)
 			{
-				PqGSSSendNext += ret;
+				PqGSSSendNext += retval;
 				continue;
 			}
 
diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c
index 6b6e743345..57e8f8ec21 100644
--- a/src/pl/plpython/plpy_cursorobject.c
+++ b/src/pl/plpython/plpy_cursorobject.c
@@ -215,18 +215,18 @@ PLy_cursor_plan(PyObject *ob, PyObject *args)
 			PyObject   *elem;
 
 			elem = PySequence_GetItem(args, j);
-			PG_TRY();
+			PG_TRY(2);
 			{
 				bool		isnull;
 
 				plan->values[j] = PLy_output_convert(arg, elem, &isnull);
 				nulls[j] = isnull ? 'n' : ' ';
 			}
-			PG_FINALLY();
+			PG_FINALLY(2);
 			{
 				Py_DECREF(elem);
 			}
-			PG_END_TRY();
+			PG_END_TRY(2);
 		}
 
 		portal = SPI_cursor_open(NULL, plan->plan, plan->values, nulls,
diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c
index 150b3a5977..74d5b70583 100644
--- a/src/pl/plpython/plpy_exec.c
+++ b/src/pl/plpython/plpy_exec.c
@@ -375,11 +375,11 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)
 				rv = NULL;
 			else if (pg_strcasecmp(srv, "MODIFY") == 0)
 			{
-				TriggerData *tdata = (TriggerData *) fcinfo->context;
+				TriggerData *trigdata = (TriggerData *) fcinfo->context;
 
-				if (TRIGGER_FIRED_BY_INSERT(tdata->tg_event) ||
-					TRIGGER_FIRED_BY_UPDATE(tdata->tg_event))
-					rv = PLy_modify_tuple(proc, plargs, tdata, rv);
+				if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
+					TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
+					rv = PLy_modify_tuple(proc, plargs, trigdata, rv);
 				else
 					ereport(WARNING,
 							(errmsg("PL/Python trigger function returned \"MODIFY\" in a DELETE trigger -- ignored")));
diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c
index 9a71a42c15..6b9f8d5b43 100644
--- a/src/pl/plpython/plpy_spi.c
+++ b/src/pl/plpython/plpy_spi.c
@@ -236,18 +236,18 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit)
 			PyObject   *elem;
 
 			elem = PySequence_GetItem(list, j);
-			PG_TRY();
+			PG_TRY(2);
 			{
 				bool		isnull;
 
 				plan->values[j] = PLy_output_convert(arg, elem, &isnull);
 				nulls[j] = isnull ? 'n' : ' ';
 			}
-			PG_FINALLY();
+			PG_FINALLY(2);
 			{
 				Py_DECREF(elem);
 			}
-			PG_END_TRY();
+			PG_END_TRY(2);
 		}
 
 		rv = SPI_execute_plan(plan->plan, plan->values, nulls,
diff --git a/src/test/modules/test_integerset/test_integerset.c b/src/test/modules/test_integerset/test_integerset.c
index 578d2e8aec..813ca4ba6b 100644
--- a/src/test/modules/test_integerset/test_integerset.c
+++ b/src/test/modules/test_integerset/test_integerset.c
@@ -585,26 +585,26 @@ test_huge_distances(void)
 	 */
 	for (int i = 0; i < num_values; i++)
 	{
-		uint64		x = values[i];
+		uint64		y = values[i];
 		bool		expected;
 		bool		result;
 
-		if (x > 0)
+		if (y > 0)
 		{
-			expected = (values[i - 1] == x - 1);
-			result = intset_is_member(intset, x - 1);
+			expected = (values[i - 1] == y - 1);
+			result = intset_is_member(intset, y - 1);
 			if (result != expected)
-				elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x - 1);
+				elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y - 1);
 		}
 
-		result = intset_is_member(intset, x);
+		result = intset_is_member(intset, y);
 		if (result != true)
-			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x);
+			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y);
 
-		expected = (i != num_values - 1) ? (values[i + 1] == x + 1) : false;
-		result = intset_is_member(intset, x + 1);
+		expected = (i != num_values - 1) ? (values[i + 1] == y + 1) : false;
+		result = intset_is_member(intset, y + 1);
 		if (result != expected)
-			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x + 1);
+			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y + 1);
 	}
 
 	/*
#39Andres Freund
andres@anarazel.de
In reply to: David Rowley (#38)
1 attachment(s)
Re: shadow variables - pg15 edition

Hi,

On 2022-10-06 13:00:41 +1300, David Rowley wrote:

Here's a patch which (I think) fixes the ones I missed.

Yep, does the trick for me.

I attached a patch to add -Wshadow=compatible-local to our set of warnings.

diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h
index 4713e6ea7a..897af244a4 100644
--- a/contrib/hstore/hstore.h
+++ b/contrib/hstore/hstore.h
@@ -128,15 +128,15 @@ typedef struct
/* finalize a newly-constructed hstore */
#define HS_FINALIZE(hsp_,count_,buf_,ptr_)							\
do {															\
-		int buflen = (ptr_) - (buf_);								\
+		int _buflen = (ptr_) - (buf_);								\

Not pretty. Given that HS_FINALIZE already has multiple-eval hazards, perhaps
we could just remove the local?

--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
*/
if (PqGSSSendLength)
{
-			ssize_t		ret;
+			ssize_t		retval;

That looks like it could easily lead to confusion further down the
line. Wouldn't the better fix here be to remove the inner variable?

--- a/src/pl/plpython/plpy_exec.c
+++ b/src/pl/plpython/plpy_exec.c
@@ -375,11 +375,11 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)
rv = NULL;
else if (pg_strcasecmp(srv, "MODIFY") == 0)
{
-				TriggerData *tdata = (TriggerData *) fcinfo->context;
+				TriggerData *trigdata = (TriggerData *) fcinfo->context;
-				if (TRIGGER_FIRED_BY_INSERT(tdata->tg_event) ||
-					TRIGGER_FIRED_BY_UPDATE(tdata->tg_event))
-					rv = PLy_modify_tuple(proc, plargs, tdata, rv);
+				if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
+					TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
+					rv = PLy_modify_tuple(proc, plargs, trigdata, rv);
else
ereport(WARNING,
(errmsg("PL/Python trigger function returned \"MODIFY\" in a DELETE trigger -- ignored")));

This doesn't strike me as a good fix either. Isn't the inner tdata exactly
the same as the outer tdata?

tdata = (TriggerData *) fcinfo->context;
...
TriggerData *trigdata = (TriggerData *) fcinfo->context;

--- a/src/test/modules/test_integerset/test_integerset.c
+++ b/src/test/modules/test_integerset/test_integerset.c
@@ -585,26 +585,26 @@ test_huge_distances(void)

This is one of the cases where our insistence on -Wdeclaration-after-statement
really makes this unnecessarily ugly... Declaring x at the start of the function
just makes this harder to read.

Anyway, this isn't important code, and your fix seem ok.

Greetings,

Andres Freund

Attachments:

add-wshadow-compatible-local.diff (text/x-diff; charset=us-ascii)
diff --git i/configure w/configure
index 1caca21b625..a5a03f6cec3 100755
--- i/configure
+++ w/configure
@@ -5852,6 +5852,97 @@ if test x"$pgac_cv_prog_CXX_cxxflags__Wcast_function_type" = x"yes"; then
 fi
 
 
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wshadow=compatible-local, for CFLAGS" >&5
+$as_echo_n "checking whether ${CC} supports -Wshadow=compatible-local, for CFLAGS... " >&6; }
+if ${pgac_cv_prog_CC_cflags__Wshadow_compatible_local+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  pgac_save_CFLAGS=$CFLAGS
+pgac_save_CC=$CC
+CC=${CC}
+CFLAGS="${CFLAGS} -Wshadow=compatible-local"
+ac_save_c_werror_flag=$ac_c_werror_flag
+ac_c_werror_flag=yes
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_prog_CC_cflags__Wshadow_compatible_local=yes
+else
+  pgac_cv_prog_CC_cflags__Wshadow_compatible_local=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_c_werror_flag=$ac_save_c_werror_flag
+CFLAGS="$pgac_save_CFLAGS"
+CC="$pgac_save_CC"
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CC_cflags__Wshadow_compatible_local" >&5
+$as_echo "$pgac_cv_prog_CC_cflags__Wshadow_compatible_local" >&6; }
+if test x"$pgac_cv_prog_CC_cflags__Wshadow_compatible_local" = x"yes"; then
+  CFLAGS="${CFLAGS} -Wshadow=compatible-local"
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CXX} supports -Wshadow=compatible-local, for CXXFLAGS" >&5
+$as_echo_n "checking whether ${CXX} supports -Wshadow=compatible-local, for CXXFLAGS... " >&6; }
+if ${pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  pgac_save_CXXFLAGS=$CXXFLAGS
+pgac_save_CXX=$CXX
+CXX=${CXX}
+CXXFLAGS="${CXXFLAGS} -Wshadow=compatible-local"
+ac_save_cxx_werror_flag=$ac_cxx_werror_flag
+ac_cxx_werror_flag=yes
+ac_ext=cpp
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_cxx_try_compile "$LINENO"; then :
+  pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local=yes
+else
+  pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ac_cxx_werror_flag=$ac_save_cxx_werror_flag
+CXXFLAGS="$pgac_save_CXXFLAGS"
+CXX="$pgac_save_CXX"
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local" >&5
+$as_echo "$pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local" >&6; }
+if test x"$pgac_cv_prog_CXX_cxxflags__Wshadow_compatible_local" = x"yes"; then
+  CXXFLAGS="${CXXFLAGS} -Wshadow=compatible-local"
+fi
+
+
   # This was included in -Wall/-Wformat in older GCC versions
 
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wformat-security, for CFLAGS" >&5
diff --git i/configure.ac w/configure.ac
index 10fa55dd154..c696566a7ff 100644
--- i/configure.ac
+++ w/configure.ac
@@ -508,6 +508,8 @@ if test "$GCC" = yes -a "$ICC" = no; then
   PGAC_PROG_CXX_CFLAGS_OPT([-Wimplicit-fallthrough=3])
   PGAC_PROG_CC_CFLAGS_OPT([-Wcast-function-type])
   PGAC_PROG_CXX_CFLAGS_OPT([-Wcast-function-type])
+  PGAC_PROG_CC_CFLAGS_OPT([-Wshadow=compatible-local])
+  PGAC_PROG_CXX_CFLAGS_OPT([-Wshadow=compatible-local])
   # This was included in -Wall/-Wformat in older GCC versions
   PGAC_PROG_CC_CFLAGS_OPT([-Wformat-security])
   PGAC_PROG_CXX_CFLAGS_OPT([-Wformat-security])
diff --git i/meson.build w/meson.build
index 25a6fa941cc..ec6c45d39b9 100644
--- i/meson.build
+++ w/meson.build
@@ -1708,6 +1708,7 @@ common_warning_flags = [
   '-Wmissing-format-attribute',
   '-Wimplicit-fallthrough=3',
   '-Wcast-function-type',
+  '-Wshadow=compatible-local',
   # This was included in -Wall/-Wformat in older GCC versions
   '-Wformat-security',
 ]
#40David Rowley
dgrowleyml@gmail.com
In reply to: Andres Freund (#39)
1 attachment(s)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 13:39, Andres Freund <andres@anarazel.de> wrote:

I attached a patch to add -Wshadow=compatible-local to our set of warnings.

Thanks for writing that and for looking at the patch.

FWIW, I'm +1 for having this as part of our default compilation flags. I
don't want to have to revisit this on a yearly basis, and I imagine Justin
doesn't want to do that either. Since this work has already uncovered 2
existing bugs, I feel it's worth having this as a default compilation flag.

Additionally, in cases like the PLy_exec_trigger() trigger case below, I
feel this has resulted in slightly simpler code that's easier to follow.

Having to be slightly more inventive with variable names in a small number
of cases seems worth the trouble. The cases where this could get annoying
are probably limited to variables declared in macros. Maybe that's just a
reason to consider static inline functions instead. That wouldn't work for
macros such as PG_TRY(), but I think macros in that category are rare.

Switching it on does not mean we can never switch it off again should we
ever find something we're unable to work around. That just seems a little
unlikely given that with the prior commits plus the attached patch, we've
managed to fix ~30 years worth of opportunity to introduce shadowed local
variables.

diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h
#define HS_FINALIZE(hsp_,count_,buf_,ptr_)                                                   \
do {                                                                                                                    \
-             int buflen = (ptr_) - (buf_);                                                           \
+             int _buflen = (ptr_) - (buf_);                                                          \

Not pretty. Given that HS_FINALIZE already has multiple-eval hazards, perhaps
we could just remove the local?

You're right. It's not that pretty, but I don't feel like making the
hazards any worse is a good idea. This is old code. I'd rather change
it as little as possible to minimise the risk of introducing any bugs.
I'm open to other names for the variable, but I just don't want to
widen the scope for multiple evaluation hazards.

--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
-                     ssize_t         ret;
+                     ssize_t         retval;

That looks like it could easily lead to confusion further down the
line. Wouldn't the better fix here be to remove the inner variable?

Hmm, you may be able to see something I can't there, but to me, it
looks like reusing the outer variable could change the behaviour of
the function. Note at the end of the function we set "ret" just
before the goto label. It looks like it might be possible for the
goto to jump to the point after "ret = bytes_sent;", in which case we
should return -1, the default value for the outer "ret". If I go and
reuse the outer "ret" for something else then it'll return whatever
value it's left set to. I could study the code more and perhaps work
out that that cannot happen, but if it can't then it's really not
obvious to me and if it's not obvious then I just don't feel the need
to take any undue risks by reusing the outer variable. I'm open to
better names, but I'd just rather not reuse the outer scoped variable.

--- a/src/pl/plpython/plpy_exec.c
+++ b/src/pl/plpython/plpy_exec.c
@@ -375,11 +375,11 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)
-                             TriggerData *tdata = (TriggerData *) fcinfo->context;
+                             TriggerData *trigdata = (TriggerData *) fcinfo->context;

This doesn't strike me as a good fix either. Isn't the inner tdata exactly
the same as the outer tdata?

Yeah, you're right. I've adjusted the patch to use the outer scoped
variable and get rid of the inner scoped one.

--- a/src/test/modules/test_integerset/test_integerset.c
+++ b/src/test/modules/test_integerset/test_integerset.c
@@ -585,26 +585,26 @@ test_huge_distances(void)

This is one of the cases where our insistence on -Wdeclaration-after-statement
really makes this unnecessarily ugly... Declaring x at the start of the function
just makes this harder to read.

Yeah, it's not pretty. Maybe one day we'll relax that rule. Until
then, I think it's not worth expending too much thought on a test
module.

David

Attachments:

final_shadow_cleanup_v2.patch (text/plain; charset=US-ASCII)
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index e64291e049..dd26d6ac29 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -232,8 +232,6 @@ blinsert(Relation index, Datum *values, bool *isnull,
 
 	if (metaData->nEnd > metaData->nStart)
 	{
-		Page		page;
-
 		blkno = metaData->notFullPage[metaData->nStart];
 		Assert(blkno != InvalidBlockNumber);
 
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index de0b9a109c..67821cd25b 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -450,15 +450,15 @@ get_file_fdw_attribute_options(Oid relid)
 	for (attnum = 1; attnum <= natts; attnum++)
 	{
 		Form_pg_attribute attr = TupleDescAttr(tupleDesc, attnum - 1);
-		List	   *options;
+		List	   *column_options;
 		ListCell   *lc;
 
 		/* Skip dropped attributes. */
 		if (attr->attisdropped)
 			continue;
 
-		options = GetForeignColumnOptions(relid, attnum);
-		foreach(lc, options)
+		column_options = GetForeignColumnOptions(relid, attnum);
+		foreach(lc, column_options)
 		{
 			DefElem    *def = (DefElem *) lfirst(lc);
 
@@ -480,7 +480,7 @@ get_file_fdw_attribute_options(Oid relid)
 					fncolumns = lappend(fncolumns, makeString(attname));
 				}
 			}
-			/* maybe in future handle other options here */
+			/* maybe in future handle other column options here */
 		}
 	}
 
diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h
index 4713e6ea7a..897af244a4 100644
--- a/contrib/hstore/hstore.h
+++ b/contrib/hstore/hstore.h
@@ -128,15 +128,15 @@ typedef struct
 /* finalize a newly-constructed hstore */
 #define HS_FINALIZE(hsp_,count_,buf_,ptr_)							\
 	do {															\
-		int buflen = (ptr_) - (buf_);								\
+		int _buflen = (ptr_) - (buf_);								\
 		if ((count_))												\
 			ARRPTR(hsp_)[0].entry |= HENTRY_ISFIRST;				\
 		if ((count_) != HS_COUNT((hsp_)))							\
 		{															\
 			HS_SETCOUNT((hsp_),(count_));							\
-			memmove(STRPTR(hsp_), (buf_), buflen);					\
+			memmove(STRPTR(hsp_), (buf_), _buflen);					\
 		}															\
-		SET_VARSIZE((hsp_), CALCDATASIZE((count_), buflen));		\
+		SET_VARSIZE((hsp_), CALCDATASIZE((count_), _buflen));		\
 	} while (0)
 
 /* ensure the varlena size of an existing hstore is correct */
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 09f37fb77a..9524765650 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -943,8 +943,6 @@ foreign_expr_walker(Node *node,
 				 */
 				if (agg->aggorder)
 				{
-					ListCell   *lc;
-
 					foreach(lc, agg->aggorder)
 					{
 						SortGroupClause *srt = (SortGroupClause *) lfirst(lc);
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index dd858aba03..8d013f5b1a 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1341,8 +1341,6 @@ postgresGetForeignPlan(PlannerInfo *root,
 		 */
 		if (outer_plan)
 		{
-			ListCell   *lc;
-
 			/*
 			 * Right now, we only consider grouping and aggregation beyond
 			 * joins. Queries involving aggregates or grouping do not require
@@ -6272,10 +6270,10 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 				 */
 				foreach(l, aggvars)
 				{
-					Expr	   *expr = (Expr *) lfirst(l);
+					Expr	   *aggref = (Expr *) lfirst(l);
 
-					if (IsA(expr, Aggref))
-						tlist = add_to_flat_tlist(tlist, list_make1(expr));
+					if (IsA(aggref, Aggref))
+						tlist = add_to_flat_tlist(tlist, list_make1(aggref));
 				}
 			}
 		}
@@ -6289,8 +6287,6 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 	 */
 	if (havingQual)
 	{
-		ListCell   *lc;
-
 		foreach(lc, (List *) havingQual)
 		{
 			Expr	   *expr = (Expr *) lfirst(lc);
@@ -6324,7 +6320,6 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel,
 	if (fpinfo->local_conds)
 	{
 		List	   *aggvars = NIL;
-		ListCell   *lc;
 
 		foreach(lc, fpinfo->local_conds)
 		{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 6ea52ed866..dee0982eba 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
 		 */
 		if (PqGSSSendLength)
 		{
-			ssize_t		ret;
+			ssize_t		retval;
 			ssize_t		amount = PqGSSSendLength - PqGSSSendNext;
 
-			ret = pqsecure_raw_write(conn, PqGSSSendBuffer + PqGSSSendNext, amount);
-			if (ret <= 0)
+			retval = pqsecure_raw_write(conn, PqGSSSendBuffer + PqGSSSendNext, amount);
+			if (retval <= 0)
 			{
 				/*
 				 * Report any previously-sent data; if there was none, reflect
@@ -149,16 +149,16 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
 				 */
 				if (bytes_sent)
 					return bytes_sent;
-				return ret;
+				return retval;
 			}
 
 			/*
 			 * Check if this was a partial write, and if so, move forward that
 			 * far in our buffer and try again.
 			 */
-			if (ret != amount)
+			if (retval != amount)
 			{
-				PqGSSSendNext += ret;
+				PqGSSSendNext += retval;
 				continue;
 			}
 
diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c
index 6b6e743345..57e8f8ec21 100644
--- a/src/pl/plpython/plpy_cursorobject.c
+++ b/src/pl/plpython/plpy_cursorobject.c
@@ -215,18 +215,18 @@ PLy_cursor_plan(PyObject *ob, PyObject *args)
 			PyObject   *elem;
 
 			elem = PySequence_GetItem(args, j);
-			PG_TRY();
+			PG_TRY(2);
 			{
 				bool		isnull;
 
 				plan->values[j] = PLy_output_convert(arg, elem, &isnull);
 				nulls[j] = isnull ? 'n' : ' ';
 			}
-			PG_FINALLY();
+			PG_FINALLY(2);
 			{
 				Py_DECREF(elem);
 			}
-			PG_END_TRY();
+			PG_END_TRY(2);
 		}
 
 		portal = SPI_cursor_open(NULL, plan->plan, plan->values, nulls,
diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c
index 150b3a5977..923703535a 100644
--- a/src/pl/plpython/plpy_exec.c
+++ b/src/pl/plpython/plpy_exec.c
@@ -375,8 +375,6 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)
 				rv = NULL;
 			else if (pg_strcasecmp(srv, "MODIFY") == 0)
 			{
-				TriggerData *tdata = (TriggerData *) fcinfo->context;
-
 				if (TRIGGER_FIRED_BY_INSERT(tdata->tg_event) ||
 					TRIGGER_FIRED_BY_UPDATE(tdata->tg_event))
 					rv = PLy_modify_tuple(proc, plargs, tdata, rv);
diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c
index 9a71a42c15..6b9f8d5b43 100644
--- a/src/pl/plpython/plpy_spi.c
+++ b/src/pl/plpython/plpy_spi.c
@@ -236,18 +236,18 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit)
 			PyObject   *elem;
 
 			elem = PySequence_GetItem(list, j);
-			PG_TRY();
+			PG_TRY(2);
 			{
 				bool		isnull;
 
 				plan->values[j] = PLy_output_convert(arg, elem, &isnull);
 				nulls[j] = isnull ? 'n' : ' ';
 			}
-			PG_FINALLY();
+			PG_FINALLY(2);
 			{
 				Py_DECREF(elem);
 			}
-			PG_END_TRY();
+			PG_END_TRY(2);
 		}
 
 		rv = SPI_execute_plan(plan->plan, plan->values, nulls,
diff --git a/src/test/modules/test_integerset/test_integerset.c b/src/test/modules/test_integerset/test_integerset.c
index 578d2e8aec..813ca4ba6b 100644
--- a/src/test/modules/test_integerset/test_integerset.c
+++ b/src/test/modules/test_integerset/test_integerset.c
@@ -585,26 +585,26 @@ test_huge_distances(void)
 	 */
 	for (int i = 0; i < num_values; i++)
 	{
-		uint64		x = values[i];
+		uint64		y = values[i];
 		bool		expected;
 		bool		result;
 
-		if (x > 0)
+		if (y > 0)
 		{
-			expected = (values[i - 1] == x - 1);
-			result = intset_is_member(intset, x - 1);
+			expected = (values[i - 1] == y - 1);
+			result = intset_is_member(intset, y - 1);
 			if (result != expected)
-				elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x - 1);
+				elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y - 1);
 		}
 
-		result = intset_is_member(intset, x);
+		result = intset_is_member(intset, y);
 		if (result != true)
-			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x);
+			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y);
 
-		expected = (i != num_values - 1) ? (values[i + 1] == x + 1) : false;
-		result = intset_is_member(intset, x + 1);
+		expected = (i != num_values - 1) ? (values[i + 1] == y + 1) : false;
+		result = intset_is_member(intset, y + 1);
 		if (result != expected)
-			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, x + 1);
+			elog(ERROR, "intset_is_member failed for " UINT64_FORMAT, y + 1);
 	}
 
 	/*
#41Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: David Rowley (#36)
Re: shadow variables - pg15 edition

On 2022-Oct-06, David Rowley wrote:

On Thu, 6 Oct 2022 at 02:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:

A simpler idea might be to just remove the inner declaration, and have
that block set the outer var. There's no damage, since the block is
going to end and not access the previous value anymore.

diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index aa1a3541fe..91a067859b 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -7506,7 +7506,7 @@ threadRun(void *arg)
/* progress report is made by thread 0 for all threads */
if (progress && thread->tid == 0)
{
-                       pg_time_usec_t now = pg_time_now();
+                       now = pg_time_now();    /* not lazy; clobbers outer value */

I didn't want to do it that way because all this code is in a while
loop and the outer "now" will be reused after it's set by the code
above. It's not really immediately obvious to me what repercussions
that would have, but it didn't seem worth taking any risks.

No, it's re-initialized to zero every time through the loop, so setting
it to something else at the bottom doesn't have any further effect.

If it were *not* reinitialized every time through the loop, then what
would happen is that every iteration in the loop (and each operation
within) would see exactly the same value of "now", because it's only set
"lazily" (meaning, if already set, don't change it.)

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/

#42David Rowley
dgrowleyml@gmail.com
In reply to: Alvaro Herrera (#41)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 20:32, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:

On 2022-Oct-06, David Rowley wrote:

I didn't want to do it that way because all this code is in a while
loop and the outer "now" will be reused after it's set by the code
above. It's not really immediately obvious to me what repercussions
that would have, but it didn't seem worth taking any risks.

No, it's re-initialized to zero every time through the loop, so setting
it to something else at the bottom doesn't have any further effect.

Oh yeah, you're right.

If it were *not* reinitialized every time through the loop, then what
would happen is that every iteration in the loop (and each operation
within) would see exactly the same value of "now", because it's only set
"lazily" (meaning, if already set, don't change it.)

On my misread, that's what I was afraid of changing, but now seeing
that now = 0 at the start of each loop, I understand that
pg_time_now_lazy will get an up-to-date time on each loop.

I'm happy if you want to change it to use the outer scoped variable
instead of the now2 one.

David

#43David Rowley
dgrowleyml@gmail.com
In reply to: Andres Freund (#39)
Re: shadow variables - pg15 edition

On Thu, 6 Oct 2022 at 13:39, Andres Freund <andres@anarazel.de> wrote:

I attached a patch to add -Wshadow=compatible-local to our set of warnings.

Since I just committed the patch to fix the final warnings, I think we
should go ahead and commit the patch you wrote to add
-Wshadow=compatible-local to the standard build flags. I don't mind
doing this.

Does anyone think we shouldn't do it? Please let it be known soon.

David

#44David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#43)
Re: shadow variables - pg15 edition

On Fri, 7 Oct 2022 at 13:24, David Rowley <dgrowleyml@gmail.com> wrote:

Since I just committed the patch to fix the final warnings, I think we
should go ahead and commit the patch you wrote to add
-Wshadow=compatible-local to the standard build flags. I don't mind
doing this.

Pushed.

David

#45Tom Lane
tgl@sss.pgh.pa.us
In reply to: David Rowley (#44)
Re: shadow variables - pg15 edition

David Rowley <dgrowleyml@gmail.com> writes:

On Fri, 7 Oct 2022 at 13:24, David Rowley <dgrowleyml@gmail.com> wrote:

Since I just committed the patch to fix the final warnings, I think we
should go ahead and commit the patch you wrote to add
-Wshadow=compatible-local to the standard build flags. I don't mind
doing this.

Pushed.

The buildfarm's showing a few instances of this warning, which seem
to indicate that not all versions of the Perl headers are clean:

fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]
snakefly | 2022-10-10 08:21:05 | Util.c:457:14: warning: declaration of 'cv' shadows a parameter [-Wshadow=compatible-local]

Before you ask:

fairywren: perl 5.24.3
snakefly: perl 5.16.3

which are a little old, but not *that* old.

Scraping the configure logs also shows that only half of the buildfarm
(exactly 50 out of 100 reporting animals) knows -Wshadow=compatible-local,
which suggests that we might see more of these if they all did. On the
other hand, animals with newer compilers probably also have newer Perl
installations, so assuming that the Perl crew have kept this clean
recently, maybe not.

Not sure if this is problematic enough to justify removing the switch.
A plausible alternative is to have a few animals with known-clean Perl
installations add the switch manually (and use -Werror), so that we find
out about violations without having warnings in the face of developers
who can't fix them. I'm willing to wait to see if anyone complains of
such warnings, though.

regards, tom lane

#46Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#45)
Re: shadow variables - pg15 edition

Hi,

On 2022-10-10 12:06:22 -0400, Tom Lane wrote:

Scraping the configure logs also shows that only half of the buildfarm
(exactly 50 out of 100 reporting animals) knows -Wshadow=compatible-local,
which suggests that we might see more of these if they all did.

I think it's not just newness - only gcc has compatible-local, even very new
clang doesn't.

This was fixed ~6 years ago in perl:

commit f2b9631d5d19d2b71c1776e1193173d13f3620bf
Author: David Mitchell <davem@iabyn.com>
Date: 2016-05-23 14:43:56 +0100

CX_POP_SAVEARRAY(): use more distinctive var name

Under -Wshadow, CX_POP_SAVEARRAY's local var 'av' can generate this
warning:

warning: declaration shadows a local variable [-Wshadow]

So rename it to cx_pop_savearay_av to reduce the risk of a clash.

(See http://nntp.perl.org/group/perl.perl5.porters/236444)

Not sure if this is problematic enough to justify removing the switch.
A plausible alternative is to have a few animals with known-clean Perl
installations add the switch manually (and use -Werror), so that we find
out about violations without having warnings in the face of developers
who can't fix them. I'm willing to wait to see if anyone complains of
such warnings, though.

Given the age of affected perl instances I suspect there'll not be a lot of
developers affected, and the number of warnings is reasonably small too. It'd
likely hurt more developers to not see the warnings locally, given that such
shadowing often causes bugs.

Greetings,

Andres Freund

#47Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Andres Freund (#46)
Re: shadow variables - pg15 edition

On 2022-Oct-10, Andres Freund wrote:

Given the age of affected perl instances I suspect there'll not be a lot of
developers affected, and the number of warnings is reasonably small too. It'd
likely hurt more developers to not see the warnings locally, given that such
shadowing often causes bugs.

Maybe we can install a filter-out in src/pl/plperl's Makefile for the
time being.

--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"Por suerte hoy explotó el califont porque si no me habría muerto
de aburrido" (Papelucho)

#48Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#47)
Re: shadow variables - pg15 edition

Hi,

On 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:

On 2022-Oct-10, Andres Freund wrote:

Given the age of affected perl instances I suspect there'll not be a lot of
developers affected, and the number of warnings is reasonably small too. It'd
likely hurt more developers to not see the warnings locally, given that such
shadowing often causes bugs.

Maybe we can install a filter-out in src/pl/plperl's Makefile for the
time being.

We could, but is it really a useful thing for something fixed 6 years ago?

Greetings,

Andres Freund

#49Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#48)
Re: shadow variables - pg15 edition

On 2022-10-10 09:37:38 -0700, Andres Freund wrote:

On 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:

On 2022-Oct-10, Andres Freund wrote:

Given the age of affected perl instances I suspect there'll not be a lot of
developers affected, and the number of warnings is reasonably small too. It'd
likely hurt more developers to not see the warnings locally, given that such
shadowing often causes bugs.

Maybe we can install a filter-out in src/pl/plperl's Makefile for the
time being.

We could, but is it really a useful thing for something fixed 6 years ago?

As an out, a hypothetical dev could add -Wno-shadow=compatible-local to their
CFLAGS.

#50Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Andres Freund (#49)
Re: shadow variables - pg15 edition

On 2022-Oct-10, Andres Freund wrote:

On 2022-10-10 09:37:38 -0700, Andres Freund wrote:

On 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:

On 2022-Oct-10, Andres Freund wrote:

Given the age of affected perl instances I suspect there'll not be a lot of
developers affected, and the number of warnings is reasonably small too. It'd
likely hurt more developers to not see the warnings locally, given that such
shadowing often causes bugs.

Maybe we can install a filter-out in src/pl/plperl's Makefile for the
time being.

We could, but is it really a useful thing for something fixed 6 years ago?

Well, for people purposefully building against older installs of Perl
(not me, admittedly), it does seem useful, because you get the benefit
of checking shadow vars for the rest of the tree and still get no
warnings if everything is clean.

As an out, a hypothetical dev could add -Wno-shadow=compatible-local to their
CFLAGS.

But that disables it for the tree as a whole, which is not better.

We can remove the filter-out when we decide to move the Perl version
requirement up, say 4 years from now.

--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"El hombre nunca sabe de lo que es capaz hasta que lo intenta" (C. Dickens)

#51Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#50)
Re: shadow variables - pg15 edition

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

On 2022-Oct-10, Andres Freund wrote:

We could, but is it really a useful thing for something fixed 6 years ago?

Well, for people purposefully using older installs of Perl
(not me, admittedly), it does seem useful, because you get the benefit
of checking for shadow vars in the rest of the tree and still get no
warnings if everything is clean.

Meh --- people purposefully using old Perls are likely using old
compilers too. Let's wait and see if any devs actually complain.

regards, tom lane

#52David Rowley
dgrowleyml@gmail.com
In reply to: Tom Lane (#51)
Re: shadow variables - pg15 edition

On Tue, 11 Oct 2022 at 06:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

On 2022-Oct-10, Andres Freund wrote:

We could, but is it really a useful thing for something fixed 6 years ago?

Well, for people purposefully using older installs of Perl
(not me, admittedly), it does seem useful, because you get the benefit
of checking for shadow vars in the rest of the tree and still get no
warnings if everything is clean.

Meh --- people purposefully using old Perls are likely using old
compilers too. Let's wait and see if any devs actually complain.

I can't really add much here, apart from that I think it would be a shame
if some 6-year-old third-party code were to hold us back on this.

I'm also keen to wait for complaints and, only if we really have to,
remove the shadow flag in just the places where we need to.

Aside from this issue, if anything I'd be keen to go a little further
with this and upgrade to -Wshadow=local. The reason is that I
noticed that a const-qualified variable is not classed as "compatible"
with an equivalently named and typed variable without the const
qualifier. ISTM there's close to as much opportunity to mix up two
same-named variables that differ only in constness as there is to mix
up two variables with identical qualifiers. However, I'll wait for
the dust to settle on the current flags before thinking any more about
that.

David

#53Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#52)
1 attachment(s)
Re: shadow variables - pg15 edition

On Tue, Oct 11, 2022 at 01:16:50PM +1300, David Rowley wrote:

Aside from this issue, if anything I'd be keen to go a little further
with this and upgrade to -Wshadow=local. The reason being is that I
noticed that the const qualifier is not classed as "compatible" with
the equivalently named and typed variable without the const qualifier.
ISTM that there's close to as much opportunity to mix up two variables
with the same name that are const and non-const as there are two
variables with the same const qualifier. However, I'll be waiting for
the dust to settle on the current flags before thinking any more about
that.

-Wshadow=compatible-local causes one extra warning in postgres.c with
-DWRITE_READ_PARSE_PLAN_TREES:
postgres.c: In function ‘pg_rewrite_query’:
postgres.c:818:37: warning: declaration of ‘query’ shadows a parameter [-Wshadow=compatible-local]
818 | Query *query = lfirst_node(Query, lc);
| ^~~~~
postgres.c:771:25: note: shadowed declaration is here
771 | pg_rewrite_query(Query *query)
| ~~~~~~~^~~~~

Something like the patch attached would deal with this one.
--
Michael

Attachments:

shadow-warning.patch (text/x-diff; charset=us-ascii)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 5352d5f4c6..27dee29f42 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -815,15 +815,15 @@ pg_rewrite_query(Query *query)
 
 		foreach(lc, querytree_list)
 		{
-			Query	   *query = lfirst_node(Query, lc);
-			char	   *str = nodeToString(query);
+			Query	   *curr_query = lfirst_node(Query, lc);
+			char	   *str = nodeToString(curr_query);
 			Query	   *new_query = stringToNodeWithLocations(str);
 
 			/*
 			 * queryId is not saved in stored rules, but we must preserve it
 			 * here to avoid breaking pg_stat_statements.
 			 */
-			new_query->queryId = query->queryId;
+			new_query->queryId = curr_query->queryId;
 
 			new_list = lappend(new_list, new_query);
 			pfree(str);
#54David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#53)
Re: shadow variables - pg15 edition

On Wed, 12 Oct 2022 at 14:39, Michael Paquier <michael@paquier.xyz> wrote:

-Wshadow=compatible-local causes one extra warning in postgres.c with
-DWRITE_READ_PARSE_PLAN_TREES:
postgres.c: In function ‘pg_rewrite_query’:
postgres.c:818:37: warning: declaration of ‘query’ shadows a parameter [-Wshadow=compatible-local]
818 | Query *query = lfirst_node(Query, lc);
| ^~~~~
postgres.c:771:25: note: shadowed declaration is here
771 | pg_rewrite_query(Query *query)
| ~~~~~~~^~~~~

Something like the patch attached would deal with this one.

Thanks for finding that and coming up with the patch. It looks fine to
me. Do you want to push it?

David

#55Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#54)
Re: shadow variables - pg15 edition

On Wed, Oct 12, 2022 at 02:50:58PM +1300, David Rowley wrote:

Thanks for finding that and coming up with the patch. It looks fine to
me. Do you want to push it?

Thanks for double-checking. I'll do so shortly; that one had been
annoying me for a few days :)

Thanks, by the way, for your work on this thread that made it possible
to enable the switch by default.
--
Michael

#56Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: David Rowley (#52)
Re: shadow variables - pg15 edition

On 2022-Oct-11, David Rowley wrote:

I'm also keen to wait for complaints and only if we really have to,
remove the shadow flag from being used only in the places where we
need to.

+1

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"The problem with the future is that it keeps turning into the present"
(Hobbes)