shadow variables - pg15 edition
There's been no progress on this in past discussions:
/messages/by-id/877k1psmpf.fsf@mailbox.samurai.com
/messages/by-id/CAApHDvpqBR7u9yzW4yggjG=QfN=FZsc8Wo2ckokpQtif-+iQ2A@mail.gmail.com
/messages/by-id/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com
But an unfortunate consequence of not fixing the historic issues is that it
precludes the possibility that anyone could be expected to notice if they
introduce more instances of the same problem (as in the first half of these
patches). Then the hole which has already been dug becomes deeper, further
increasing the burden of fixing the historic issues before being able to use
-Wshadow.
The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local
I can't see that any of these are bugs, but it seems like a good goal to move
towards allowing use of the -Wshadow* options to help avoid future errors, as
well as cleanliness and readability (rather than allowing it to get harder to
use -Wshadow).
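As a minimal illustration of what the warning catches (hypothetical code, not
from the tree; compile with "gcc -Wshadow=compatible-local", or build the tree
with "make COPT=-Wshadow=compatible-local"):

    /* shadow.c */
    int
    sum_to(int n)
    {
        int     total = 0;

        for (int i = 0; i < n; i++)
        {
            int     total = i;  /* warning: declaration of 'total' shadows a
                                 * previous local [-Wshadow=compatible-local] */

            total++;            /* updates the inner variable only */
        }

        return total;           /* always 0: the loop's updates were lost */
    }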
--
Justin
Attachments:
0001-avoid-shadow-vars-pg_dump.c-i_oid.patch (text/x-diff; charset=us-ascii)
From 0b05b375a87d89f5d88e87d11956cf2ac15ea00f Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:38:57 -0500
Subject: [PATCH 01/17] avoid shadow vars: pg_dump.c: i_oid
backpatch to v15
commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d
Author: Robert Haas <rhaas@postgresql.org>
Date: Fri Jul 8 10:15:19 2022 -0400
Preserve relfilenode of pg_largeobject and its index across pg_upgrade.
---
src/bin/pg_dump/pg_dump.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a0..322947c5609 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
PQExpBuffer loHorizonQry = createPQExpBuffer();
int i_relfrozenxid,
i_relfilenode,
- i_oid,
i_relminmxid;
/*
--
2.17.1
0002-avoid-shadow-vars-pg_dump.c-tbinfo.patch (text/x-diff; charset=us-ascii)
From a76bac21fe428cdd6241bff6827e08d9d71e1bdf Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:55:13 -0500
Subject: [PATCH 02/17] avoid shadow vars: pg_dump.c: tbinfo
backpatch to v15
commit 9895961529ef8ff3fc12b39229f9a93e08bca7b7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Dec 6 13:07:31 2021 -0500
Avoid per-object queries in performance-critical paths in pg_dump.
---
src/bin/pg_dump/pg_dump.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 322947c5609..5c196d66985 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
appendPQExpBufferChar(tbloids, '{');
for (int i = 0; i < numTables; i++)
{
- TableInfo *tbinfo = &tblinfo[i];
+ TableInfo *mytbinfo = &tblinfo[i];
/*
* For partitioned tables, foreign keys have no triggers so they must
* be included anyway in case some foreign keys are defined.
*/
- if ((!tbinfo->hastriggers &&
- tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
- !(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+ if ((!mytbinfo->hastriggers &&
+ mytbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+ !(mytbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
continue;
/* OK, we need info for this table */
if (tbloids->len > 1) /* do we have more than the '{'? */
appendPQExpBufferChar(tbloids, ',');
- appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+ appendPQExpBuffer(tbloids, "%u", mytbinfo->dobj.catId.oid);
}
appendPQExpBufferChar(tbloids, '}');
--
2.17.1
0003-avoid-shadow-vars-pg_dump.c-owning_tab.patch (text/x-diff; charset=us-ascii)
From f6a814fd50800942081250b05f8e6d143b8d8266 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:22:52 -0500
Subject: [PATCH 03/17] avoid shadow vars: pg_dump.c: owning_tab
backpatch to v15
commit 344d62fb9a978a72cf8347f0369b9ee643fd0b31
Author: Peter Eisentraut <peter@eisentraut.org>
Date: Thu Apr 7 16:13:23 2022 +0200
Unlogged sequences
---
src/bin/pg_dump/pg_dump.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5c196d66985..ecf29f3c52a 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -16799,7 +16799,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
*/
if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
{
- TableInfo *owning_tab = findTableByOid(tbinfo->owning_tab);
+ owning_tab = findTableByOid(tbinfo->owning_tab);
if (owning_tab == NULL)
pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
--
2.17.1
0004-avoid-shadow-vars-tablesync.c-first.patch (text/x-diff; charset=us-ascii)
From 1a979be65baab871754f86669c5f0327fad6cab5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:52:03 -0500
Subject: [PATCH 04/17] avoid shadow vars: tablesync.c: first
backpatch to v15
commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat Mar 26 00:45:21 2022 +0100
Allow specifying column lists for logical replication
---
src/backend/replication/logical/tablesync.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6a01ffd273f..95d1081f4ec 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
TupleTableSlot *slot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
- bool first = true;
+ first = true;
initStringInfo(&pub_names);
foreach(lc, MySubscription->publications)
{
--
2.17.1
0005-avoid-shadow-vars-tablesync.c-slot.patch (text/x-diff; charset=us-ascii)
From 555a4545460f3086fd69ca95ac41f18c6ceaab80 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:01:16 -0500
Subject: [PATCH 05/17] avoid shadow vars: tablesync.c: slot
backpatch to v15
commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat Mar 26 00:45:21 2022 +0100
Allow specifying column lists for logical replication
---
src/backend/replication/logical/tablesync.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 95d1081f4ec..5bb9b545e9a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,6 @@ fetch_remote_table_info(char *nspname, char *relname,
if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
{
WalRcvExecResult *pubres;
- TupleTableSlot *slot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
--
2.17.1
0006-avoid-shadow-vars-basebackup_target.c-ttype.patch (text/x-diff; charset=us-ascii)
From 5ac6d302f769db6f4625be0cf6a5bae4aa60de40 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:51:10 -0500
Subject: [PATCH 06/17] avoid shadow vars: basebackup_target.c: ttype
backpatch to v15
commit e4ba69f3f4a1b997aa493cc02e563a91c0f35b87
Author: Robert Haas <rhaas@postgresql.org>
Date: Tue Mar 15 13:22:04 2022 -0400
Allow extensions to add new backup targets.
---
src/backend/backup/basebackup_target.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e32055..8d10fe15530 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -73,7 +73,7 @@ BaseBackupAddTarget(char *name,
/* Search the target type list for an existing entry with this name. */
foreach(lc, BaseBackupTargetTypeList)
{
- BaseBackupTargetType *ttype = lfirst(lc);
+ ttype = lfirst(lc);
if (strcmp(ttype->name, name) == 0)
{
--
2.17.1
0007-avoid-shadow-vars-parse_jsontable.c-jtc.patch (text/x-diff; charset=us-ascii)
From 744cb8dd010d61bef46f9623511a253429bb46cb Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:45:28 -0500
Subject: [PATCH 07/17] avoid shadow vars: parse_jsontable.c: jtc
backpatch to v15
commit fadb48b00e02ccfd152baa80942de30205ab3c4f
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Tue Apr 5 14:09:04 2022 -0400
PLAN clauses for JSON_TABLE
---
src/backend/parser/parse_jsontable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017ef..c2318b126f2 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,9 +341,9 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
/* transform all nested columns into cross/union join */
foreach(lc, columns)
{
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
Node *node;
+ jtc = castNode(JsonTableColumn, lfirst(lc));
if (jtc->coltype != JTC_NESTED)
continue;
--
2.17.1
0008-avoid-shadow-vars-res.patch (text/x-diff; charset=us-ascii)
From 660a31762a9122c240227f1f542ac4e284b5e4c5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:22:45 -0500
Subject: [PATCH 08/17] avoid shadow vars: res
backpatch to v15
commit 1a36bc9dba8eae90963a586d37b6457b32b2fed4
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Thu Mar 3 13:11:14 2022 -0500
SQL/JSON query functions
---
src/backend/utils/adt/jsonpath_exec.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 10c7e64aab3..d1e3385975a 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
if (JsonContainerIsScalar(&jb->root))
{
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool tmp PG_USED_FOR_ASSERTS_ONLY;
- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ tmp = JsonbExtractScalar(&jb->root, jbv);
+ Assert(tmp);
}
else
JsonbInitBinary(jbv, jb);
--
2.17.1
0009-avoid-shadow-vars-clauses.c-querytree_list.patch (text/x-diff; charset=us-ascii)
From ba98717eba1ffa94dd2dd23a0dd29f30b035f56b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:43:15 -0500
Subject: [PATCH 09/17] avoid shadow vars: clauses.c: querytree_list
commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date: Wed Apr 7 21:30:08 2021 +0200
SQL-standard function body
---
src/backend/optimizer/util/clauses.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 533df86ff77..e846d414f00 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4540,7 +4540,6 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
if (!isNull)
{
Node *n;
- List *querytree_list;
n = stringToNode(TextDatumGetCString(tmp));
if (IsA(n, List))
--
2.17.1
0010-avoid-shadow-vars-tablecmds.c-constraintOid.patch (text/x-diff; charset=us-ascii)
From a20657e50017676f04d11a24cbd046ce768af248 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:28:02 -0500
Subject: [PATCH 10/17] avoid shadow vars: tablecmds.c: constraintOid
commit eb7ed3f3063401496e4aa4bd68fa33f0be31a72f
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Feb 19 16:59:37 2018 -0300
Allow UNIQUE indexes on partitioned tables
---
src/backend/commands/tablecmds.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 70b94bbb397..1c0cf7c1a06 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -18098,7 +18098,6 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
if (!found)
{
IndexStmt *stmt;
- Oid constraintOid;
stmt = generateClonedIndexStmt(NULL,
idxRel, attmap,
--
2.17.1
0011-avoid-shadow-vars-tablecmds.c-copyTuple.patch (text/x-diff; charset=us-ascii)
From 6866edb3c2a738cf14abf3df79db4fc4ce8ec1e4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:17:46 -0500
Subject: [PATCH 11/17] avoid shadow vars: tablecmds.c: copyTuple
commit 6f70d7ca1d1937a9f7b79eff6fb18ed1bb2a4c47
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed May 5 12:14:21 2021 -0400
Have ALTER CONSTRAINT recurse on partitioned tables
---
src/backend/commands/tablecmds.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 1c0cf7c1a06..d6483cf1f9a 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10865,7 +10865,6 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
{
Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
Form_pg_trigger copy_tg;
- HeapTuple copyTuple;
/*
* Remember OIDs of other relation(s) involved in FK constraint.
--
2.17.1
0012-avoid-shadow-vars-copyfrom.c-attnum.patch (text/x-diff; charset=us-ascii)
From 639a1b6bc67c52e242f5cbe4f14070fdce1d5497 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:55:10 -0500
Subject: [PATCH 12/17] avoid shadow vars: copyfrom.c: attnum
commit 3a1433674696fbb968bc2120ebd36d9766f49af5
Author: Bruce Momjian <bruce@momjian.us>
Date: Thu Apr 15 22:36:03 2004 +0000
Modify COPY for() loop to use attnum as a variable name, not 'i'.
---
src/backend/commands/copyfrom.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
num_defaults;
FmgrInfo *in_functions;
Oid *typioparams;
- int attnum;
Oid in_func_oid;
int *defmap;
ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
defmap = (int *) palloc(num_phys_attrs * sizeof(int));
defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
- for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+ for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
{
Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
--
2.17.1
0013-avoid-shadow-vars-nodeAgg-transno.patch (text/x-diff; charset=us-ascii)
From f4eb4dab974b60e62f2444cc19bb50eeb1933018 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:36:13 -0500
Subject: [PATCH 13/17] avoid shadow vars: nodeAgg: transno
commit db80acfc9d50ac56811d22802ab3d822ab313055
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Tue Dec 20 09:20:17 2016 +0200
Fix sharing Agg transition state of DISTINCT or ordered aggs.
---
src/backend/executor/nodeAgg.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
Datum *aggvalues = econtext->ecxt_aggvalues;
bool *aggnulls = econtext->ecxt_aggnulls;
int aggno;
- int transno;
/*
* If there were any DISTINCT and/or ORDER BY aggregates, sort their
* inputs and run the transition functions.
*/
- for (transno = 0; transno < aggstate->numtrans; transno++)
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
{
AggStatePerTrans pertrans = &aggstate->pertrans[transno];
AggStatePerGroup pergroupstate;
--
2.17.1
0014-avoid-shadow-vars-trigger.c-partitionId.patch (text/x-diff; charset=us-ascii)
From d8467edb33575a56ae522e97ecfb329a27b1f462 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:36:12 -0500
Subject: [PATCH 14/17] avoid shadow vars: trigger.c: partitionId
commit 80ba4bb383538a2ee846fece6a7b8da9518b6866
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Thu Jul 22 18:33:47 2021 -0400
Make ALTER TRIGGER RENAME consistent for partitioned tables
---
src/backend/commands/trigger.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..bb4385c6ea9 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
for (int i = 0; i < partdesc->nparts; i++)
{
- Oid partitionId = partdesc->oids[i];
+ Oid partid = partdesc->oids[i];
- renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+ renametrig_partition(tgrel, partid, tgform->oid, newname,
NameStr(tgform->tgname));
}
}
--
2.17.1
0015-avoid-shadow-vars-execPartition.c-found_whole_row.patch (text/x-diff; charset=us-ascii)
From f24e62293892170cc500907b15e70d75b2503ae1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:20:38 -0500
Subject: [PATCH 15/17] avoid shadow vars: execPartition.c: found_whole_row
commit 158b7bc6d77948d2f474dc9f2777c87f81d1365a
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Apr 16 15:50:57 2018 -0300
Ignore whole-rows in INSERT/CONFLICT with partitioned tables
See also:
commit 555ee77a9668e3f1b03307055b5027e13bf1a715
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Mar 26 10:43:54 2018 -0300
Handle INSERT .. ON CONFLICT with partitioned tables
---
src/backend/executor/execPartition.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index eb491061024..6998ba8ae23 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
/*
* Translate expressions in onConflictSet to account for
--
2.17.1
0016-avoid-shadow-vars-brin-keyno.patch (text/x-diff; charset=us-ascii)
From 79fe22270a9ab91c7a561c2bff2a64b20c1797e7 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 17:10:55 -0500
Subject: [PATCH 16/17] avoid shadow vars: brin keyno
commit a681e3c107aa97eb554f118935c4d2278892c3dd
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri Mar 26 13:17:56 2021 +0100
Support the old signature of BRIN consistent function
---
src/backend/access/brin/brin.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
**nullkeys;
int *nkeys,
*nnullkeys;
- int keyno;
char *ptr;
Size len;
char *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
/* Preprocess the scan keys - split them into per-attribute arrays. */
- for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+ for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
{
ScanKey key = &scan->keyData[keyno];
AttrNumber keyattno = key->sk_attno;
--
2.17.1
0017-avoid-shadow-vars-bufmgr.c-j.patch (text/x-diff; charset=us-ascii)
From 50ded6f49a3f2e7ce4b221201ea6d38a5bda83c5 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 23:52:21 -0500
Subject: [PATCH 17/17] avoid shadow vars: bufmgr.c: j
commit bea449c635c0e68e21610593594c1e5d52842cdd
Author: Amit Kapila <akapila@postgresql.org>
Date: Wed Jan 13 07:46:11 2021 +0530
Optimize DropRelFileNodesAllBuffers() for recovery.
---
src/backend/storage/buffer/bufmgr.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 9c1bd508d36..a748efdb942 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
int i;
- int j;
int n = 0;
SMgrRelation *rels;
BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
*/
for (i = 0; i < n && cached; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* Get the number of blocks for a relation's fork. */
block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
for (i = 0; i < n; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* ignore relation forks that doesn't exist */
if (!BlockNumberIsValid(block[i][j]))
--
2.17.1
On Thu, Aug 18, 2022 at 12:54 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
There's been no progress on this in past discussions:
/messages/by-id/877k1psmpf.fsf@mailbox.samurai.com
/messages/by-id/CAApHDvpqBR7u9yzW4yggjG=QfN=FZsc8Wo2ckokpQtif-+iQ2A@mail.gmail.com
/messages/by-id/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com
But an unfortunate consequence of not fixing the historic issues is that it
precludes the possibility that anyone could be expected to notice if they
introduce more instances of the same problem (as in the first half of these
patches). Then the hole which has already been dug becomes deeper, further
increasing the burden of fixing the historic issues before being able to use
-Wshadow.
The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local
I can't see that any of these are bugs, but it seems like a good goal to move
towards allowing use of the -Wshadow* options to help avoid future errors, as
well as cleanliness and readability (rather than allowing it to get harder to
use -Wshadow).
Hey, thanks for picking this up!
I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.
+1 from me.
------
[1]: /messages/by-id/CAHut+Puv4LaQKVQSErtV_=3MezUdpipVOMt7tJ3fXHxt_YK-Zw@mail.gmail.com
Kind Regards,
Peter Smith.
Fujitsu Australia
On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:
I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.
+1 from me.
A lot of the changes proposed here update the code so that the same
variable gets used across more code paths by removing declarations,
even though the two variables were defined because each is meant to be
used in a different context (see AttachPartitionEnsureIndexes() in
tablecmds.c for example).
Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?
--
Michael
On Thu, Aug 18, 2022 at 09:39:02AM +0900, Michael Paquier wrote:
On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:
I'd started looking at these [1] last year and spent a day trying to
categorise them all in a spreadsheet (shadows a global, shadows a
parameter, shadows a local var etc) but I became swamped by the
volume, and then other work/life got in the way.
+1 from me.
A lot of the changes proposed here update the code so that the same
variable gets used across more code paths by removing declarations,
even though the two variables were defined because each is meant to be
used in a different context (see AttachPartitionEnsureIndexes() in
tablecmds.c for example).
Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?
The cases where I removed a declaration are ones where the variable either
hasn't yet been assigned in the outer scope (so it's safe to use it first in
the inner scope, since its value is later overwritten in the outer scope), or
is no longer used in the outer scope, so it's safe to re-use it in the inner
scope (as in AttachPartitionEnsureIndexes). Since you think it's saner, I've
changed the patches to rename them.
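As a toy sketch of those two removal cases (hypothetical code, just to
illustrate the reasoning; not from the tree):

    /* toy.c: the two patterns where dropping the inner declaration is safe */
    #include <stdio.h>

    int
    main(void)
    {
        int     len;            /* case 1: declared but not yet assigned */

        {
            len = 3;            /* was "int len = 3;": the shadow can go,
                                 * since the outer len is still unset here
                                 * and is overwritten below anyway */
            printf("inner: %d\n", len);
        }

        len = 10;               /* the outer scope's own assignment */
        printf("outer: %d\n", len);

        /* case 2: len is never needed again after this point, so an inner
         * block can reuse it instead of declaring a shadow */
        {
            len = 42;
            printf("reused: %d\n", len);
        }

        return 0;
    }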
In the case of "first", the var is used the same way in two independent loops
and re-initialized in each. In the case of found_whole_row, the var is
ignored, as the comments say, so it would be silly to declare more vars to be
additionally ignored.
--
Justin
PS. I hadn't sent the other patches which rename the variables, having assumed
that the discussion would be bikeshedded to death and derail without having
fixed the lowest-hanging fruit. I'm attaching those now to see what happens.
Attachments:
0001-avoid-shadow-vars-pg_dump.c-i_oid.patch (text/x-diff; charset=us-ascii)
From 97768e5a439bef016e6ebd5221ed148f076c6e3f Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:38:57 -0500
Subject: [PATCH 01/26] avoid shadow vars: pg_dump.c: i_oid
backpatch to v15
commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d
Author: Robert Haas <rhaas@postgresql.org>
Date: Fri Jul 8 10:15:19 2022 -0400
Preserve relfilenode of pg_largeobject and its index across pg_upgrade.
---
src/bin/pg_dump/pg_dump.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a0..322947c5609 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
PQExpBuffer loHorizonQry = createPQExpBuffer();
int i_relfrozenxid,
i_relfilenode,
- i_oid,
i_relminmxid;
/*
--
2.17.1
0002-avoid-shadow-vars-pg_dump.c-tbinfo.patch (text/x-diff; charset=us-ascii)
From ce729535c47d72db775ebcf1f185799c78615148 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:55:13 -0500
Subject: [PATCH 02/26] avoid shadow vars: pg_dump.c: tbinfo
backpatch to v15
commit 9895961529ef8ff3fc12b39229f9a93e08bca7b7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Dec 6 13:07:31 2021 -0500
Avoid per-object queries in performance-critical paths in pg_dump.
---
src/bin/pg_dump/pg_dump.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 322947c5609..5c196d66985 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
appendPQExpBufferChar(tbloids, '{');
for (int i = 0; i < numTables; i++)
{
- TableInfo *tbinfo = &tblinfo[i];
+ TableInfo *mytbinfo = &tblinfo[i];
/*
* For partitioned tables, foreign keys have no triggers so they must
* be included anyway in case some foreign keys are defined.
*/
- if ((!tbinfo->hastriggers &&
- tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
- !(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+ if ((!mytbinfo->hastriggers &&
+ mytbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+ !(mytbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
continue;
/* OK, we need info for this table */
if (tbloids->len > 1) /* do we have more than the '{'? */
appendPQExpBufferChar(tbloids, ',');
- appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+ appendPQExpBuffer(tbloids, "%u", mytbinfo->dobj.catId.oid);
}
appendPQExpBufferChar(tbloids, '}');
--
2.17.1
0003-avoid-shadow-vars-pg_dump.c-owning_tab.patch (text/x-diff; charset=us-ascii)
From 478fa745d4ddc38fe15f54d7d396ebf7a106772b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:22:52 -0500
Subject: [PATCH 03/26] avoid shadow vars: pg_dump.c: owning_tab
backpatch to v15
commit 344d62fb9a978a72cf8347f0369b9ee643fd0b31
Author: Peter Eisentraut <peter@eisentraut.org>
Date: Thu Apr 7 16:13:23 2022 +0200
Unlogged sequences
---
src/bin/pg_dump/pg_dump.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5c196d66985..4b5d8df1e4e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
*/
if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
{
- TableInfo *owning_tab = findTableByOid(tbinfo->owning_tab);
+ TableInfo *this_owning_tab = findTableByOid(tbinfo->owning_tab);
- if (owning_tab == NULL)
+ if (this_owning_tab == NULL)
pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
tbinfo->owning_tab, tbinfo->dobj.catId.oid);
- if (owning_tab->dobj.dump & DUMP_COMPONENT_DEFINITION)
+ if (this_owning_tab->dobj.dump & DUMP_COMPONENT_DEFINITION)
{
resetPQExpBuffer(query);
appendPQExpBuffer(query, "ALTER SEQUENCE %s",
fmtQualifiedDumpable(tbinfo));
appendPQExpBuffer(query, " OWNED BY %s",
- fmtQualifiedDumpable(owning_tab));
+ fmtQualifiedDumpable(this_owning_tab));
appendPQExpBuffer(query, ".%s;\n",
- fmtId(owning_tab->attnames[tbinfo->owning_col - 1]));
+ fmtId(this_owning_tab->attnames[tbinfo->owning_col - 1]));
if (tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
ArchiveEntry(fout, nilCatalogId, createDumpId(),
--
2.17.1
0004-avoid-shadow-vars-tablesync.c-first.patch (text/x-diff; charset=us-ascii)
From f67d6fe9b9bca6334f596478fb0317025ae51226 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:52:03 -0500
Subject: [PATCH 04/26] avoid shadow vars: tablesync.c: first
backpatch to v15
commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat Mar 26 00:45:21 2022 +0100
Allow specifying column lists for logical replication
---
src/backend/replication/logical/tablesync.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index bfcb80b4955..71b503f4217 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
TupleTableSlot *slot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
- bool first = true;
+ first = true;
initStringInfo(&pub_names);
foreach(lc, MySubscription->publications)
{
--
2.17.1
0005-avoid-shadow-vars-tablesync.c-slot.patch (text/x-diff; charset=us-ascii)
From 51c4c49e81c802d74e348222df66fcff3b841814 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:01:16 -0500
Subject: [PATCH 05/26] avoid shadow vars: tablesync.c: slot
backpatch to v15
commit 923def9a533a7d986acfb524139d8b9e5466d0a5
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat Mar 26 00:45:21 2022 +0100
Allow specifying column lists for logical replication
---
src/backend/replication/logical/tablesync.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 71b503f4217..cfc47dc8df0 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,
if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
{
WalRcvExecResult *pubres;
- TupleTableSlot *slot;
+ TupleTableSlot *thisslot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
@@ -819,10 +819,10 @@ fetch_remote_table_info(char *nspname, char *relname,
* If we find a NULL value, it means all the columns should be
* replicated.
*/
- slot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
- if (tuplestore_gettupleslot(pubres->tuplestore, true, false, slot))
+ thisslot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
+ if (tuplestore_gettupleslot(pubres->tuplestore, true, false, thisslot))
{
- Datum cfval = slot_getattr(slot, 1, &isnull);
+ Datum cfval = slot_getattr(thisslot, 1, &isnull);
if (!isnull)
{
@@ -838,9 +838,9 @@ fetch_remote_table_info(char *nspname, char *relname,
included_cols = bms_add_member(included_cols, elems[natt]);
}
- ExecClearTuple(slot);
+ ExecClearTuple(thisslot);
}
- ExecDropSingleTupleTableSlot(slot);
+ ExecDropSingleTupleTableSlot(thisslot);
walrcv_clear_result(pubres);
--
2.17.1
0006-avoid-shadow-vars-basebackup_target.c-ttype.patch (text/x-diff; charset=us-ascii)
From ac32d509971319d8804b184d014730384a7c93ae Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:51:10 -0500
Subject: [PATCH 06/26] avoid shadow vars: basebackup_target.c: ttype
backpatch to v15
commit e4ba69f3f4a1b997aa493cc02e563a91c0f35b87
Author: Robert Haas <rhaas@postgresql.org>
Date: Tue Mar 15 13:22:04 2022 -0400
Allow extensions to add new backup targets.
---
src/backend/backup/basebackup_target.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e32055..1553568a37d 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,
/* Search the target type list for an existing entry with this name. */
foreach(lc, BaseBackupTargetTypeList)
{
- BaseBackupTargetType *ttype = lfirst(lc);
+ BaseBackupTargetType *this_ttype = lfirst(lc);
- if (strcmp(ttype->name, name) == 0)
+ if (strcmp(this_ttype->name, name) == 0)
{
/*
* We found one, so update it.
@@ -84,8 +84,8 @@ BaseBackupAddTarget(char *name,
* the same name multiple times, but if it happens, this seems
* like the sanest behavior.
*/
- ttype->check_detail = check_detail;
- ttype->get_sink = get_sink;
+ this_ttype->check_detail = check_detail;
+ this_ttype->get_sink = get_sink;
return;
}
}
--
2.17.1
0007-avoid-shadow-vars-parse_jsontable.c-jtc.patch (text/x-diff; charset=us-ascii)
From 180081aac947f65bf87c22a9da68b9383e521cd4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:45:28 -0500
Subject: [PATCH 07/26] avoid shadow vars: parse_jsontable.c: jtc
backpatch to v15
commit fadb48b00e02ccfd152baa80942de30205ab3c4f
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Tue Apr 5 14:09:04 2022 -0400
PLAN clauses for JSON_TABLE
---
src/backend/parser/parse_jsontable.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017ef..84ff3fac140 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
/* transform all nested columns into cross/union join */
foreach(lc, columns)
{
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+ JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));
Node *node;
- if (jtc->coltype != JTC_NESTED)
+ if (thisjtc->coltype != JTC_NESTED)
continue;
- node = transformNestedJsonTableColumn(cxt, jtc, plan);
+ node = transformNestedJsonTableColumn(cxt, thisjtc, plan);
/* join transformed node with previous sibling nodes */
res = res ? makeJsonTableSiblingJoin(cross, res, node) : node;
--
2.17.1
0008-avoid-shadow-vars-res.patch (text/x-diff; charset=us-ascii)
From 2fc3e45ae6c9d40de537577c75a32ec9e766ac76 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:22:45 -0500
Subject: [PATCH 08/26] avoid shadow vars: res
backpatch to v15
commit 1a36bc9dba8eae90963a586d37b6457b32b2fed4
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Thu Mar 3 13:11:14 2022 -0500
SQL/JSON query functions
---
src/backend/utils/adt/jsonpath_exec.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 5b6a4805721..ff1ec607eb1 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
if (JsonContainerIsScalar(&jb->root))
{
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool tmp PG_USED_FOR_ASSERTS_ONLY;
- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ tmp = JsonbExtractScalar(&jb->root, jbv);
+ Assert(tmp);
}
else
JsonbInitBinary(jbv, jb);
--
2.17.1
0009-avoid-shadow-vars-clauses.c-querytree_list.patch (text/x-diff; charset=us-ascii)
From 058d0ccbb7553def2ee3cfdc36b18ac49bfe81f3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:43:15 -0500
Subject: [PATCH 09/26] avoid shadow vars: clauses.c: querytree_list
commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date: Wed Apr 7 21:30:08 2021 +0200
SQL-standard function body
---
src/backend/optimizer/util/clauses.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 533df86ff77..cfccdd08b5a 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4540,16 +4540,16 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
if (!isNull)
{
Node *n;
- List *querytree_list;
+ List *this_querytree_list;
n = stringToNode(TextDatumGetCString(tmp));
if (IsA(n, List))
- querytree_list = linitial_node(List, castNode(List, n));
+ this_querytree_list = linitial_node(List, castNode(List, n));
else
- querytree_list = list_make1(n);
- if (list_length(querytree_list) != 1)
+ this_querytree_list = list_make1(n);
+ if (list_length(this_querytree_list) != 1)
goto fail;
- querytree = linitial(querytree_list);
+ querytree = linitial(this_querytree_list);
/*
* Because we'll insist below that the querytree have an empty rtable
--
2.17.1
0010-avoid-shadow-vars-tablecmds.c-constraintOid.patch (text/x-diff; charset=us-ascii)
From dc97085ee1fe2a4a8146ceb99a2829667a55b6d8 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:28:02 -0500
Subject: [PATCH 10/26] avoid shadow vars: tablecmds.c: constraintOid
commit eb7ed3f3063401496e4aa4bd68fa33f0be31a72f
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Feb 19 16:59:37 2018 -0300
Allow UNIQUE indexes on partitioned tables
---
src/backend/commands/tablecmds.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 8d7c68b8b3c..93b217ee14d 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -18098,14 +18098,14 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
if (!found)
{
IndexStmt *stmt;
- Oid constraintOid;
+ Oid this_conid;
stmt = generateClonedIndexStmt(NULL,
idxRel, attmap,
- &constraintOid);
+ &this_conid);
DefineIndex(RelationGetRelid(attachrel), stmt, InvalidOid,
RelationGetRelid(idxRel),
- constraintOid,
+ this_conid,
true, false, false, false, false);
}
--
2.17.1
0011-avoid-shadow-vars-tablecmds.c-copyTuple.patch (text/x-diff; charset=us-ascii)
From 0ff0dad2688b130b91cf0d532e042338544266d6 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:17:46 -0500
Subject: [PATCH 11/26] avoid shadow vars: tablecmds.c: copyTuple
commit 6f70d7ca1d1937a9f7b79eff6fb18ed1bb2a4c47
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed May 5 12:14:21 2021 -0400
Have ALTER CONSTRAINT recurse on partitioned tables
---
src/backend/commands/tablecmds.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 93b217ee14d..3403307c893 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10865,7 +10865,7 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
{
Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
Form_pg_trigger copy_tg;
- HeapTuple copyTuple;
+ HeapTuple this_copyTuple;
/*
* Remember OIDs of other relation(s) involved in FK constraint.
@@ -10889,16 +10889,16 @@ ATExecAlterConstrRecurse(Constraint *cmdcon, Relation conrel, Relation tgrel,
tgform->tgfoid != F_RI_FKEY_CHECK_UPD)
continue;
- copyTuple = heap_copytuple(tgtuple);
- copy_tg = (Form_pg_trigger) GETSTRUCT(copyTuple);
+ this_copyTuple = heap_copytuple(tgtuple);
+ copy_tg = (Form_pg_trigger) GETSTRUCT(this_copyTuple);
copy_tg->tgdeferrable = cmdcon->deferrable;
copy_tg->tginitdeferred = cmdcon->initdeferred;
- CatalogTupleUpdate(tgrel, &copyTuple->t_self, copyTuple);
+ CatalogTupleUpdate(tgrel, &this_copyTuple->t_self, this_copyTuple);
InvokeObjectPostAlterHook(TriggerRelationId, tgform->oid, 0);
- heap_freetuple(copyTuple);
+ heap_freetuple(this_copyTuple);
}
systable_endscan(tgscan);
--
2.17.1
0012-avoid-shadow-vars-heap.c-rel-tuple.patch (text/x-diff; charset=us-ascii)
From 8de2ba6b4b419280ccc6a6b4ea17e34d09458fbc Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 21:02:16 -0500
Subject: [PATCH 12/26] avoid shadow vars: heap.c: rel, tuple
commit 0d692a0dc9f0e532c67c577187fe5d7d323cb95b
Author: Robert Haas <rhaas@postgresql.org>
Date: Sat Jan 1 23:48:11 2011 -0500
Basic foreign table support.
commit 258cef12540fa1cb244881a0f019cefd698c809e
Author: Robert Haas <rhaas@postgresql.org>
Date: Tue Apr 11 09:08:36 2017 -0400
Fix possibile deadlock when dropping partitions.
---
src/backend/catalog/heap.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
*/
if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
{
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;
- rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+ pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
- tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
- if (!HeapTupleIsValid(tuple))
+ foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+ if (!HeapTupleIsValid(foreigntuple))
elog(ERROR, "cache lookup failed for foreign table %u", relid);
- CatalogTupleDelete(rel, &tuple->t_self);
+ CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
- ReleaseSysCache(tuple);
- table_close(rel, RowExclusiveLock);
+ ReleaseSysCache(foreigntuple);
+ table_close(pg_foreign_table, RowExclusiveLock);
}
/*
--
2.17.1
0013-avoid-shadow-vars-copyfrom.c-attnum.patch (text/x-diff; charset=us-ascii)
From d6d3eca1d77b40bb18328c0ecf78966d7abd27bb Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 16:55:10 -0500
Subject: [PATCH 13/26] avoid shadow vars: copyfrom.c: attnum
commit 3a1433674696fbb968bc2120ebd36d9766f49af5
Author: Bruce Momjian <bruce@momjian.us>
Date: Thu Apr 15 22:36:03 2004 +0000
Modify COPY for() loop to use attnum as a variable name, not 'i'.
---
src/backend/commands/copyfrom.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
num_defaults;
FmgrInfo *in_functions;
Oid *typioparams;
- int attnum;
Oid in_func_oid;
int *defmap;
ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
defmap = (int *) palloc(num_phys_attrs * sizeof(int));
defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
- for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+ for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
{
Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
--
2.17.1
0014-avoid-shadow-vars-nodeAgg-transno.patch (text/x-diff; charset=us-ascii)
From a8d6f5b7831964a23c9b2a3fc1e0d0a4d71f74c3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 18:36:13 -0500
Subject: [PATCH 14/26] avoid shadow vars: nodeAgg: transno
commit db80acfc9d50ac56811d22802ab3d822ab313055
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Tue Dec 20 09:20:17 2016 +0200
Fix sharing Agg transition state of DISTINCT or ordered aggs.
---
src/backend/executor/nodeAgg.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
Datum *aggvalues = econtext->ecxt_aggvalues;
bool *aggnulls = econtext->ecxt_aggnulls;
int aggno;
- int transno;
/*
* If there were any DISTINCT and/or ORDER BY aggregates, sort their
* inputs and run the transition functions.
*/
- for (transno = 0; transno < aggstate->numtrans; transno++)
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
{
AggStatePerTrans pertrans = &aggstate->pertrans[transno];
AggStatePerGroup pergroupstate;
--
2.17.1
0015-avoid-shadow-vars-trigger.c-partitionId.patch (text/x-diff; charset=us-ascii)
From 25e68782b1686f8e164174cc03be04deb1950920 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:36:12 -0500
Subject: [PATCH 15/26] avoid shadow vars: trigger.c: partitionId
commit 80ba4bb383538a2ee846fece6a7b8da9518b6866
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Thu Jul 22 18:33:47 2021 -0400
Make ALTER TRIGGER RENAME consistent for partitioned tables
---
src/backend/commands/trigger.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..bb4385c6ea9 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
for (int i = 0; i < partdesc->nparts; i++)
{
- Oid partitionId = partdesc->oids[i];
+ Oid partid = partdesc->oids[i];
- renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+ renametrig_partition(tgrel, partid, tgform->oid, newname,
NameStr(tgform->tgname));
}
}
--
2.17.1
0016-avoid-shadow-vars-execPartition.c-found_whole_row.patch (text/x-diff; charset=us-ascii)
From 4a5814a70946496eba42f9ac43e9cafac314cc3e Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 20:20:38 -0500
Subject: [PATCH 16/26] avoid shadow vars: execPartition.c: found_whole_row
commit 158b7bc6d77948d2f474dc9f2777c87f81d1365a
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Apr 16 15:50:57 2018 -0300
Ignore whole-rows in INSERT/CONFLICT with partitioned tables
See also:
commit 555ee77a9668e3f1b03307055b5027e13bf1a715
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Mar 26 10:43:54 2018 -0300
Handle INSERT .. ON CONFLICT with partitioned tables
---
src/backend/executor/execPartition.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
/*
* Translate expressions in onConflictSet to account for
--
2.17.1
0017-avoid-shadow-vars-brin-keyno.patch (text/x-diff; charset=us-ascii)
From 89289938e3b6aa46d4dcc387da841d1ba10b2787 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 17:10:55 -0500
Subject: [PATCH 17/26] avoid shadow vars: brin keyno
commit a681e3c107aa97eb554f118935c4d2278892c3dd
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri Mar 26 13:17:56 2021 +0100
Support the old signature of BRIN consistent function
---
src/backend/access/brin/brin.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
**nullkeys;
int *nkeys,
*nnullkeys;
- int keyno;
char *ptr;
Size len;
char *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
/* Preprocess the scan keys - split them into per-attribute arrays. */
- for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+ for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
{
ScanKey key = &scan->keyData[keyno];
AttrNumber keyattno = key->sk_attno;
--
2.17.1
0018-avoid-shadow-vars-bufmgr.c-j.patch (text/x-diff; charset=us-ascii)
From 0ff18c1521f4057e2dc0affb7321ae1da747586b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 23:52:21 -0500
Subject: [PATCH 18/26] avoid shadow vars: bufmgr.c: j
commit bea449c635c0e68e21610593594c1e5d52842cdd
Author: Amit Kapila <akapila@postgresql.org>
Date: Wed Jan 13 07:46:11 2021 +0530
Optimize DropRelFileNodesAllBuffers() for recovery.
---
src/backend/storage/buffer/bufmgr.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 9c1bd508d36..a748efdb942 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
int i;
- int j;
int n = 0;
SMgrRelation *rels;
BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
*/
for (i = 0; i < n && cached; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* Get the number of blocks for a relation's fork. */
block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
for (i = 0; i < n; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* ignore relation forks that doesn't exist */
if (!BlockNumberIsValid(block[i][j]))
--
2.17.1
0019-avoid-shadow-vars-psql-command.c-host.patch (text/x-diff; charset=us-ascii)
From aeb44609d8734ac799bdb941addac0fb82f7e549 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:49:46 -0500
Subject: [PATCH 19/26] avoid shadow vars: psql/command.c: host
commit 87e0b7422d70ff4fb69612ef7ba3cbee6ed8d2ae
Author: Robert Haas <rhaas@postgresql.org>
Date: Fri Jul 23 14:56:54 2010 +0000
Have psql avoid describing local sockets as host names.
---
src/bin/psql/command.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index a81bd3307b4..09950cac60a 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3551,27 +3551,27 @@ do_connect(enum trivalue reuse_previous_specification,
param_is_newly_set(PQhost(o_conn), PQhost(pset.db)) ||
param_is_newly_set(PQport(o_conn), PQport(pset.db)))
{
- char *host = PQhost(pset.db);
- char *hostaddr = PQhostaddr(pset.db);
+ char *dbhost = PQhost(pset.db);
+ char *dbhostaddr = PQhostaddr(pset.db);
- if (is_unixsock_path(host))
+ if (is_unixsock_path(dbhost))
{
/* hostaddr overrides host */
- if (hostaddr && *hostaddr)
+ if (dbhostaddr && *dbhostaddr)
printf(_("You are now connected to database \"%s\" as user \"%s\" on address \"%s\" at port \"%s\".\n"),
- PQdb(pset.db), PQuser(pset.db), hostaddr, PQport(pset.db));
+ PQdb(pset.db), PQuser(pset.db), dbhostaddr, PQport(pset.db));
else
printf(_("You are now connected to database \"%s\" as user \"%s\" via socket in \"%s\" at port \"%s\".\n"),
- PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+ PQdb(pset.db), PQuser(pset.db), dbhost, PQport(pset.db));
}
else
{
- if (hostaddr && *hostaddr && strcmp(host, hostaddr) != 0)
+ if (dbhostaddr && *dbhostaddr && strcmp(dbhost, dbhostaddr) != 0)
printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" (address \"%s\") at port \"%s\".\n"),
- PQdb(pset.db), PQuser(pset.db), host, hostaddr, PQport(pset.db));
+ PQdb(pset.db), PQuser(pset.db), dbhost, dbhostaddr, PQport(pset.db));
else
printf(_("You are now connected to database \"%s\" as user \"%s\" on host \"%s\" at port \"%s\".\n"),
- PQdb(pset.db), PQuser(pset.db), host, PQport(pset.db));
+ PQdb(pset.db), PQuser(pset.db), dbhost, PQport(pset.db));
}
}
else
--
2.17.1
0020-avoid-shadow-vars-ruleutils-dpns.patch (text/x-diff; charset=us-ascii)
From c0a638c97f05e493bf6fee86895d746db89bcbb1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 08:56:42 -0500
Subject: [PATCH 20/26] avoid shadow vars: ruleutils: dpns
commit e717a9a18b2e34c9c40e5259ad4d31cd7e420750
Author: Peter Eisentraut <peter@eisentraut.org>
Date: Wed Apr 7 21:30:08 2021 +0200
SQL-standard function body
---
src/backend/utils/adt/ruleutils.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..44a3b064cad 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -8112,9 +8112,9 @@ get_parameter(Param *param, deparse_context *context)
*/
foreach(lc, context->namespaces)
{
- deparse_namespace *dpns = lfirst(lc);
+ deparse_namespace *tmp = lfirst(lc);
- if (dpns->rtable_names != NIL)
+ if (tmp->rtable_names != NIL)
{
should_qualify = true;
break;
--
2.17.1
0021-avoid-shadow-vars-costsize.c-subpath.patch (text/x-diff; charset=us-ascii)
From c95f78e6a9b0e29fe195fb1cad34309eb5f5a8b3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:54:19 -0500
Subject: [PATCH 21/26] avoid shadow vars: costsize.c: subpath
commit 959d00e9dbe4cfcf4a63bb655ac2c29a5e579246
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Apr 5 19:20:30 2019 -0400
Use Append rather than MergeAppend for scanning ordered partitions.
---
src/backend/optimizer/path/costsize.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..504b13da7be 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2545,10 +2545,10 @@ cost_append(AppendPath *apath, PlannerInfo *root)
/* Compute rows and costs as sums of subplan rows and costs. */
foreach(l, apath->subpaths)
{
- Path *subpath = (Path *) lfirst(l);
+ Path *sub = (Path *) lfirst(l);
- apath->path.rows += subpath->rows;
- apath->path.total_cost += subpath->total_cost;
+ apath->path.rows += sub->rows;
+ apath->path.total_cost += sub->total_cost;
}
}
else
--
2.17.1
0022-avoid-shadow-vars-partitionfuncs.c-relid.patch (text/x-diff; charset=us-ascii)
From a9c0e3e1b79a961b3386e0cc489966fad820bbb1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 15:59:47 -0500
Subject: [PATCH 22/26] avoid shadow vars: partitionfuncs.c: relid
commit b96f6b19487fb9802216311b242c01c27c1938de
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Mar 4 16:14:29 2019 -0300
pg_partition_ancestors
---
src/backend/utils/adt/partitionfuncs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
index 109dc8023e1..59983381924 100644
--- a/src/backend/utils/adt/partitionfuncs.c
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -238,9 +238,9 @@ pg_partition_ancestors(PG_FUNCTION_ARGS)
if (funcctx->call_cntr < list_length(ancestors))
{
- Oid relid = list_nth_oid(ancestors, funcctx->call_cntr);
+ Oid thisrelid = list_nth_oid(ancestors, funcctx->call_cntr);
- SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(thisrelid));
}
SRF_RETURN_DONE(funcctx);
--
2.17.1
0023-avoid-shadow-vars-rangetypes_gist.c-range.patch (text/x-diff; charset=us-ascii)
From ab02a73d6d100100cd5a87073e5cdc3c9a0865d1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:19:50 -0500
Subject: [PATCH 23/26] avoid shadow vars: rangetypes_gist.c: range
commit 80da9e68fdd70b796b3a7de3821589513596c0f7
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Sun Mar 4 22:50:06 2012 -0500
Rewrite GiST support code for rangetypes.
---
src/backend/utils/adt/rangetypes_gist.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c
index fbf39dbf303..a14b8261fdb 100644
--- a/src/backend/utils/adt/rangetypes_gist.c
+++ b/src/backend/utils/adt/rangetypes_gist.c
@@ -1350,10 +1350,10 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache,
/* Fill arrays of bounds */
for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
{
- RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key);
+ RangeType *thisrange = DatumGetRangeTypeP(entryvec->vector[i].key);
bool empty;
- range_deserialize(typcache, range,
+ range_deserialize(typcache, thisrange,
&by_lower[i - FirstOffsetNumber].lower,
&by_lower[i - FirstOffsetNumber].upper,
&empty);
--
2.17.1
0024-avoid-shadow-vars-ecpglib-execute.c-len.patch (text/x-diff; charset=us-ascii)
From d35ec6a168342182db48442545dd58649ac85094 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 19:21:49 -0500
Subject: [PATCH 24/26] avoid shadow vars: ecpglib/execute.c: len
commit a4f25b6a9c2dbf5f38e498922e3761cb3bf46ba0
Author: Michael Meskes <meskes@postgresql.org>
Date: Sun Mar 16 10:42:54 2003 +0000
Started working on a seperate pgtypes library. First test work. PLEASE test compilation on iother systems.
---
src/interfaces/ecpg/ecpglib/execute.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c
index bd94bd4e6c6..2d34f76cd73 100644
--- a/src/interfaces/ecpg/ecpglib/execute.c
+++ b/src/interfaces/ecpg/ecpglib/execute.c
@@ -367,10 +367,10 @@ ecpg_store_result(const PGresult *results, int act_field,
/* check strlen for each tuple */
for (act_tuple = 0; act_tuple < ntuples; act_tuple++)
{
- int len = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
+ int thislen = strlen(PQgetvalue(results, act_tuple, act_field)) + 1;
- if (len > var->varcharsize)
- var->varcharsize = len;
+ if (thislen > var->varcharsize)
+ var->varcharsize = thislen;
}
var->offset *= var->varcharsize;
len = var->offset * ntuples;
--
2.17.1
0025-avoid-shadow-vars-autovacuum.c-db.patch (text/x-diff; charset=us-ascii)
From 2faa21ca588f0af51fbd69b4d37dd5c82f8bb5cd Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 16 Aug 2022 21:05:47 -0500
Subject: [PATCH 25/26] avoid shadow vars: autovacuum.c: db
commit e2a186b03cc1a87cf26644db18f28a20f10bd739
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Mon Apr 16 18:30:04 2007 +0000
Add a multi-worker capability to autovacuum. This allows multiple worker
---
src/backend/postmaster/autovacuum.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 70a9176c54c..9d5ddb80974 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1102,14 +1102,14 @@ rebuild_database_list(Oid newdb)
*/
for (i = 0; i < nelems; i++)
{
- avl_dbase *db = &(dbary[i]);
+ avl_dbase *thisdb = &(dbary[i]);
current_time = TimestampTzPlusMilliseconds(current_time,
millis_increment);
- db->adl_next_worker = current_time;
+ thisdb->adl_next_worker = current_time;
/* later elements should go closer to the head of the list */
- dlist_push_head(&DatabaseList, &db->adl_node);
+ dlist_push_head(&DatabaseList, &thisdb->adl_node);
}
}
--
2.17.1
0026-avoid-shadow-vars-basebackup.c-ti.patch (text/x-diff; charset=us-ascii)
From a1c1323ca209e7de7198066843d854cbc9fab127 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Aug 2022 00:05:30 -0500
Subject: [PATCH 26/26] avoid shadow vars: basebackup.c: ti
commit 3866ff6149a3b072561e65b3f71f63498e77b6b2
Author: Magnus Hagander <magnus@hagander.net>
Date: Sat Jan 15 19:18:14 2011 +0100
Enumerate available tablespaces after starting the backup
---
src/backend/backup/basebackup.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index 715428029b3..b3367dccf74 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -306,9 +306,9 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
/* Send off our tablespaces one by one */
foreach(lc, state.tablespaces)
{
- tablespaceinfo *ti = (tablespaceinfo *) lfirst(lc);
+ tablespaceinfo *thisti = (tablespaceinfo *) lfirst(lc);
- if (ti->path == NULL)
+ if (thisti->path == NULL)
{
struct stat statbuf;
bool sendtblspclinks = true;
@@ -342,11 +342,11 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
}
else
{
- char *archive_name = psprintf("%s.tar", ti->oid);
+ char *archive_name = psprintf("%s.tar", thisti->oid);
bbsink_begin_archive(sink, archive_name);
- sendTablespace(sink, ti->path, ti->oid, false, &manifest);
+ sendTablespace(sink, thisti->path, thisti->oid, false, &manifest);
}
/*
@@ -355,7 +355,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
* include the xlog files below and stop afterwards. This is safe
* since the main data directory is always sent *last*.
*/
- if (opt->includewal && ti->path == NULL)
+ if (opt->includewal && thisti->path == NULL)
{
Assert(lnext(state.tablespaces, lc) == NULL);
}
--
2.17.1
On Thu, 18 Aug 2022 at 02:54, Justin Pryzby <pryzby@telsasoft.com> wrote:
The first half of the patches fix shadow variables newly-introduced in v15
(including one of my own patches), the rest are fixing the lowest hanging fruit
of the "short list" from COPT=-Wshadow=compatible-local
I wonder if it's better to fix the "big hitters" first. The idea
there would be to try to reduce the number of these warnings as
quickly and easily as possible. If we can get the numbers down fairly
significantly without too much effort, then that should provide us
with a bit more motivation to get rid of the remaining ones.
Here are the warnings grouped by the name of the variable:
$ make -s 2>&1 | grep "warning: declaration of" | grep -oP "‘([_a-zA-Z]{1}[_a-zA-Z0-9]*)’" | sort | uniq -c
2 ‘aclresult’
3 ‘attnum’
1 ‘cell’
1 ‘cell__state’
2 ‘cmp’
2 ‘command’
1 ‘constraintOid’
1 ‘copyTuple’
1 ‘data’
1 ‘db’
1 ‘_do_rethrow’
1 ‘dpns’
1 ‘econtext’
1 ‘entry’
36 ‘expected’
1 ‘first’
1 ‘found_whole_row’
1 ‘host’
20 ‘i’
1 ‘iclause’
1 ‘idxs’
1 ‘i_oid’
4 ‘isnull’
1 ‘it’
2 ‘item’
1 ‘itemno’
1 ‘j’
1 ‘jtc’
1 ‘k’
1 ‘keyno’
7 ‘l’
13 ‘lc’
4 ‘lc__state’
1 ‘len’
1 ‘_local_sigjmp_buf’
1 ‘name’
2 ‘now’
1 ‘owning_tab’
1 ‘page’
1 ‘partitionId’
2 ‘path’
3 ‘proc’
1 ‘proclock’
1 ‘querytree_list’
1 ‘range’
1 ‘rel’
1 ‘relation’
1 ‘relid’
1 ‘rightop’
2 ‘rinfo’
1 ‘_save_context_stack’
1 ‘save_errno’
1 ‘_save_exception_stack’
1 ‘slot’
1 ‘sqlca’
9 ‘startelem’
1 ‘stmt_list’
2 ‘str’
1 ‘subpath’
1 ‘tbinfo’
1 ‘ti’
1 ‘transno’
1 ‘ttype’
1 ‘tuple’
5 ‘val’
1 ‘value2’
1 ‘wco’
1 ‘xid’
1 ‘xlogfname’
The top 5 by count here account for about half of the warnings, so
maybe it's best to start with those? Likely the ones ending in __state
will fix themselves once you fix the variable with the same name
without that suffix.
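(For anyone wondering where the __state names come from: foreach()
declares a hidden loop-state variable by pasting "__state" onto the
cell name — roughly this, per src/include/nodes/pg_list.h, with the
details varying by version:

    #define foreach(cell, lst) \
        for (ForEachState cell##__state = {(lst), 0}; \
             (cell##__state.l != NIL && \
              cell##__state.i < cell##__state.l->length) ? \
             (cell = &cell##__state.l->elements[cell##__state.i], true) : \
             (cell = NULL, false); \
             cell##__state.i++)

so a nested foreach that shadows "lc" necessarily shadows "lc__state"
as well.)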
The attached patch targets fixing the "expected" variable.
$ ./configure --prefix=/home/drowley/pg CFLAGS="-Wshadow=compatible-local" > /dev/null
$ make clean -s
$ make -j -s 2>&1 | grep "warning: declaration of" | wc -l
153
$ make clean -s
$ patch -p1 < reduce_local_variable_shadow_warnings_in_regress.c.patch
$ make -j -s 2>&1 | grep "warning: declaration of" | wc -l
117
So 36 fewer warnings with the attached.
I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause.
What do you think?
David
Attachments:
reduce_local_variable_shadow_warnings_in_regress.c.patch (text/plain; charset=US-ASCII)
diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c
index ba3532a51e..6d285255dd 100644
--- a/src/test/regress/regress.c
+++ b/src/test/regress/regress.c
@@ -56,22 +56,22 @@
#define EXPECT_EQ_U32(result_expr, expected_expr) \
do { \
- uint32 result = (result_expr); \
- uint32 expected = (expected_expr); \
- if (result != expected) \
+ uint32 actual_result = (result_expr); \
+ uint32 expected_result = (expected_expr); \
+ if (actual_result != expected_result) \
elog(ERROR, \
"%s yielded %u, expected %s in file \"%s\" line %u", \
- #result_expr, result, #expected_expr, __FILE__, __LINE__); \
+ #result_expr, actual_result, #expected_expr, __FILE__, __LINE__); \
} while (0)
#define EXPECT_EQ_U64(result_expr, expected_expr) \
do { \
- uint64 result = (result_expr); \
- uint64 expected = (expected_expr); \
- if (result != expected) \
+ uint64 actual_result = (result_expr); \
+ uint64 expected_result = (expected_expr); \
+ if (actual_result != expected_result) \
elog(ERROR, \
"%s yielded " UINT64_FORMAT ", expected %s in file \"%s\" line %u", \
- #result_expr, result, #expected_expr, __FILE__, __LINE__); \
+ #result_expr, actual_result, #expected_expr, __FILE__, __LINE__); \
} while (0)
#define LDELIM '('
Michael Paquier <michael@paquier.xyz> writes:
A lot of the changes proposed here update the code so that the same
variable gets used across more code paths by removing declarations,
but we have two variables defined because both are aimed to be used in
different contexts (see AttachPartitionEnsureIndexes() in tablecmds.c
for example).
Wouldn't it be a saner approach in a lot of cases to rename the
shadowed variables (aka the ones getting removed in your patches) and
keep them local to the code paths where we use them?
Yeah. I do not think a patch of this sort has any business changing
the scopes of variables. That moves it out of "cosmetic cleanup"
and into "hm, I wonder if this introduces any bugs". Most hackers
are going to decide that they have better ways to spend their time
than doing that level of analysis for a very noncritical patch.
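(To illustrate the hazard with a made-up sketch: given

    if (pg_fsync(fd) != 0)
    {
        int     save_errno = errno; /* captures fsync's errno */

        close(fd);                  /* close() may clobber errno */
        errno = save_errno;
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not fsync file: %m")));
    }

deleting the inner declaration in favor of some outer save_errno
quietly changes which errno gets restored — and verifying that it
doesn't is exactly the analysis nobody wants to do.)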
regards, tom lane
On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:
I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause. What do you think?
You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that?
BTW, one of the remaining warnings seems to be another buglet, which I'll write
about at a later date.
--
Justin
On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:
I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause. What do you think?
You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that?
Alright, I made a pass over the 0001-0008 patches.
0001. I'd also rather see these 4 renamed:
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
PQExpBuffer loHorizonQry = createPQExpBuffer();
int i_relfrozenxid,
i_relfilenode,
- i_oid,
i_relminmxid;
Adding an extra 'i' (for inner) on the front seems fine to me.
0002. I don't really like the "my" name. I also see you've added the
word "this" to many other variables that are shadowing. It feels kinda
like you're missing a "self" and a "me" in there somewhere! :)
@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo
tblinfo[], int numTables)
appendPQExpBufferChar(tbloids, '{');
for (int i = 0; i < numTables; i++)
{
- TableInfo *tbinfo = &tblinfo[i];
+ TableInfo *mytbinfo = &tblinfo[i];
How about just "tinfo"?
0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.
@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
*/
if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
{
- TableInfo *owning_tab = findTableByOid(tbinfo->owning_tab);
+ TableInfo *this_owning_tab = findTableByOid(tbinfo->owning_tab);
0004. I would rather people used foreach_current_index(lc) > 0 to
determine when we're not doing the first iteration of a foreach loop.
I understand there are more complex cases with filtering where this
cannot be done, but these are highly simple and using
foreach_current_index() removes multiple lines of code and makes it
look nicer.
@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
TupleTableSlot *slot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
- bool first = true;
+ first = true;
initStringInfo(&pub_names);
foreach(lc, MySubscription->publications)
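With the macro, the loop needs no flag at all — a sketch of the
resulting code (not necessarily the exact committed form):

    initStringInfo(&pub_names);
    foreach(lc, MySubscription->publications)
    {
        /* foreach_current_index() is 0 on the first iteration */
        if (foreach_current_index(lc) > 0)
            appendStringInfoString(&pub_names, ", ");
        appendStringInfoString(&pub_names,
                               quote_literal_cstr(strVal(lfirst(lc))));
    }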
0005. How about just "tslot". I'm not a fan of "this".
+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,
if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
{
WalRcvExecResult *pubres;
- TupleTableSlot *slot;
+ TupleTableSlot *thisslot;
0006. I see the outer shadowed counterpart is used to add a new backup
type. Since I'm not a fan of "this", how about the outer one gets
renamed to "newtype"?
+++ b/src/backend/backup/basebackup_target.c
@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,
/* Search the target type list for an existing entry with this name. */
foreach(lc, BaseBackupTargetTypeList)
{
- BaseBackupTargetType *ttype = lfirst(lc);
+ BaseBackupTargetType *this_ttype = lfirst(lc);
0007. Meh, more "this". How about just "col".
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext
*cxt, JsonTablePlan *plan,
/* transform all nested columns into cross/union join */
foreach(lc, columns)
{
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+ JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));
There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.
0008. Sorry, I had to change this one too. I just have an aversion to
variables named "temp" or "tmp".
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32
typmod, JsonbValue *res)
if (JsonContainerIsScalar(&jb->root))
{
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool tmp PG_USED_FOR_ASSERTS_ONLY;
- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ tmp = JsonbExtractScalar(&jb->root, jbv);
+ Assert(tmp);
I've attached a patch which does things more along the lines of how I
would have done it. I don't think we should be back patching this
stuff.
Any objections to pushing this to master only?
David
Attachments:
shadow_pg15.patch (text/plain; charset=US-ASCII)
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e3205..f280660a03 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -62,7 +62,7 @@ BaseBackupAddTarget(char *name,
void *(*check_detail) (char *, char *),
bbsink *(*get_sink) (bbsink *, void *))
{
- BaseBackupTargetType *ttype;
+ BaseBackupTargetType *newtype;
MemoryContext oldcontext;
ListCell *lc;
@@ -96,11 +96,11 @@ BaseBackupAddTarget(char *name,
* name into a newly-allocated chunk of memory.
*/
oldcontext = MemoryContextSwitchTo(TopMemoryContext);
- ttype = palloc(sizeof(BaseBackupTargetType));
- ttype->name = pstrdup(name);
- ttype->check_detail = check_detail;
- ttype->get_sink = get_sink;
- BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, ttype);
+ newtype = palloc(sizeof(BaseBackupTargetType));
+ newtype->name = pstrdup(name);
+ newtype->check_detail = check_detail;
+ newtype->get_sink = get_sink;
+ BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, newtype);
MemoryContextSwitchTo(oldcontext);
}
diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017e..3e94071248 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
/* transform all nested columns into cross/union join */
foreach(lc, columns)
{
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+ JsonTableColumn *col = castNode(JsonTableColumn, lfirst(lc));
Node *node;
- if (jtc->coltype != JTC_NESTED)
+ if (col->coltype != JTC_NESTED)
continue;
- node = transformNestedJsonTableColumn(cxt, jtc, plan);
+ node = transformNestedJsonTableColumn(cxt, col, plan);
/* join transformed node with previous sibling nodes */
res = res ? makeJsonTableSiblingJoin(cross, res, node) : node;
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index bfcb80b495..d37d8a0d74 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -707,7 +707,6 @@ fetch_remote_table_info(char *nspname, char *relname,
bool isnull;
int natt;
ListCell *lc;
- bool first;
Bitmapset *included_cols = NULL;
lrel->nspname = nspname;
@@ -759,18 +758,15 @@ fetch_remote_table_info(char *nspname, char *relname,
if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
{
WalRcvExecResult *pubres;
- TupleTableSlot *slot;
+ TupleTableSlot *tslot;
Oid attrsRow[] = {INT2VECTOROID};
StringInfoData pub_names;
- bool first = true;
-
initStringInfo(&pub_names);
foreach(lc, MySubscription->publications)
{
- if (!first)
+ if (foreach_current_index(lc) > 0)
appendStringInfo(&pub_names, ", ");
appendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));
- first = false;
}
/*
@@ -819,10 +815,10 @@ fetch_remote_table_info(char *nspname, char *relname,
* If we find a NULL value, it means all the columns should be
* replicated.
*/
- slot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
- if (tuplestore_gettupleslot(pubres->tuplestore, true, false, slot))
+ tslot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
+ if (tuplestore_gettupleslot(pubres->tuplestore, true, false, tslot))
{
- Datum cfval = slot_getattr(slot, 1, &isnull);
+ Datum cfval = slot_getattr(tslot, 1, &isnull);
if (!isnull)
{
@@ -838,9 +834,9 @@ fetch_remote_table_info(char *nspname, char *relname,
included_cols = bms_add_member(included_cols, elems[natt]);
}
- ExecClearTuple(slot);
+ ExecClearTuple(tslot);
}
- ExecDropSingleTupleTableSlot(slot);
+ ExecDropSingleTupleTableSlot(tslot);
walrcv_clear_result(pubres);
@@ -950,14 +946,11 @@ fetch_remote_table_info(char *nspname, char *relname,
/* Build the pubname list. */
initStringInfo(&pub_names);
- first = true;
foreach(lc, MySubscription->publications)
{
char *pubname = strVal(lfirst(lc));
- if (first)
- first = false;
- else
+ if (foreach_current_index(lc) > 0)
appendStringInfoString(&pub_names, ", ");
appendStringInfoString(&pub_names, quote_literal_cstr(pubname));
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 5b6a480572..9c381ae727 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
if (JsonContainerIsScalar(&jb->root))
{
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool result PG_USED_FOR_ASSERTS_ONLY;
- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ result = JsonbExtractScalar(&jb->root, jbv);
+ Assert(result);
}
else
JsonbInitBinary(jbv, jb);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a..2c68915732 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3142,10 +3142,10 @@ dumpDatabase(Archive *fout)
PQExpBuffer loFrozenQry = createPQExpBuffer();
PQExpBuffer loOutQry = createPQExpBuffer();
PQExpBuffer loHorizonQry = createPQExpBuffer();
- int i_relfrozenxid,
- i_relfilenode,
- i_oid,
- i_relminmxid;
+ int ii_relfrozenxid,
+ ii_relfilenode,
+ ii_oid,
+ ii_relminmxid;
/*
* pg_largeobject
@@ -3163,10 +3163,10 @@ dumpDatabase(Archive *fout)
lo_res = ExecuteSqlQuery(fout, loFrozenQry->data, PGRES_TUPLES_OK);
- i_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
- i_relminmxid = PQfnumber(lo_res, "relminmxid");
- i_relfilenode = PQfnumber(lo_res, "relfilenode");
- i_oid = PQfnumber(lo_res, "oid");
+ ii_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
+ ii_relminmxid = PQfnumber(lo_res, "relminmxid");
+ ii_relfilenode = PQfnumber(lo_res, "relfilenode");
+ ii_oid = PQfnumber(lo_res, "oid");
appendPQExpBufferStr(loHorizonQry, "\n-- For binary upgrade, set pg_largeobject relfrozenxid and relminmxid\n");
appendPQExpBufferStr(loOutQry, "\n-- For binary upgrade, preserve pg_largeobject and index relfilenodes\n");
@@ -3178,12 +3178,12 @@ dumpDatabase(Archive *fout)
appendPQExpBuffer(loHorizonQry, "UPDATE pg_catalog.pg_class\n"
"SET relfrozenxid = '%u', relminmxid = '%u'\n"
"WHERE oid = %u;\n",
- atooid(PQgetvalue(lo_res, i, i_relfrozenxid)),
- atooid(PQgetvalue(lo_res, i, i_relminmxid)),
- atooid(PQgetvalue(lo_res, i, i_oid)));
+ atooid(PQgetvalue(lo_res, i, ii_relfrozenxid)),
+ atooid(PQgetvalue(lo_res, i, ii_relminmxid)),
+ atooid(PQgetvalue(lo_res, i, ii_oid)));
- oid = atooid(PQgetvalue(lo_res, i, i_oid));
- relfilenumber = atooid(PQgetvalue(lo_res, i, i_relfilenode));
+ oid = atooid(PQgetvalue(lo_res, i, ii_oid));
+ relfilenumber = atooid(PQgetvalue(lo_res, i, ii_relfilenode));
if (oid == LargeObjectRelationId)
appendPQExpBuffer(loOutQry,
@@ -7081,21 +7081,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
appendPQExpBufferChar(tbloids, '{');
for (int i = 0; i < numTables; i++)
{
- TableInfo *tbinfo = &tblinfo[i];
+ TableInfo *tinfo = &tblinfo[i];
/*
* For partitioned tables, foreign keys have no triggers so they must
* be included anyway in case some foreign keys are defined.
*/
- if ((!tbinfo->hastriggers &&
- tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
- !(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+ if ((!tinfo->hastriggers &&
+ tinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+ !(tinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
continue;
/* OK, we need info for this table */
if (tbloids->len > 1) /* do we have more than the '{'? */
appendPQExpBufferChar(tbloids, ',');
- appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+ appendPQExpBuffer(tbloids, "%u", tinfo->dobj.catId.oid);
}
appendPQExpBufferChar(tbloids, '}');
@@ -16800,7 +16800,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
*/
if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
{
- TableInfo *owning_tab = findTableByOid(tbinfo->owning_tab);
+ owning_tab = findTableByOid(tbinfo->owning_tab);
if (owning_tab == NULL)
pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
On Thu, Aug 18, 2022 at 5:27 PM David Rowley <dgrowleyml@gmail.com> wrote:
On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:
I'm probably not the only committer to want to run a mile when they
see someone posting 17 or 26 patches in an email. So maybe "bang for
buck" is a better method for getting the ball rolling here. As you
know, I was recently bitten by local shadows in af7d270dd, so I do
believe in the cause. What do you think?
You already fixed the shadow var introduced in master/pg16, and I sent patches
for the shadow vars added in pg15 (marked as such and presented as 001-008), so
perhaps it's okay to start with that?
Alright, I made a pass over the 0001-0008 patches.
...
0005. How about just "tslot". I'm not a fan of "this".
(I'm sure there are others like this; I just picked this one as an example)
AFAICT the offending 'slot' really should have never been declared at
all at the local scope in the first place - e.g. the other code in
this function seems happy enough with the pattern of just re-using the
function scoped 'slot'.
I understand that for this shadow patch changing the var-name is
considered the saner/safer way than tampering with the scope, but
perhaps it is still useful to include a comment when changing ones
like this?
e.g.
+ TupleTableSlot *tslot; /* TODO - Why declare this at all? Shouldn't it just re-use the 'slot' at function scope? */
Otherwise, such knowledge will be lost, and nobody will ever know to
revisit them, which feels a bit more like *hiding* the mistake than
fixing it.
------
Kind Regards,
Peter Smith.
Fujitsu Australia
On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:
0001. I'd also rather see these 4 renamed:
..
0002. I don't really like the "my" name. I also see you've added the
..
How about just "tinfo"?
..
0005. How about just "tslot". I'm not a fan of "this".
..
Since I'm not a fan of "this", how about the outer one gets renamed
..
0007. Meh, more "this". How about just "col".
..
0008. Sorry, I had to change this one too.
I agree that ii_oid and newtype are better names (although it's a bit
unfortunate to rename the outer "ttype" var of wider scope).
0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.
And that's what my original patch did, before people insisted that the patches
shouldn't change variable scope. Now it's back to where I started.
There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.
I think if it were reverted, it'd be in both branches.
I've attached a patch which does things more along the lines of how I
would have done it. I don't think we should be back patching this
stuff. Any objections to pushing this to master only?
I won't object, but some of your changes are what makes backpatching this less
reasonable (foreach_current_index and newtype). I had made these v15 patches
first to simplify backpatching, since having the same code in v15 means that
there's no backpatch hazard for this new-in-v15 code.
I am open to presenting the patches differently, but we need to come up with
a better process than one person writing patches and someone else rewriting it.
I also don't see the value of debating which order to write the patches in.
Grouping by variable name or doing other statistical analysis doesn't change
the fact that there are 50+ issues to address to allow -Wshadow to be usable.
Maybe these would be helpful?
- if I publish the patches on github;
- if I send the patches with more context;
- if you have a suggestion/objection/complaint with a patch, I can address it
and/or re-arrange the patchset so this is later, and all the polished
patches are presented first.
--
Justin
On Fri, Aug 19, 2022 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
[...]
Maybe these would be helpful?
- if I publish the patches on github;
- if I send the patches with more context;
- if you have a suggestion/objection/complaint with a patch, I can address it
and/or re-arrange the patchset so this is later, and all the polished
patches are presented first.
Starting off with patches might come to grief, and it won't be much
fun rearranging patches over and over.
Because there are so many changes, I think it would be better to
attack this task methodically:
STEP 1 - Capture every shadow warning and categorise exactly what kind
it is, e.g. maybe do this as some XLS which can be shared. The last
time I looked there were hundreds of instances, but I expect there
will be less than a couple of dozen different *categories* of them.
e.g. shadow of a global var
e.g. shadow of a function param
e.g. shadow of a function var in a code block for the exact same usage
e.g. shadow of a function var in a code block for some 'tmp' var
e.g. shadow of a function var in a code block due to a mistake
e.g. shadow of a function var by some loop index
e.g. shadow of a function var for some loop 'first' handling
e.g. bug
etc...
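For instance, a warning in the loop-index category looks something
like this (hypothetical function, just to make it concrete):

    static int
    count_twice(const int *vals, int n)
    {
        int     i,
                total = 0;

        for (i = 0; i < n; i++)
            total += vals[i];

        {
            int     i;  /* warning: declaration of 'i' shadows a previous local */

            for (i = 0; i < n; i++)
                total += vals[i];
        }

        return total;
    }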
STEP 2 - Define your rules for how you intend to address each of these
kinds of shadows (e.g. just simple rename of the var, use
'foreach_current_index', ...). Hopefully, it will be easy to reach an
agreement now since all instances of the same kind will look pretty
much the same.
STEP 3 - Fix all of the same kinds of shadows per single patch (using
the already agreed fix approach from step 2).
REPEAT STEPS 2,3 until done.
------
Kind Regards,
Peter Smith.
Fujitsu Australia
On Fri, 19 Aug 2022 at 11:21, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:
Any objections to pushing this to master only?
I won't object, but some of your changes are what makes backpatching this less
reasonable (foreach_current_index and newtype). I had made these v15 patches
first to simplify backpatching, since having the same code in v15 means that
there's no backpatch hazard for this new-in-v15 code.
I spent a bit more time on this and I see that make check-world does
fail if I change either of the foreach_current_index() changes to be
incorrect, e.g. change the condition from "> 0" to "== 0", "> 1" or
"> -1".
As for the newtype change, I was inclined to give the variable name
with the most meaning to the one that's in scope for longer.
I'm starting to feel like it would be ok to backpatch these
new-to-pg-15 changes back into PG15. The reason I think this is that
they all seem low enough risk that it's probably more risky to not
backpatch and risk bugs being introduced due to mistakes being made in
conflict resolution when future patches don't apply. It was the
failing tests I mentioned above that swayed me on this.
I am opened to presenting the patches differently, but we need to come up with
a better process than one person writing patches and someone else rewriting it.
It wasn't my intention to purposefully rewrite everything. It's just
that in order to get the work into something I was willing to commit,
that's how it ended up. As for why I did that rather than ask you to,
it was simply that doing it myself required fewer keystrokes, mental
effort and time. It's not my intention to do that
for any personal credit. I'm happy for you to take that. I'd just
rather not be batting such trivial patches over the fence at each
other for days or weeks. The effort-to-reward ratio for that is
probably going to drop below my threshold after a few rounds.
David
On Fri, Aug 19, 2022 at 03:37:52PM +1200, David Rowley wrote:
I'm happy for you to take that. I'd just rather not be batting such trivial
patches over the fence at each other for days or weeks.
Yes, thanks for that.
I read through your patch, which looks fine.
Let me know what I can do when it's time for round two.
--
Justin
On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:
Let me know what I can do when it's time for round two.
I pushed the modified 0001-0008 patches earlier today and also the one
I wrote to fix up the 36 warnings about "expected" being shadowed.
I looked through a bunch of your remaining patches and was a bit
unexcited to see many more renaming such as:
- List *querytree_list;
+ List *this_querytree_list;
I don't think this sort of thing is an improvement.
However, one category of these changes that I do like are the ones
where we can move the variable into an inner scope. Out of your
renaming 0009-0026 patches, these are:
0013
0014
0017
0018
I feel like having the variable in scope for the minimal amount of
time makes the code cleaner and I feel like these are good next steps
because:
a) no variable needs to be renamed
b) any backpatching issue is more likely to lead to a compilation
failure rather than to using the wrong variable.
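i.e. the pattern is simply this (a sketch of the shape, not any
particular hunk; process() is a stand-in name):

    /* before: "i" is visible to the whole function */
    static void
    walk_before(int n)
    {
        int     i;

        for (i = 0; i < n; i++)
            process(i);
    }

    /* after: "i" exists only for the loop; a backpatched hunk that
     * still references the function-scope "i" fails to compile
     * instead of silently picking up the wrong variable */
    static void
    walk_after(int n)
    {
        for (int i = 0; i < n; i++)
            process(i);
    }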
Likely 0016 is a subcategory of the above, as if you modified that
patch to follow this rule then you'd have to declare the variable a
few times. I think that category is less interesting and we can maybe
consider those after we're done with the more simple ones.
Do you want to submit a series of patches that fixes all of the
remaining warnings that are in this category? Once these are done we
can consider the best ways to fix and if we want to fix any of the
remaining ones.
Feel free to gzip the patches up if the number is large.
David
On Sat, Aug 20, 2022 at 09:17:41PM +1200, David Rowley wrote:
On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:
Let me know what I can do when it's time for round two.
I pushed the modified 0001-0008 patches earlier today and also the one
I wrote to fixup the 36 warnings about "expected" being shadowed.
Thank you
I looked through a bunch of your remaining patches and was a bit
unexcited to see many more renaming such as:
Yes - after Michael said that was the sane procedure, I had rearranged the
patch series to present the patches which renamed variables first ..
However, one category of these changes that I do like are the ones
where we can move the variable into an inner scope.
There are a lot of these, which ISTM is a good thing.
This fixes about half of the remaining warnings.
https://github.com/justinpryzby/postgres/tree/avoid-shadow-vars
You could review without applying the patches, on the webpage or (probably
better) by adding as a git remote. Attached is a squished version.
--
Justin
Attachments:
v2.txt (text/plain; charset=us-ascii)
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -372,7 +372,6 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
**nullkeys;
int *nkeys,
*nnullkeys;
- int keyno;
char *ptr;
Size len;
char *tmp PG_USED_FOR_ASSERTS_ONLY;
@@ -454,7 +453,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
/* Preprocess the scan keys - split them into per-attribute arrays. */
- for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+ for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
{
ScanKey key = &scan->keyData[keyno];
AttrNumber keyattno = key->sk_attno;
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 10d4f17bc6f..524c1846b83 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -582,7 +582,6 @@ brin_range_serialize(Ranges *range)
int typlen;
bool typbyval;
- int i;
char *ptr;
/* simple sanity checks */
@@ -621,18 +620,14 @@ brin_range_serialize(Ranges *range)
*/
if (typlen == -1) /* varlena */
{
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
len += VARSIZE_ANY(range->values[i]);
}
}
else if (typlen == -2) /* cstring */
{
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
/* don't forget to include the null terminator ;-) */
len += strlen(DatumGetCString(range->values[i])) + 1;
@@ -662,7 +657,7 @@ brin_range_serialize(Ranges *range)
*/
ptr = serialized->data; /* start of the serialized data */
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
if (typbyval) /* simple by-value data types */
{
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 5866c6aaaf7..30069f139c7 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -234,7 +234,6 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
Page page = BufferGetPage(buffer);
bool is_leaf = (GistPageIsLeaf(page)) ? true : false;
XLogRecPtr recptr;
- int i;
bool is_split;
/*
@@ -420,7 +419,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
{
char *data = (char *) (ptr->list);
- for (i = 0; i < ptr->block.num; i++)
+ for (int i = 0; i < ptr->block.num; i++)
{
IndexTuple thistup = (IndexTuple) data;
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..46e3bb55ebb 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3036,8 +3036,6 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
if (pg_fsync(fd) != 0)
{
- int save_errno = errno;
-
close(fd);
errno = save_errno;
ereport(ERROR,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
*/
if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
{
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;
- rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+ pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
- tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
- if (!HeapTupleIsValid(tuple))
+ foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+ if (!HeapTupleIsValid(foreigntuple))
elog(ERROR, "cache lookup failed for foreign table %u", relid);
- CatalogTupleDelete(rel, &tuple->t_self);
+ CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
- ReleaseSysCache(tuple);
- table_close(rel, RowExclusiveLock);
+ ReleaseSysCache(foreigntuple);
+ table_close(pg_foreign_table, RowExclusiveLock);
}
/*
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1202,7 +1202,6 @@ BeginCopyFrom(ParseState *pstate,
num_defaults;
FmgrInfo *in_functions;
Oid *typioparams;
- int attnum;
Oid in_func_oid;
int *defmap;
ExprState **defexprs;
@@ -1401,7 +1400,7 @@ BeginCopyFrom(ParseState *pstate,
defmap = (int *) palloc(num_phys_attrs * sizeof(int));
defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
- for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+ for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
{
Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 667f2a4cd16..3c6e09815e0 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -565,7 +565,6 @@ DefineIndex(Oid relationId,
Oid root_save_userid;
int root_save_sec_context;
int root_save_nestlevel;
- int i;
root_save_nestlevel = NewGUCNestLevel();
@@ -1047,7 +1046,7 @@ DefineIndex(Oid relationId,
* We disallow indexes on system columns. They would not necessarily get
* updated correctly, and they don't seem useful anyway.
*/
- for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+ for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
{
AttrNumber attno = indexInfo->ii_IndexAttrNumbers[i];
@@ -1067,7 +1066,7 @@ DefineIndex(Oid relationId,
pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
- for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
+ for (int i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
{
if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber,
indexattrs))
@@ -1243,7 +1242,7 @@ DefineIndex(Oid relationId,
* If none matches, build a new index by calling ourselves
* recursively with the same options (except for the index name).
*/
- for (i = 0; i < nparts; i++)
+ for (int i = 0; i < nparts; i++)
{
Oid childRelid = part_oids[i];
Relation childrel;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -106,7 +106,7 @@ parse_publication_options(ParseState *pstate,
{
char *publish;
List *publish_list;
- ListCell *lc;
+ ListCell *lc2;
if (*publish_given)
errorConflictingDefElem(defel, pstate);
@@ -129,9 +129,9 @@ parse_publication_options(ParseState *pstate,
errmsg("invalid list syntax for \"publish\" option")));
/* Process the option list. */
- foreach(lc, publish_list)
+ foreach(lc2, publish_list)
{
- char *publish_opt = (char *) lfirst(lc);
+ char *publish_opt = (char *) lfirst(lc2);
if (strcmp(publish_opt, "insert") == 0)
pubactions->pubinsert = true;
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
Oid constrOid;
ObjectAddress address,
referenced;
- ListCell *cell;
+ ListCell *lc;
Oid insertTriggerOid,
updateTriggerOid;
@@ -10276,9 +10276,9 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
* don't need to recurse to partitions for this constraint.
*/
attached = false;
- foreach(cell, partFKs)
+ foreach(lc, partFKs)
{
- ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+ ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
if (tryAttachPartitionForeignKey(fk,
RelationGetRelid(partRel),
@@ -16796,7 +16796,6 @@ PreCommit_on_commit_actions(void)
if (oids_to_drop != NIL)
{
ObjectAddresses *targetObjects = new_object_addresses();
- ListCell *l;
foreach(l, oids_to_drop)
{
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..f1801a160ed 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1149,7 +1149,6 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
PartitionDesc partdesc = RelationGetPartitionDesc(rel, true);
List *idxs = NIL;
List *childTbls = NIL;
- ListCell *l;
int i;
MemoryContext oldcxt,
perChildCxt;
@@ -1181,7 +1180,8 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
for (i = 0; i < partdesc->nparts; i++)
{
Oid indexOnChild = InvalidOid;
- ListCell *l2;
+ ListCell *l,
+ *l2;
CreateTrigStmt *childStmt;
Relation childTbl;
Node *qual;
@@ -1726,9 +1726,9 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
for (int i = 0; i < partdesc->nparts; i++)
{
- Oid partitionId = partdesc->oids[i];
+ Oid partid = partdesc->oids[i];
- renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+ renametrig_partition(tgrel, partid, tgform->oid, newname,
NameStr(tgform->tgname));
}
}
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -233,8 +233,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
*/
if (!(params.options & VACOPT_ANALYZE))
{
- ListCell *lc;
-
foreach(lc, vacstmt->rels)
{
VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
/*
* Translate expressions in onConflictSet to account for
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1296,13 +1296,12 @@ finalize_aggregates(AggState *aggstate,
Datum *aggvalues = econtext->ecxt_aggvalues;
bool *aggnulls = econtext->ecxt_aggnulls;
int aggno;
- int transno;
/*
* If there were any DISTINCT and/or ORDER BY aggregates, sort their
* inputs and run the transition functions.
*/
- for (transno = 0; transno < aggstate->numtrans; transno++)
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
{
AggStatePerTrans pertrans = &aggstate->pertrans[transno];
AggStatePerGroup pergroupstate;
@@ -3188,7 +3187,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
int numGroupingSets = 1;
int numPhases;
int numHashes;
- int i = 0;
int j = 0;
bool use_hashing = (node->aggstrategy == AGG_HASHED ||
node->aggstrategy == AGG_MIXED);
@@ -3279,7 +3277,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
ExecAssignExprContext(estate, &aggstate->ss.ps);
aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
- for (i = 0; i < numGroupingSets; ++i)
+ for (int i = 0; i < numGroupingSets; ++i)
{
ExecAssignExprContext(estate, &aggstate->ss.ps);
aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
@@ -3419,10 +3417,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
AggStatePerPhase phasedata = &aggstate->phases[0];
AggStatePerHash perhash;
Bitmapset *cols = NULL;
+ int setno = phasedata->numsets++;
Assert(phase == 0);
- i = phasedata->numsets++;
- perhash = &aggstate->perhash[i];
+ perhash = &aggstate->perhash[setno];
/* phase 0 always points to the "real" Agg in the hash case */
phasedata->aggnode = node;
@@ -3431,12 +3429,12 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
/* but the actual Agg node representing this hash is saved here */
perhash->aggnode = aggnode;
- phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+ phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
for (j = 0; j < aggnode->numCols; ++j)
cols = bms_add_member(cols, aggnode->grpColIdx[j]);
- phasedata->grouped_cols[i] = cols;
+ phasedata->grouped_cols[setno] = cols;
all_grouped_cols = bms_add_members(all_grouped_cols, cols);
continue;
@@ -3450,6 +3448,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
if (num_sets)
{
+ int i;
phasedata->gset_lengths = palloc(num_sets * sizeof(int));
phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
@@ -3535,9 +3534,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
/*
* Convert all_grouped_cols to a descending-order list.
*/
- i = -1;
- while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
- aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ {
+ int i = -1;
+ while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+ aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ }
/*
* Set up aggregate-result storage in the output expr context, and also
@@ -3561,7 +3562,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
if (node->aggstrategy != AGG_HASHED)
{
- for (i = 0; i < numGroupingSets; i++)
+ for (int i = 0; i < numGroupingSets; i++)
{
pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
* numaggs);
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1650,16 +1650,16 @@ interpret_ident_response(const char *ident_response,
return false;
else
{
- int i; /* Index into *ident_user */
+ int j; /* Index into *ident_user */
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
- i = 0;
+ j = 0;
while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
- ident_user[i++] = *cursor++;
- ident_user[i] = '\0';
+ ident_user[j++] = *cursor++;
+ ident_user[j] = '\0';
return true;
}
}
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2447,7 +2447,6 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
int arrlen;
ListCell *l;
ListCell *cell;
- int i;
int path_index;
int min_index;
int max_index;
@@ -2486,7 +2485,6 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
for_each_cell(l, subpaths, cell)
{
Path *subpath = (Path *) lfirst(l);
- int i;
/* Consider only the non-partial paths */
if (path_index++ == numpaths)
@@ -2495,7 +2493,8 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
costarr[min_index] += subpath->total_cost;
/* Update the new min cost array index */
- for (min_index = i = 0; i < arrlen; i++)
+ min_index = 0;
+ for (int i = 0; i < arrlen; i++)
{
if (costarr[i] < costarr[min_index])
min_index = i;
@@ -2503,7 +2502,8 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
}
/* Return the highest cost from the array */
- for (max_index = i = 0; i < arrlen; i++)
+ max_index = 0;
+ for (int i = 0; i < arrlen; i++)
{
if (costarr[i] > costarr[max_index])
max_index = i;
@@ -2545,10 +2545,10 @@ cost_append(AppendPath *apath, PlannerInfo *root)
/* Compute rows and costs as sums of subplan rows and costs. */
foreach(l, apath->subpaths)
{
- Path *subpath = (Path *) lfirst(l);
+ Path *sub = (Path *) lfirst(l);
- apath->path.rows += subpath->rows;
- apath->path.total_cost += subpath->total_cost;
+ apath->path.rows += sub->rows;
+ apath->path.total_cost += sub->total_cost;
}
}
else
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..8ba27a98b42 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -361,7 +361,6 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
if (bitjoinpaths != NIL)
{
List *all_path_outers;
- ListCell *lc;
/* Identify each distinct parameterization seen in bitjoinpaths */
all_path_outers = NIL;
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -305,10 +305,10 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
}
else
{
- RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+ RestrictInfo *list = castNode(RestrictInfo, orarg);
- Assert(!restriction_is_or_clause(rinfo));
- sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+ Assert(!restriction_is_or_clause(list));
+ sublist = TidQualFromRestrictInfo(root, list, rel);
}
/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index cf9e0a74dbf..e969f2be3fe 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1994,8 +1994,6 @@ preprocess_grouping_sets(PlannerInfo *root)
if (parse->groupClause)
{
- ListCell *lc;
-
foreach(lc, parse->groupClause)
{
SortGroupClause *gc = lfirst_node(SortGroupClause, lc);
@@ -3458,16 +3456,16 @@ get_number_of_groups(PlannerInfo *root,
foreach(lc, gd->rollups)
{
RollupData *rollup = lfirst_node(RollupData, lc);
- ListCell *lc;
+ ListCell *lc3;
groupExprs = get_sortgrouplist_exprs(rollup->groupClause,
target_list);
rollup->numGroups = 0.0;
- forboth(lc, rollup->gsets, lc2, rollup->gsets_data)
+ forboth(lc3, rollup->gsets, lc2, rollup->gsets_data)
{
- List *gset = (List *) lfirst(lc);
+ List *gset = (List *) lfirst(lc3);
GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
double numGroups = estimate_num_groups(root,
groupExprs,
@@ -3484,8 +3482,6 @@ get_number_of_groups(PlannerInfo *root,
if (gd->hash_sets_idx)
{
- ListCell *lc;
-
gd->dNumHashGroups = 0;
groupExprs = get_sortgrouplist_exprs(parse->groupClause,
@@ -5034,11 +5030,11 @@ create_ordered_paths(PlannerInfo *root,
*/
if (enable_incremental_sort && list_length(root->sort_pathkeys) > 1)
{
- ListCell *lc;
+ ListCell *lc2;
- foreach(lc, input_rel->partial_pathlist)
+ foreach(lc2, input_rel->partial_pathlist)
{
- Path *input_path = (Path *) lfirst(lc);
+ Path *input_path = (Path *) lfirst(lc2);
Path *sorted_path;
bool is_sorted;
int presorted_keys;
@@ -7607,7 +7603,7 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
AppendRelInfo **appinfos;
int nappinfos;
List *child_scanjoin_targets = NIL;
- ListCell *lc;
+ ListCell *lc2;
Assert(child_rel != NULL);
@@ -7618,9 +7614,9 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
/* Translate scan/join targets for this child. */
appinfos = find_appinfos_by_relids(root, child_rel->relids,
&nappinfos);
- foreach(lc, scanjoin_targets)
+ foreach(lc2, scanjoin_targets)
{
- PathTarget *target = lfirst_node(PathTarget, lc);
+ PathTarget *target = lfirst_node(PathTarget, lc2);
target = copy_pathtarget(target);
target->exprs = (List *)
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2402,7 +2402,7 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_FunctionScan:
{
FunctionScan *fscan = (FunctionScan *) plan;
- ListCell *lc;
+ ListCell *lc; //
/*
* Call finalize_primnode independently on each function
@@ -2510,7 +2510,7 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_CustomScan:
{
CustomScan *cscan = (CustomScan *) plan;
- ListCell *lc;
+ ListCell *lc; //
finalize_primnode((Node *) cscan->custom_exprs,
&context);
@@ -2554,8 +2554,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_Append:
{
- ListCell *l;
-
foreach(l, ((Append *) plan)->appendplans)
{
context.paramids =
@@ -2571,8 +2569,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_MergeAppend:
{
- ListCell *l;
-
foreach(l, ((MergeAppend *) plan)->mergeplans)
{
context.paramids =
@@ -2588,8 +2584,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_BitmapAnd:
{
- ListCell *l;
-
foreach(l, ((BitmapAnd *) plan)->bitmapplans)
{
context.paramids =
@@ -2605,8 +2599,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_BitmapOr:
{
- ListCell *l;
-
foreach(l, ((BitmapOr *) plan)->bitmapplans)
{
context.paramids =
@@ -2622,8 +2614,6 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_NestLoop:
{
- ListCell *l;
-
finalize_primnode((Node *) ((Join *) plan)->joinqual,
&context);
/* collect set of params that will be passed to right child */
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -653,15 +653,14 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
if (partial_paths_valid)
{
Path *ppath;
- ListCell *lc;
int parallel_workers = 0;
/* Find the highest number of workers requested for any subpath. */
foreach(lc, partial_pathlist)
{
- Path *path = lfirst(lc);
+ Path *partial_path = lfirst(lc);
- parallel_workers = Max(parallel_workers, path->parallel_workers);
+ parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
}
Assert(parallel_workers > 0);
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -437,16 +437,16 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
{
Var *var = (Var *) pitem->item;
NestLoopParam *nlp;
- ListCell *lc;
+ ListCell *lc2;
/* If not from a nestloop outer rel, complain */
if (!bms_is_member(var->varno, root->curOuterRels))
elog(ERROR, "non-LATERAL parameter required by subquery");
/* Is this param already listed in root->curOuterParams? */
- foreach(lc, root->curOuterParams)
+ foreach(lc2, root->curOuterParams)
{
- nlp = (NestLoopParam *) lfirst(lc);
+ nlp = (NestLoopParam *) lfirst(lc2);
if (nlp->paramno == pitem->paramId)
{
Assert(equal(var, nlp->paramval));
@@ -454,7 +454,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
break;
}
}
- if (lc == NULL)
+ if (lc2 == NULL)
{
/* No, so add it */
nlp = makeNode(NestLoopParam);
@@ -467,7 +467,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
{
PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
NestLoopParam *nlp;
- ListCell *lc;
+ ListCell *lc2;
/* If not from a nestloop outer rel, complain */
if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
@@ -475,9 +475,9 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
elog(ERROR, "non-LATERAL parameter required by subquery");
/* Is this param already listed in root->curOuterParams? */
- foreach(lc, root->curOuterParams)
+ foreach(lc2, root->curOuterParams)
{
- nlp = (NestLoopParam *) lfirst(lc);
+ nlp = (NestLoopParam *) lfirst(lc2);
if (nlp->paramno == pitem->paramId)
{
Assert(equal(phv, nlp->paramval));
@@ -485,7 +485,7 @@ process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
break;
}
}
- if (lc == NULL)
+ if (lc2 == NULL)
{
/* No, so add it */
nlp = makeNode(NestLoopParam);
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -539,11 +539,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
!fc->func_variadic &&
coldeflist == NIL)
{
- ListCell *lc;
+ ListCell *lc2;
- foreach(lc, fc->args)
+ foreach(lc2, fc->args)
{
- Node *arg = (Node *) lfirst(lc);
+ Node *arg = (Node *) lfirst(lc2);
FuncCall *newfc;
last_srf = pstate->p_last_srf;
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1265,7 +1265,6 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
else if (is_orclause(clause))
{
BoolExpr *bool_expr = (BoolExpr *) clause;
- ListCell *lc;
/* start with no expression (we'll use the first match) */
*expr = NULL;
@@ -1693,7 +1692,6 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
{
int idx;
Node *expr;
- int k;
AttrNumber unique_attnum = InvalidAttrNumber;
AttrNumber attnum;
@@ -1741,15 +1739,15 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
expr = (Node *) list_nth(stat->exprs, idx);
/* try to find the expression in the unique list */
- for (k = 0; k < unique_exprs_cnt; k++)
+ for (int m = 0; m < unique_exprs_cnt; m++)
{
/*
* found a matching unique expression, use the attnum
* (derived from index of the unique expression)
*/
- if (equal(unique_exprs[k], expr))
+ if (equal(unique_exprs[m], expr))
{
- unique_attnum = -(k + 1) + attnum_offset;
+ unique_attnum = -(m + 1) + attnum_offset;
break;
}
}
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 5410a68bc91..91b9635dc0a 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1604,7 +1604,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
Bitmapset *keys, List *exprs,
MCVList *mcvlist, bool is_or)
{
- int i;
ListCell *l;
bool *matches;
@@ -1659,7 +1658,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match = true;
MCVItem *item = &mcvlist->items[i];
@@ -1766,7 +1765,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
int j;
bool match = !expr->useOr;
@@ -1837,7 +1836,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match = false; /* assume mismatch */
MCVItem *item = &mcvlist->items[i];
@@ -1862,7 +1861,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
{
/* AND/OR clause, with all subclauses being compatible */
- int i;
BoolExpr *bool_clause = ((BoolExpr *) clause);
List *bool_clauses = bool_clause->args;
@@ -1881,7 +1879,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* current one. We need to consider if we're evaluating AND or OR
* condition when merging the results.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
pfree(bool_matches);
@@ -1890,7 +1888,6 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
{
/* NOT clause, with all subclauses compatible */
- int i;
BoolExpr *not_clause = ((BoolExpr *) clause);
List *not_args = not_clause->args;
@@ -1909,7 +1906,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* current one. We're handling a NOT clause, so invert the result
* before merging it into the global bitmap.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
pfree(not_matches);
@@ -1930,7 +1927,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
MCVItem *item = &mcvlist->items[i];
bool match = false;
@@ -1956,7 +1953,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match;
MCVItem *item = &mcvlist->items[i];
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 7a1202c6096..49d3b8c9dd0 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3183,7 +3183,6 @@ void
DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
int i;
- int j;
int n = 0;
SMgrRelation *rels;
BlockNumber (*block)[MAX_FORKNUM + 1];
@@ -3232,7 +3231,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
*/
for (i = 0; i < n && cached; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* Get the number of blocks for a relation's fork. */
block[i][j] = smgrnblocks_cached(rels[i], j);
@@ -3259,7 +3258,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
for (i = 0; i < n; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* ignore relation forks that doesn't exist */
if (!BlockNumberIsValid(block[i][j]))
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6b0a8652622..ba9a568389f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1087,6 +1087,23 @@ standard_ProcessUtility(PlannedStmt *pstmt,
CommandCounterIncrement();
}
+static ObjectAddress
+TryExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+ ParamListInfo params, QueryCompletion *qc)
+{
+ ObjectAddress address;
+ PG_TRY();
+ {
+ address = ExecRefreshMatView(stmt, queryString, params, qc);
+ }
+ PG_FINALLY();
+ {
+ EventTriggerUndoInhibitCommandCollection();
+ }
+ PG_END_TRY();
+ return address;
+}
+
/*
* The "Slow" variant of ProcessUtility should only receive statements
* supported by the event triggers facility. Therefore, we always
@@ -1678,16 +1695,10 @@ ProcessUtilitySlow(ParseState *pstate,
* command itself is queued, which is enough.
*/
EventTriggerInhibitCommandCollection();
- PG_TRY();
- {
- address = ExecRefreshMatView((RefreshMatViewStmt *) parsetree,
- queryString, params, qc);
- }
- PG_FINALLY();
- {
- EventTriggerUndoInhibitCommandCollection();
- }
- PG_END_TRY();
+
+ address = TryExecRefreshMatView((RefreshMatViewStmt *) parsetree,
+ queryString, params, qc);
+
break;
case T_CreateTrigStmt:
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 3026cc24311..2e67a90e516 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -193,16 +193,16 @@ varstr_levenshtein(const char *source, int slen,
*/
if (m != slen || n != tlen)
{
- int i;
+ int k;
const char *cp = source;
s_char_len = (int *) palloc((m + 1) * sizeof(int));
- for (i = 0; i < m; ++i)
+ for (k = 0; k < m; ++k)
{
- s_char_len[i] = pg_mblen(cp);
- cp += s_char_len[i];
+ s_char_len[k] = pg_mblen(cp);
+ cp += s_char_len[k];
}
- s_char_len[i] = 0;
+ s_char_len[k] = 0;
}
/* One more cell for initialization column and row. */
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..3f5683f70b5 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1303,7 +1303,6 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
@@ -1500,7 +1499,6 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
{
Node *node;
Datum predDatum;
- bool isnull;
char *predString;
/* Convert text string to node tree */
@@ -1648,7 +1646,6 @@ pg_get_statisticsobj_worker(Oid statextid, bool columns_only, bool missing_ok)
if (has_exprs)
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(STATEXTOID, statexttup,
@@ -1944,7 +1941,6 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2c689157329..c0d09edf9d0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11576,7 +11576,6 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
char **configitems = NULL;
int nconfigitems = 0;
const char *keyword;
- int i;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -11776,11 +11775,10 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
if (*protrftypes)
{
Oid *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
- int i;
appendPQExpBufferStr(q, " TRANSFORM ");
parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
- for (i = 0; typeids[i]; i++)
+ for (int i = 0; typeids[i]; i++)
{
if (i != 0)
appendPQExpBufferStr(q, ", ");
@@ -11853,7 +11851,7 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
finfo->dobj.name);
}
- for (i = 0; i < nconfigitems; i++)
+ for (int i = 0; i < nconfigitems; i++)
{
/* we feel free to scribble on configitems[] here */
char *configitem = configitems[i];
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index a97b3300cb8..b666c909084 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1062,7 +1062,6 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
int weight_tmp;
int rscale_tmp;
int ri;
- int i;
long guess;
long first_have;
long first_div;
@@ -1109,7 +1108,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
* Initialize local variables
*/
init_var(&dividend);
- for (i = 1; i < 10; i++)
+ for (int i = 1; i < 10; i++)
init_var(&divisor[i]);
/*
@@ -1181,7 +1180,6 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
{
if (divisor[guess].buf == NULL)
{
- int i;
long sum = 0;
memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
@@ -1189,7 +1187,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
if (divisor[guess].buf == NULL)
goto done;
divisor[guess].digits = divisor[guess].buf;
- for (i = divisor[1].ndigits - 1; i >= 0; i--)
+ for (int i = divisor[1].ndigits - 1; i >= 0; i--)
{
sum += divisor[1].digits[i] * guess;
divisor[guess].digits[i] = sum % 10;
@@ -1268,7 +1266,7 @@ done:
if (dividend.buf != NULL)
digitbuf_free(dividend.buf);
- for (i = 1; i < 10; i++)
+ for (int i = 1; i < 10; i++)
{
if (divisor[i].buf != NULL)
digitbuf_free(divisor[i].buf);
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..7e6169fc203 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1597,14 +1597,13 @@ dump_expr(PLpgSQL_expr *expr)
void
plpgsql_dumptree(PLpgSQL_function *func)
{
- int i;
PLpgSQL_datum *d;
printf("\nExecution tree of successfully compiled PL/pgSQL function %s:\n",
func->fn_signature);
printf("\nFunction's data area:\n");
- for (i = 0; i < func->ndatums; i++)
+ for (int i = 0; i < func->ndatums; i++)
{
d = func->datums[i];
@@ -1647,13 +1646,12 @@ plpgsql_dumptree(PLpgSQL_function *func)
case PLPGSQL_DTYPE_ROW:
{
PLpgSQL_row *row = (PLpgSQL_row *) d;
- int i;
printf("ROW %-16s fields", row->refname);
- for (i = 0; i < row->nfields; i++)
+ for (int j = 0; j < row->nfields; j++)
{
- printf(" %s=var %d", row->fieldnames[i],
- row->varnos[i]);
+ printf(" %s=var %d", row->fieldnames[j],
+ row->varnos[j]);
}
printf("\n");
}
On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzbyj@telsasoft.com> wrote:
Attached is a squished version.
I see there are some renaming ones that snuck in there, e.g.:
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;
This one does not seem to be in the category I mentioned:
@@ -3036,8 +3036,6 @@ XLogFileInitInternal(XLogSegNo logsegno,
TimeLineID logtli,
pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
if (pg_fsync(fd) != 0)
{
- int save_errno = errno;
-
More renaming:
+++ b/src/backend/catalog/heap.c
@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)
*/
if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
{
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;
More renaming:
+++ b/src/backend/commands/publicationcmds.c
@@ -106,7 +106,7 @@ parse_publication_options(ParseState *pstate,
{
char *publish;
List *publish_list;
- ListCell *lc;
+ ListCell *lc2;
and again:
+++ b/src/backend/commands/tablecmds.c
@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation
parentRel, Relation partRel)
Oid constrOid;
ObjectAddress address,
referenced;
- ListCell *cell;
+ ListCell *lc;
I've not checked the context on this one, but this does not appear to
meet the category of moving to an inner scope:
+++ b/src/backend/executor/execPartition.c
@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate,
EState *estate,
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
Looks like you're just using the one from the wider scope. That's not
the category we're after for now.
You've also got some renaming going on in ExecInitAgg()
- phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+ phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
I wondered about this one too:
- i = -1;
- while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
- aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ {
+ int i = -1;
+ while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+ aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ }
I had in mind that maybe we should switch those to be something more like:
for (int i = -1; (i = bms_next_member(all_grouped_cols, i)) >= 0;)
But I had second thoughts, as the "while" version has become the standard method.
(Really that code should be using bms_prev_member() and lappend_int()
so we don't have to memmove() the entire list on each lcons_int() call;
not for this patch though.)
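For the record, the shape I have in mind is roughly this (untested
sketch against the usual bitmapset/list APIs):
/* Untested sketch: iterate the set in descending order so we can
 * lappend_int() instead of lcons_int(), building the same list
 * without memmove()ing the existing elements on every insertion. */
int		col = -1;
while ((col = bms_prev_member(all_grouped_cols, col)) >= 0)
	aggstate->all_grouped_cols = lappend_int(aggstate->all_grouped_cols,
											 col);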
More renaming being done here:
- int i; /* Index into *ident_user */
+ int j; /* Index into *ident_user */
... in fact, there's lots of renaming, so I'll just stop looking.
Can you just send a patch that only changes the cases where you can
remove a variable declaration from an outer scope into a single inner
scope, or into multiple inner scopes when the variable can be declared
inside a for() loop? The mcv_get_match_bitmap() change is an example
of this. There's still a net reduction in lines of code, so I think
the mcv_get_match_bitmap() change, and any like it, are ok for this next step.
A counter-example is ExecInitPartitionInfo(), where the way to do this
would be to move the found_whole_row declaration into multiple inner
scopes. That's a net increase in code lines, which I think requires
more careful thought about whether we want that or not.
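To be explicit, the mechanical shape I'm asking for is just this
(illustrative sketch only, with process() standing in for whatever the
loop body does):
extern void process(int x);

/* Before: index declared at function scope, available to be shadowed */
void
before(int nitems)
{
	int		i;

	for (i = 0; i < nitems; i++)
		process(i);
}

/* After: declared in the narrowest scope; no outer variable remains */
void
after(int nitems)
{
	for (int i = 0; i < nitems; i++)
		process(i);
}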
David
On Tue, Aug 23, 2022 at 01:38:40PM +1200, David Rowley wrote:
On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzbyj@telsasoft.com> wrote:
Attached is a squished version.
I see there are some renaming ones that snuck in there, e.g.:
... in fact, there's lots of renaming, so I'll just stop looking.
Actually, they didn't sneak in - what I sent are the patches which are ready to
be reviewed, excluding the set of "this" and "tmp" and other renames which you
disliked. In the branch (not the squished patch) the first ~15 patches were
mostly for C99 for loops - I presented them this way deliberately, so you could
review and comment on whatever you're able to bite off, or run with whatever
parts you think are ready. I rewrote it now to be more bite sized by
truncating off the 2nd half of the patches.
Can you just send a patch that only changes the cases where you can
remove a variable declaration from an outer scope into a single inner
scope, or into multiple inner scopes when the variable can be declared
inside a for() loop?
A counter-example is ExecInitPartitionInfo(), where the way to do this
would be to move the found_whole_row declaration into multiple inner
scopes. That's a net increase in code lines, which I think requires
more careful thought about whether we want that or not.
IMO it doesn't make sense to declare multiple integers for something like this
when they're all ignored. Nor for "save_errno", nor the third, similar case,
for the reason in the commit message.
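As a sketch of what I mean for the save_errno case (not the actual
XLogFileInitInternal() code): when the function already keeps a
save_errno for its error paths, the inner block can assign to it
rather than declaring a shadowing copy.
#include <errno.h>
#include <unistd.h>

int
sync_and_close(int fd)
{
	int		save_errno = 0;	/* single copy at function scope */

	if (fsync(fd) != 0)
	{
		save_errno = errno;	/* was: int save_errno = errno; (shadow) */
		close(fd);
		errno = save_errno;
		return -1;
	}
	return close(fd);
}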
--
Justin
Attachments:
v2-truncated.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index e88f7efa7e4..69f21abfb59 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -353,45 +353,44 @@ brinbeginscan(Relation r, int nkeys, int norderbys)
int64
bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
{
Relation idxRel = scan->indexRelation;
Buffer buf = InvalidBuffer;
BrinDesc *bdesc;
Oid heapOid;
Relation heapRel;
BrinOpaque *opaque;
BlockNumber nblocks;
BlockNumber heapBlk;
int totalpages = 0;
FmgrInfo *consistentFn;
MemoryContext oldcxt;
MemoryContext perRangeCxt;
BrinMemTuple *dtup;
BrinTuple *btup = NULL;
Size btupsz = 0;
ScanKey **keys,
**nullkeys;
int *nkeys,
*nnullkeys;
- int keyno;
char *ptr;
Size len;
char *tmp PG_USED_FOR_ASSERTS_ONLY;
opaque = (BrinOpaque *) scan->opaque;
bdesc = opaque->bo_bdesc;
pgstat_count_index_scan(idxRel);
/*
* We need to know the size of the table so that we know how long to
* iterate on the revmap.
*/
heapOid = IndexGetRelation(RelationGetRelid(idxRel), false);
heapRel = table_open(heapOid, AccessShareLock);
nblocks = RelationGetNumberOfBlocks(heapRel);
table_close(heapRel, AccessShareLock);
/*
* Make room for the consistent support procedures of indexed columns. We
* don't look them up here; we do that lazily the first time we see a scan
* key reference each of them. We rely on zeroing fn_oid to InvalidOid.
*/
@@ -435,45 +434,45 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
nkeys = (int *) ptr;
ptr += MAXALIGN(sizeof(int) * bdesc->bd_tupdesc->natts);
nnullkeys = (int *) ptr;
ptr += MAXALIGN(sizeof(int) * bdesc->bd_tupdesc->natts);
for (int i = 0; i < bdesc->bd_tupdesc->natts; i++)
{
keys[i] = (ScanKey *) ptr;
ptr += MAXALIGN(sizeof(ScanKey) * scan->numberOfKeys);
nullkeys[i] = (ScanKey *) ptr;
ptr += MAXALIGN(sizeof(ScanKey) * scan->numberOfKeys);
}
Assert(tmp + len == ptr);
/* zero the number of keys */
memset(nkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
memset(nnullkeys, 0, sizeof(int) * bdesc->bd_tupdesc->natts);
/* Preprocess the scan keys - split them into per-attribute arrays. */
- for (keyno = 0; keyno < scan->numberOfKeys; keyno++)
+ for (int keyno = 0; keyno < scan->numberOfKeys; keyno++)
{
ScanKey key = &scan->keyData[keyno];
AttrNumber keyattno = key->sk_attno;
/*
* The collation of the scan key must match the collation used in the
* index column (but only if the search is not IS NULL/ IS NOT NULL).
* Otherwise we shouldn't be using this index ...
*/
Assert((key->sk_flags & SK_ISNULL) ||
(key->sk_collation ==
TupleDescAttr(bdesc->bd_tupdesc,
keyattno - 1)->attcollation));
/*
* First time we see this index attribute, so init as needed.
*
* This is a bit of an overkill - we don't know how many scan keys are
* there for this attribute, so we simply allocate the largest number
* possible (as if all keys were for this attribute). This may waste a
* bit of memory, but we only expect small number of scan keys in
* general, so this should be negligible, and repeated repalloc calls
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 10d4f17bc6f..524c1846b83 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -563,125 +563,120 @@ range_deduplicate_values(Ranges *range)
AssertCheckRanges(range, range->cmp, range->colloid);
}
/*
* brin_range_serialize
* Serialize the in-memory representation into a compact varlena value.
*
* Simply copy the header and then also the individual values, as stored
* in the in-memory value array.
*/
static SerializedRanges *
brin_range_serialize(Ranges *range)
{
Size len;
int nvalues;
SerializedRanges *serialized;
Oid typid;
int typlen;
bool typbyval;
- int i;
char *ptr;
/* simple sanity checks */
Assert(range->nranges >= 0);
Assert(range->nsorted >= 0);
Assert(range->nvalues >= 0);
Assert(range->maxvalues > 0);
Assert(range->target_maxvalues > 0);
/* at this point the range should be compacted to the target size */
Assert(2 * range->nranges + range->nvalues <= range->target_maxvalues);
Assert(range->target_maxvalues <= range->maxvalues);
/* range boundaries are always sorted */
Assert(range->nvalues >= range->nsorted);
/* deduplicate values, if there's unsorted part */
range_deduplicate_values(range);
/* see how many Datum values we actually have */
nvalues = 2 * range->nranges + range->nvalues;
typid = range->typid;
typbyval = get_typbyval(typid);
typlen = get_typlen(typid);
/* header is always needed */
len = offsetof(SerializedRanges, data);
/*
* The space needed depends on data type - for fixed-length data types
* (by-value and some by-reference) it's pretty simple, just multiply
* (attlen * nvalues) and we're done. For variable-length by-reference
* types we need to actually walk all the values and sum the lengths.
*/
if (typlen == -1) /* varlena */
{
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
len += VARSIZE_ANY(range->values[i]);
}
}
else if (typlen == -2) /* cstring */
{
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
/* don't forget to include the null terminator ;-) */
len += strlen(DatumGetCString(range->values[i])) + 1;
}
}
else /* fixed-length types (even by-reference) */
{
Assert(typlen > 0);
len += nvalues * typlen;
}
/*
* Allocate the serialized object, copy the basic information. The
* serialized object is a varlena, so update the header.
*/
serialized = (SerializedRanges *) palloc0(len);
SET_VARSIZE(serialized, len);
serialized->typid = typid;
serialized->nranges = range->nranges;
serialized->nvalues = range->nvalues;
serialized->maxvalues = range->target_maxvalues;
/*
* And now copy also the boundary values (like the length calculation this
* depends on the particular data type).
*/
ptr = serialized->data; /* start of the serialized data */
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
{
if (typbyval) /* simple by-value data types */
{
Datum tmp;
/*
* For byval types, we need to copy just the significant bytes -
* we can't use memcpy directly, as that assumes little-endian
* behavior. store_att_byval does almost what we need, but it
* requires a properly aligned buffer - the output buffer does not
* guarantee that. So we simply use a local Datum variable (which
* guarantees proper alignment), and then copy the value from it.
*/
store_att_byval(&tmp, range->values[i], typlen);
memcpy(ptr, &tmp, typlen);
ptr += typlen;
}
else if (typlen > 0) /* fixed-length by-ref types */
{
memcpy(ptr, DatumGetPointer(range->values[i]), typlen);
ptr += typlen;
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 5866c6aaaf7..30069f139c7 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -215,45 +215,44 @@ gistinsert(Relation r, Datum *values, bool *isnull,
*
* If 'newblkno' is not NULL, returns the block number of page the first
* new/updated tuple was inserted to. Usually it's the given page, but could
* be its right sibling if the page was split.
*
* Returns 'true' if the page was split, 'false' otherwise.
*/
bool
gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
Buffer buffer,
IndexTuple *itup, int ntup, OffsetNumber oldoffnum,
BlockNumber *newblkno,
Buffer leftchildbuf,
List **splitinfo,
bool markfollowright,
Relation heapRel,
bool is_build)
{
BlockNumber blkno = BufferGetBlockNumber(buffer);
Page page = BufferGetPage(buffer);
bool is_leaf = (GistPageIsLeaf(page)) ? true : false;
XLogRecPtr recptr;
- int i;
bool is_split;
/*
* Refuse to modify a page that's incompletely split. This should not
* happen because we finish any incomplete splits while we walk down the
* tree. However, it's remotely possible that another concurrent inserter
* splits a parent page, and errors out before completing the split. We
* will just throw an error in that case, and leave any split we had in
* progress unfinished too. The next insert that comes along will clean up
* the mess.
*/
if (GistFollowRight(page))
elog(ERROR, "concurrent GiST page split was incomplete");
/* should never try to insert to a deleted page */
Assert(!GistPageIsDeleted(page));
*splitinfo = NIL;
/*
* if isupdate, remove old key: This node's key has been modified, either
* because a child split occurred or because we needed to adjust our key
@@ -401,45 +400,45 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
}
else
{
/* Prepare split-info to be returned to caller */
for (ptr = dist; ptr; ptr = ptr->next)
{
GISTPageSplitInfo *si = palloc(sizeof(GISTPageSplitInfo));
si->buf = ptr->buffer;
si->downlink = ptr->itup;
*splitinfo = lappend(*splitinfo, si);
}
}
/*
* Fill all pages. All the pages are new, ie. freshly allocated empty
* pages, or a temporary copy of the old page.
*/
for (ptr = dist; ptr; ptr = ptr->next)
{
char *data = (char *) (ptr->list);
- for (i = 0; i < ptr->block.num; i++)
+ for (int i = 0; i < ptr->block.num; i++)
{
IndexTuple thistup = (IndexTuple) data;
if (PageAddItem(ptr->page, (Item) data, IndexTupleSize(thistup), i + FirstOffsetNumber, false, false) == InvalidOffsetNumber)
elog(ERROR, "failed to add item to index page in \"%s\"", RelationGetRelationName(rel));
/*
* If this is the first inserted/updated tuple, let the caller
* know which page it landed on.
*/
if (newblkno && ItemPointerEquals(&thistup->t_tid, &(*itup)->t_tid))
*newblkno = ptr->block.blkno;
data += IndexTupleSize(thistup);
}
/* Set up rightlinks */
if (ptr->next && ptr->block.blkno != GIST_ROOT_BLKNO)
GistPageGetOpaque(ptr->page)->rightlink =
ptr->next->block.blkno;
else
GistPageGetOpaque(ptr->page)->rightlink = oldrlink;
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index a976008b3d4..e8bb168aea8 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -1183,45 +1183,44 @@ CopyFrom(CopyFromState cstate)
* 'attnamelist': List of char *, columns to include. NIL selects all cols.
* 'options': List of DefElem. See copy_opt_item in gram.y for selections.
*
* Returns a CopyFromState, to be passed to NextCopyFrom and related functions.
*/
CopyFromState
BeginCopyFrom(ParseState *pstate,
Relation rel,
Node *whereClause,
const char *filename,
bool is_program,
copy_data_source_cb data_source_cb,
List *attnamelist,
List *options)
{
CopyFromState cstate;
bool pipe = (filename == NULL);
TupleDesc tupDesc;
AttrNumber num_phys_attrs,
num_defaults;
FmgrInfo *in_functions;
Oid *typioparams;
- int attnum;
Oid in_func_oid;
int *defmap;
ExprState **defexprs;
MemoryContext oldcontext;
bool volatile_defexprs;
const int progress_cols[] = {
PROGRESS_COPY_COMMAND,
PROGRESS_COPY_TYPE,
PROGRESS_COPY_BYTES_TOTAL
};
int64 progress_vals[] = {
PROGRESS_COPY_COMMAND_FROM,
0,
0
};
/* Allocate workspace and zero all fields */
cstate = (CopyFromStateData *) palloc0(sizeof(CopyFromStateData));
/*
* We allocate everything used by a cstate in a new memory context. This
* avoids memory leaks during repeated use of COPY in a query.
@@ -1382,45 +1381,45 @@ BeginCopyFrom(ParseState *pstate,
initStringInfo(&cstate->attribute_buf);
/* Assign range table, we'll need it in CopyFrom. */
if (pstate)
cstate->range_table = pstate->p_rtable;
tupDesc = RelationGetDescr(cstate->rel);
num_phys_attrs = tupDesc->natts;
num_defaults = 0;
volatile_defexprs = false;
/*
* Pick up the required catalog information for each attribute in the
* relation, including the input function, the element type (to pass to
* the input function), and info about defaults and constraints. (Which
* input function we use depends on text/binary format choice.)
*/
in_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));
typioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));
defmap = (int *) palloc(num_phys_attrs * sizeof(int));
defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
- for (attnum = 1; attnum <= num_phys_attrs; attnum++)
+ for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
{
Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
/* We don't need info for dropped attributes */
if (att->attisdropped)
continue;
/* Fetch the input function and typioparam info */
if (cstate->opts.binary)
getTypeBinaryInputInfo(att->atttypid,
&in_func_oid, &typioparams[attnum - 1]);
else
getTypeInputInfo(att->atttypid,
&in_func_oid, &typioparams[attnum - 1]);
fmgr_info(in_func_oid, &in_functions[attnum - 1]);
/* Get default info if needed */
if (!list_member_int(cstate->attnumlist, attnum) && !att->attgenerated)
{
/* attribute is NOT to be copied from input */
/* use default value if one exists */
Expr *defexpr = (Expr *) build_column_default(cstate->rel,
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 667f2a4cd16..3c6e09815e0 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -546,45 +546,44 @@ DefineIndex(Oid relationId,
Form_pg_am accessMethodForm;
IndexAmRoutine *amRoutine;
bool amcanorder;
amoptions_function amoptions;
bool partitioned;
bool safe_index;
Datum reloptions;
int16 *coloptions;
IndexInfo *indexInfo;
bits16 flags;
bits16 constr_flags;
int numberOfAttributes;
int numberOfKeyAttributes;
TransactionId limitXmin;
ObjectAddress address;
LockRelId heaprelid;
LOCKTAG heaplocktag;
LOCKMODE lockmode;
Snapshot snapshot;
Oid root_save_userid;
int root_save_sec_context;
int root_save_nestlevel;
- int i;
root_save_nestlevel = NewGUCNestLevel();
/*
* Some callers need us to run with an empty default_tablespace; this is a
* necessary hack to be able to reproduce catalog state accurately when
* recreating indexes after table-rewriting ALTER TABLE.
*/
if (stmt->reset_default_tblspc)
(void) set_config_option("default_tablespace", "",
PGC_USERSET, PGC_S_SESSION,
GUC_ACTION_SAVE, true, 0, false);
/*
* Force non-concurrent build on temporary relations, even if CONCURRENTLY
* was requested. Other backends can't access a temporary relation, so
* there's no harm in grabbing a stronger lock, and a non-concurrent DROP
* is more efficient. Do this before any use of the concurrent option is
* done.
*/
if (stmt->concurrent && get_rel_persistence(relationId) != RELPERSISTENCE_TEMP)
concurrent = true;
@@ -1028,65 +1027,65 @@ DefineIndex(Oid relationId,
if (!found)
{
Form_pg_attribute att;
att = TupleDescAttr(RelationGetDescr(rel),
key->partattrs[i] - 1);
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("unique constraint on partitioned table must include all partitioning columns"),
errdetail("%s constraint on table \"%s\" lacks column \"%s\" which is part of the partition key.",
constraint_type, RelationGetRelationName(rel),
NameStr(att->attname))));
}
}
}
/*
* We disallow indexes on system columns. They would not necessarily get
* updated correctly, and they don't seem useful anyway.
*/
- for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+ for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
{
AttrNumber attno = indexInfo->ii_IndexAttrNumbers[i];
if (attno < 0)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("index creation on system columns is not supported")));
}
/*
* Also check for system columns used in expressions or predicates.
*/
if (indexInfo->ii_Expressions || indexInfo->ii_Predicate)
{
Bitmapset *indexattrs = NULL;
pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
- for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
+ for (int i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
{
if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber,
indexattrs))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("index creation on system columns is not supported")));
}
}
/* Is index safe for others to ignore? See set_indexsafe_procflags() */
safe_index = indexInfo->ii_Expressions == NIL &&
indexInfo->ii_Predicate == NIL;
/*
* Report index creation if appropriate (delay this till after most of the
* error checks)
*/
if (stmt->isconstraint && !quiet)
{
const char *constraint_type;
if (stmt->primary)
@@ -1224,45 +1223,45 @@ DefineIndex(Oid relationId,
/*
* We'll need an IndexInfo describing the parent index. The one
* built above is almost good enough, but not quite, because (for
* example) its predicate expression if any hasn't been through
* expression preprocessing. The most reliable way to get an
* IndexInfo that will match those for child indexes is to build
* it the same way, using BuildIndexInfo().
*/
parentIndex = index_open(indexRelationId, lockmode);
indexInfo = BuildIndexInfo(parentIndex);
parentDesc = RelationGetDescr(rel);
/*
* For each partition, scan all existing indexes; if one matches
* our index definition and is not already attached to some other
* parent index, attach it to the one we just created.
*
* If none matches, build a new index by calling ourselves
* recursively with the same options (except for the index name).
*/
- for (i = 0; i < nparts; i++)
+ for (int i = 0; i < nparts; i++)
{
Oid childRelid = part_oids[i];
Relation childrel;
Oid child_save_userid;
int child_save_sec_context;
int child_save_nestlevel;
List *childidxs;
ListCell *cell;
AttrMap *attmap;
bool found = false;
childrel = table_open(childRelid, lockmode);
GetUserIdAndSecContext(&child_save_userid,
&child_save_sec_context);
SetUserIdAndSecContext(childrel->rd_rel->relowner,
child_save_sec_context | SECURITY_RESTRICTED_OPERATION);
child_save_nestlevel = NewGUCNestLevel();
/*
* Don't try to create indexes on foreign tables, though. Skip
* those if a regular index, or fail if trying to create a
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 96d200e4461..933c3049016 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -1277,51 +1277,50 @@ prepare_projection_slot(AggState *aggstate, TupleTableSlot *slot, int currentSet
}
}
}
/*
* Compute the final value of all aggregates for one group.
*
* This function handles only one grouping set at a time, which the caller must
* have selected. It's also the caller's responsibility to adjust the supplied
* pergroup parameter to point to the current set's transvalues.
*
* Results are stored in the output econtext aggvalues/aggnulls.
*/
static void
finalize_aggregates(AggState *aggstate,
AggStatePerAgg peraggs,
AggStatePerGroup pergroup)
{
ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
Datum *aggvalues = econtext->ecxt_aggvalues;
bool *aggnulls = econtext->ecxt_aggnulls;
int aggno;
- int transno;
/*
* If there were any DISTINCT and/or ORDER BY aggregates, sort their
* inputs and run the transition functions.
*/
- for (transno = 0; transno < aggstate->numtrans; transno++)
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
{
AggStatePerTrans pertrans = &aggstate->pertrans[transno];
AggStatePerGroup pergroupstate;
pergroupstate = &pergroup[transno];
if (pertrans->aggsortrequired)
{
Assert(aggstate->aggstrategy != AGG_HASHED &&
aggstate->aggstrategy != AGG_MIXED);
if (pertrans->numInputs == 1)
process_ordered_aggregate_single(aggstate,
pertrans,
pergroupstate);
else
process_ordered_aggregate_multi(aggstate,
pertrans,
pergroupstate);
}
else if (pertrans->numDistinctCols > 0 && pertrans->haslast)
{
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1631,54 +1631,54 @@ interpret_ident_response(const char *ident_response,
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
if (strcmp(response_type, "USERID") != 0)
return false;
else
{
/*
* It's a USERID response. Good. "cursor" should be pointing
* to the colon that precedes the operating system type.
*/
if (*cursor != ':')
return false;
else
{
cursor++; /* Go over colon */
/* Skip over operating system field. */
while (*cursor != ':' && *cursor != '\r')
cursor++;
if (*cursor != ':')
return false;
else
{
- int i; /* Index into *ident_user */
+ int j; /* Index into *ident_user */
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
- i = 0;
+ j = 0;
- while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
+ while (*cursor != '\r' && j < IDENT_USERNAME_MAX)
- ident_user[i++] = *cursor++;
- ident_user[i] = '\0';
+ ident_user[j++] = *cursor++;
+ ident_user[j] = '\0';
return true;
}
}
}
}
}
}
/*
* Talk to the ident server on "remote_addr" and find out who
* owns the tcp connection to "local_addr"
* If the username is successfully retrieved, check the usermap.
*
* XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if the
* latch was set would improve the responsiveness to timeouts/cancellations.
*/
static int
ident_inet(hbaPort *port)
{
const SockAddr remote_addr = port->raddr;
const SockAddr local_addr = port->laddr;
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 1e94c5aa7c4..75acea149c7 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2428,101 +2428,101 @@ cost_sort(Path *path, PlannerInfo *root,
startup_cost += disable_cost;
startup_cost += input_cost;
path->rows = tuples;
path->startup_cost = startup_cost;
path->total_cost = startup_cost + run_cost;
}
/*
* append_nonpartial_cost
* Estimate the cost of the non-partial paths in a Parallel Append.
* The non-partial paths are assumed to be the first "numpaths" paths
* from the subpaths list, and to be in order of decreasing cost.
*/
static Cost
append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
{
Cost *costarr;
int arrlen;
ListCell *l;
ListCell *cell;
- int i;
int path_index;
int min_index;
int max_index;
if (numpaths == 0)
return 0;
/*
* Array length is number of workers or number of relevant paths,
* whichever is less.
*/
arrlen = Min(parallel_workers, numpaths);
costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
/* The first few paths will each be claimed by a different worker. */
path_index = 0;
foreach(cell, subpaths)
{
Path *subpath = (Path *) lfirst(cell);
if (path_index == arrlen)
break;
costarr[path_index++] = subpath->total_cost;
}
/*
* Since subpaths are sorted by decreasing cost, the last one will have
* the minimum cost.
*/
min_index = arrlen - 1;
/*
* For each of the remaining subpaths, add its cost to the array element
* with minimum cost.
*/
for_each_cell(l, subpaths, cell)
{
Path *subpath = (Path *) lfirst(l);
- int i;
/* Consider only the non-partial paths */
if (path_index++ == numpaths)
break;
costarr[min_index] += subpath->total_cost;
/* Update the new min cost array index */
- for (min_index = i = 0; i < arrlen; i++)
+ min_index = 0;
+ for (int i = 0; i < arrlen; i++)
{
if (costarr[i] < costarr[min_index])
min_index = i;
}
}
/* Return the highest cost from the array */
- for (max_index = i = 0; i < arrlen; i++)
+ max_index = 0;
+ for (int i = 0; i < arrlen; i++)
{
if (costarr[i] > costarr[max_index])
max_index = i;
}
return costarr[max_index];
}
/*
* cost_append
* Determines and returns the cost of an Append node.
*/
void
cost_append(AppendPath *apath, PlannerInfo *root)
{
ListCell *l;
apath->path.startup_cost = 0;
apath->path.total_cost = 0;
apath->path.rows = 0;
if (apath->subpaths == NIL)
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 5410a68bc91..91b9635dc0a 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1585,45 +1585,44 @@ mcv_match_expression(Node *expr, Bitmapset *keys, List *exprs, Oid *collid)
* Evaluate clauses using the MCV list, and update the match bitmap.
*
* A match bitmap keeps match/mismatch status for each MCV item, and we
* update it based on additional clauses. We also use it to skip items
* that can't possibly match (e.g. item marked as "mismatch" can't change
* to "match" when evaluating AND clause list).
*
* The function also returns a flag indicating whether there was an
* equality condition for all attributes, the minimum frequency in the MCV
* list, and a total MCV frequency (sum of frequencies for all items).
*
* XXX Currently the match bitmap uses a bool for each MCV item, which is
* somewhat wasteful as we could do with just a single bit, thus reducing
* the size to ~1/8. It would also allow us to combine bitmaps simply using
* & and |, which should be faster than min/max. The bitmaps are fairly
* small, though (thanks to the cap on the MCV list size).
*/
static bool *
mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
Bitmapset *keys, List *exprs,
MCVList *mcvlist, bool is_or)
{
- int i;
ListCell *l;
bool *matches;
/* The bitmap may be partially built. */
Assert(clauses != NIL);
Assert(mcvlist != NULL);
Assert(mcvlist->nitems > 0);
Assert(mcvlist->nitems <= STATS_MCVLIST_MAX_ITEMS);
matches = palloc(sizeof(bool) * mcvlist->nitems);
memset(matches, !is_or, sizeof(bool) * mcvlist->nitems);
/*
* Loop through the list of clauses, and for each of them evaluate all the
* MCV items not yet eliminated by the preceding clauses.
*/
foreach(l, clauses)
{
Node *clause = (Node *) lfirst(l);
/* if it's a RestrictInfo, then extract the clause */
if (IsA(clause, RestrictInfo))
@@ -1640,45 +1639,45 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
/* valid only after examine_opclause_args returns true */
Node *clause_expr;
Const *cst;
bool expronleft;
int idx;
Oid collid;
fmgr_info(get_opcode(expr->opno), &opproc);
/* extract the var/expr and const from the expression */
if (!examine_opclause_args(expr->args, &clause_expr, &cst, &expronleft))
elog(ERROR, "incompatible clause");
/* match the attribute/expression to a dimension of the statistic */
idx = mcv_match_expression(clause_expr, keys, exprs, &collid);
/*
* Walk through the MCV items and evaluate the current clause. We
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match = true;
MCVItem *item = &mcvlist->items[i];
Assert(idx >= 0);
/*
* When the MCV item or the Const value is NULL we can treat
* this as a mismatch. We must not call the operator because
* of strictness.
*/
if (item->isnull[idx] || cst->constisnull)
{
matches[i] = RESULT_MERGE(matches[i], is_or, false);
continue;
}
/*
* Skip MCV items that can't change result in the bitmap. Once
* the value gets false for AND-lists, or true for OR-lists,
* we don't need to look at more clauses.
*/
@@ -1747,45 +1746,45 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* Deconstruct the array constant, unless it's NULL (we'll cover
* that case below)
*/
if (!cst->constisnull)
{
arrayval = DatumGetArrayTypeP(cst->constvalue);
get_typlenbyvalalign(ARR_ELEMTYPE(arrayval),
&elmlen, &elmbyval, &elmalign);
deconstruct_array(arrayval,
ARR_ELEMTYPE(arrayval),
elmlen, elmbyval, elmalign,
&elem_values, &elem_nulls, &num_elems);
}
/* match the attribute/expression to a dimension of the statistic */
idx = mcv_match_expression(clause_expr, keys, exprs, &collid);
/*
* Walk through the MCV items and evaluate the current clause. We
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
int j;
bool match = !expr->useOr;
MCVItem *item = &mcvlist->items[i];
/*
* When the MCV item or the Const value is NULL we can treat
* this as a mismatch. We must not call the operator because
* of strictness.
*/
if (item->isnull[idx] || cst->constisnull)
{
matches[i] = RESULT_MERGE(matches[i], is_or, false);
continue;
}
/*
* Skip MCV items that can't change result in the bitmap. Once
* the value gets false for AND-lists, or true for OR-lists,
* we don't need to look at more clauses.
*/
if (RESULT_IS_FINAL(matches[i], is_or))
@@ -1818,164 +1817,162 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
elem_value));
match = RESULT_MERGE(match, expr->useOr, elem_match);
}
/* update the match bitmap with the result */
matches[i] = RESULT_MERGE(matches[i], is_or, match);
}
}
else if (IsA(clause, NullTest))
{
NullTest *expr = (NullTest *) clause;
Node *clause_expr = (Node *) (expr->arg);
/* match the attribute/expression to a dimension of the statistic */
int idx = mcv_match_expression(clause_expr, keys, exprs, NULL);
/*
* Walk through the MCV items and evaluate the current clause. We
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match = false; /* assume mismatch */
MCVItem *item = &mcvlist->items[i];
/* if the clause mismatches the MCV item, update the bitmap */
switch (expr->nulltesttype)
{
case IS_NULL:
match = (item->isnull[idx]) ? true : match;
break;
case IS_NOT_NULL:
match = (!item->isnull[idx]) ? true : match;
break;
}
/* now, update the match bitmap, depending on OR/AND type */
matches[i] = RESULT_MERGE(matches[i], is_or, match);
}
}
else if (is_orclause(clause) || is_andclause(clause))
{
/* AND/OR clause, with all subclauses being compatible */
- int i;
BoolExpr *bool_clause = ((BoolExpr *) clause);
List *bool_clauses = bool_clause->args;
/* match/mismatch bitmap for each MCV item */
bool *bool_matches = NULL;
Assert(bool_clauses != NIL);
Assert(list_length(bool_clauses) >= 2);
/* build the match bitmap for the OR-clauses */
bool_matches = mcv_get_match_bitmap(root, bool_clauses, keys, exprs,
mcvlist, is_orclause(clause));
/*
* Merge the bitmap produced by mcv_get_match_bitmap into the
* current one. We need to consider if we're evaluating AND or OR
* condition when merging the results.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
pfree(bool_matches);
}
else if (is_notclause(clause))
{
/* NOT clause, with all subclauses compatible */
- int i;
BoolExpr *not_clause = ((BoolExpr *) clause);
List *not_args = not_clause->args;
/* match/mismatch bitmap for each MCV item */
bool *not_matches = NULL;
Assert(not_args != NIL);
Assert(list_length(not_args) == 1);
/* build the match bitmap for the NOT-clause */
not_matches = mcv_get_match_bitmap(root, not_args, keys, exprs,
mcvlist, false);
/*
* Merge the bitmap produced by mcv_get_match_bitmap into the
* current one. We're handling a NOT clause, so invert the result
* before merging it into the global bitmap.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
pfree(not_matches);
}
else if (IsA(clause, Var))
{
/* Var (has to be a boolean Var, possibly from below NOT) */
Var *var = (Var *) (clause);
/* match the attribute to a dimension of the statistic */
int idx = bms_member_index(keys, var->varattno);
Assert(var->vartype == BOOLOID);
/*
* Walk through the MCV items and evaluate the current clause. We
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
MCVItem *item = &mcvlist->items[i];
bool match = false;
/* if the item is NULL, it's a mismatch */
if (!item->isnull[idx] && DatumGetBool(item->values[idx]))
match = true;
/* update the result bitmap */
matches[i] = RESULT_MERGE(matches[i], is_or, match);
}
}
else
{
/* Otherwise, it must be a bare boolean-returning expression */
int idx;
/* match the expression to a dimension of the statistic */
idx = mcv_match_expression(clause, keys, exprs, NULL);
/*
* Walk through the MCV items and evaluate the current clause. We
* can skip items that were already ruled out, and terminate if
* there are no remaining MCV items that might possibly match.
*/
- for (i = 0; i < mcvlist->nitems; i++)
+ for (int i = 0; i < mcvlist->nitems; i++)
{
bool match;
MCVItem *item = &mcvlist->items[i];
/* "match" just means it's bool TRUE */
match = !item->isnull[idx] && DatumGetBool(item->values[idx]);
/* now, update the match bitmap, depending on OR/AND type */
matches[i] = RESULT_MERGE(matches[i], is_or, match);
}
}
}
return matches;
}
/*
* mcv_combine_selectivities
* Combine per-column and multi-column MCV selectivity estimates.
*
* simple_sel is a "simple" selectivity estimate (produced without using any
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 7a1202c6096..49d3b8c9dd0 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3164,45 +3164,44 @@ DropRelationBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,
{
InvalidateBuffer(bufHdr); /* releases spinlock */
break;
}
}
if (j >= nforks)
UnlockBufHdr(bufHdr, buf_state);
}
}
/* ---------------------------------------------------------------------
* DropRelationsAllBuffers
*
* This function removes from the buffer pool all the pages of all
* forks of the specified relations. It's equivalent to calling
* DropRelationBuffers once per fork per relation with firstDelBlock = 0.
* --------------------------------------------------------------------
*/
void
DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
{
int i;
- int j;
int n = 0;
SMgrRelation *rels;
BlockNumber (*block)[MAX_FORKNUM + 1];
uint64 nBlocksToInvalidate = 0;
RelFileLocator *locators;
bool cached = true;
bool use_bsearch;
if (nlocators == 0)
return;
rels = palloc(sizeof(SMgrRelation) * nlocators); /* non-local relations */
/* If it's a local relation, it's localbuf.c's problem. */
for (i = 0; i < nlocators; i++)
{
if (RelFileLocatorBackendIsTemp(smgr_reln[i]->smgr_rlocator))
{
if (smgr_reln[i]->smgr_rlocator.backend == MyBackendId)
DropRelationAllLocalBuffers(smgr_reln[i]->smgr_rlocator.locator);
}
else
@@ -3213,72 +3212,72 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
* If there are no non-local relations, then we're done. Release the
* memory and return.
*/
if (n == 0)
{
pfree(rels);
return;
}
/*
* This is used to remember the number of blocks for all the relations
* forks.
*/
block = (BlockNumber (*)[MAX_FORKNUM + 1])
palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));
/*
* We can avoid scanning the entire buffer pool if we know the exact size
* of each of the given relation forks. See DropRelationBuffers.
*/
for (i = 0; i < n && cached; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* Get the number of blocks for a relation's fork. */
block[i][j] = smgrnblocks_cached(rels[i], j);
/* We need to only consider the relation forks that exists. */
if (block[i][j] == InvalidBlockNumber)
{
if (!smgrexists(rels[i], j))
continue;
cached = false;
break;
}
/* calculate the total number of blocks to be invalidated */
nBlocksToInvalidate += block[i][j];
}
}
/*
* We apply the optimization iff the total number of blocks to invalidate
* is below the BUF_DROP_FULL_SCAN_THRESHOLD.
*/
if (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
{
for (i = 0; i < n; i++)
{
- for (j = 0; j <= MAX_FORKNUM; j++)
+ for (int j = 0; j <= MAX_FORKNUM; j++)
{
/* ignore relation forks that doesn't exist */
if (!BlockNumberIsValid(block[i][j]))
continue;
/* drop all the buffers for a particular relation fork */
FindAndDropRelationBuffers(rels[i]->smgr_rlocator.locator,
j, block[i][j], 0);
}
}
pfree(block);
pfree(rels);
return;
}
pfree(block);
locators = palloc(sizeof(RelFileLocator) * n); /* non-local relations */
for (i = 0; i < n; i++)
locators[i] = rels[i]->smgr_rlocator.locator;
/*
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2c689157329..c0d09edf9d0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11557,45 +11557,44 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
char *proretset;
char *prosrc;
char *probin;
char *prosqlbody;
char *funcargs;
char *funciargs;
char *funcresult;
char *protrftypes;
char *prokind;
char *provolatile;
char *proisstrict;
char *prosecdef;
char *proleakproof;
char *proconfig;
char *procost;
char *prorows;
char *prosupport;
char *proparallel;
char *lanname;
char **configitems = NULL;
int nconfigitems = 0;
const char *keyword;
- int i;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
return;
query = createPQExpBuffer();
q = createPQExpBuffer();
delqry = createPQExpBuffer();
asPart = createPQExpBuffer();
if (!fout->is_prepared[PREPQUERY_DUMPFUNC])
{
/* Set up query for function-specific details */
appendPQExpBufferStr(query,
"PREPARE dumpFunc(pg_catalog.oid) AS\n");
appendPQExpBufferStr(query,
"SELECT\n"
"proretset,\n"
"prosrc,\n"
"probin,\n"
"provolatile,\n"
@@ -11757,49 +11756,48 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
appendPQExpBuffer(q, "CREATE %s %s.%s",
keyword,
fmtId(finfo->dobj.namespace->dobj.name),
funcfullsig ? funcfullsig :
funcsig);
if (prokind[0] == PROKIND_PROCEDURE)
/* no result type to output */ ;
else if (funcresult)
appendPQExpBuffer(q, " RETURNS %s", funcresult);
else
appendPQExpBuffer(q, " RETURNS %s%s",
(proretset[0] == 't') ? "SETOF " : "",
getFormattedTypeName(fout, finfo->prorettype,
zeroIsError));
appendPQExpBuffer(q, "\n LANGUAGE %s", fmtId(lanname));
if (*protrftypes)
{
Oid *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
- int i;
appendPQExpBufferStr(q, " TRANSFORM ");
parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
- for (i = 0; typeids[i]; i++)
+ for (int i = 0; typeids[i]; i++)
{
if (i != 0)
appendPQExpBufferStr(q, ", ");
appendPQExpBuffer(q, "FOR TYPE %s",
getFormattedTypeName(fout, typeids[i], zeroAsNone));
}
}
if (prokind[0] == PROKIND_WINDOW)
appendPQExpBufferStr(q, " WINDOW");
if (provolatile[0] != PROVOLATILE_VOLATILE)
{
if (provolatile[0] == PROVOLATILE_IMMUTABLE)
appendPQExpBufferStr(q, " IMMUTABLE");
else if (provolatile[0] == PROVOLATILE_STABLE)
appendPQExpBufferStr(q, " STABLE");
else if (provolatile[0] != PROVOLATILE_VOLATILE)
pg_fatal("unrecognized provolatile value for function \"%s\"",
finfo->dobj.name);
}
@@ -11834,45 +11832,45 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
}
if (proretset[0] == 't' &&
strcmp(prorows, "0") != 0 && strcmp(prorows, "1000") != 0)
appendPQExpBuffer(q, " ROWS %s", prorows);
if (strcmp(prosupport, "-") != 0)
{
/* We rely on regprocout to provide quoting and qualification */
appendPQExpBuffer(q, " SUPPORT %s", prosupport);
}
if (proparallel[0] != PROPARALLEL_UNSAFE)
{
if (proparallel[0] == PROPARALLEL_SAFE)
appendPQExpBufferStr(q, " PARALLEL SAFE");
else if (proparallel[0] == PROPARALLEL_RESTRICTED)
appendPQExpBufferStr(q, " PARALLEL RESTRICTED");
else if (proparallel[0] != PROPARALLEL_UNSAFE)
pg_fatal("unrecognized proparallel value for function \"%s\"",
finfo->dobj.name);
}
- for (i = 0; i < nconfigitems; i++)
+ for (int i = 0; i < nconfigitems; i++)
{
/* we feel free to scribble on configitems[] here */
char *configitem = configitems[i];
char *pos;
pos = strchr(configitem, '=');
if (pos == NULL)
continue;
*pos++ = '\0';
appendPQExpBuffer(q, "\n SET %s TO ", fmtId(configitem));
/*
* Variables that are marked GUC_LIST_QUOTE were already fully quoted
* by flatten_set_variable_args() before they were put into the
* proconfig array. However, because the quoting rules used there
* aren't exactly like SQL's, we have to break the list value apart
* and then quote the elements as string literals. (The elements may
* be double-quoted as-is, but we can't just feed them to the SQL
* parser; it would do the wrong thing with elements that are
* zero-length or longer than NAMEDATALEN.)
*
* Variables that are not so marked should just be emitted as simple
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index a97b3300cb8..b666c909084 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1043,45 +1043,44 @@ select_div_scale(numeric *var1, numeric *var2, int *rscale)
res_dscale = Max(res_dscale, NUMERIC_MIN_DISPLAY_SCALE);
res_dscale = Min(res_dscale, NUMERIC_MAX_DISPLAY_SCALE);
/* Select result scale */
*rscale = res_dscale + 4;
return res_dscale;
}
int
PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
{
NumericDigit *res_digits;
int res_ndigits;
int res_sign;
int res_weight;
numeric dividend;
numeric divisor[10];
int ndigits_tmp;
int weight_tmp;
int rscale_tmp;
int ri;
- int i;
long guess;
long first_have;
long first_div;
int first_nextdigit;
int stat = 0;
int rscale;
int res_dscale = select_div_scale(var1, var2, &rscale);
int err = -1;
NumericDigit *tmp_buf;
/*
* First of all division by zero check
*/
ndigits_tmp = var2->ndigits + 1;
if (ndigits_tmp == 1)
{
errno = PGTYPES_NUM_DIVIDE_ZERO;
return -1;
}
/*
* Determine the result sign, weight and number of digits to calculate
@@ -1090,45 +1089,45 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
res_sign = NUMERIC_POS;
else
res_sign = NUMERIC_NEG;
res_weight = var1->weight - var2->weight + 1;
res_ndigits = rscale + res_weight;
if (res_ndigits <= 0)
res_ndigits = 1;
/*
* Now result zero check
*/
if (var1->ndigits == 0)
{
zero_var(result);
result->rscale = rscale;
return 0;
}
/*
* Initialize local variables
*/
init_var(&dividend);
- for (i = 1; i < 10; i++)
+ for (int i = 1; i < 10; i++)
init_var(&divisor[i]);
/*
* Make a copy of the divisor which has one leading zero digit
*/
divisor[1].ndigits = ndigits_tmp;
divisor[1].rscale = var2->ndigits;
divisor[1].sign = NUMERIC_POS;
divisor[1].buf = digitbuf_alloc(ndigits_tmp);
if (divisor[1].buf == NULL)
goto done;
divisor[1].digits = divisor[1].buf;
divisor[1].digits[0] = 0;
memcpy(&(divisor[1].digits[1]), var2->digits, ndigits_tmp - 1);
/*
* Make a copy of the dividend
*/
dividend.ndigits = var1->ndigits;
dividend.weight = 0;
dividend.rscale = var1->ndigits;
dividend.sign = NUMERIC_POS;
@@ -1162,53 +1161,52 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
first_have = 0;
first_nextdigit = 0;
weight_tmp = 1;
rscale_tmp = divisor[1].rscale;
for (ri = 0; ri <= res_ndigits; ri++)
{
first_have = first_have * 10;
if (first_nextdigit >= 0 && first_nextdigit < dividend.ndigits)
first_have += dividend.digits[first_nextdigit];
first_nextdigit++;
guess = (first_have * 10) / first_div + 1;
if (guess > 9)
guess = 9;
while (guess > 0)
{
if (divisor[guess].buf == NULL)
{
- int i;
long sum = 0;
memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
divisor[guess].buf = digitbuf_alloc(divisor[guess].ndigits);
if (divisor[guess].buf == NULL)
goto done;
divisor[guess].digits = divisor[guess].buf;
- for (i = divisor[1].ndigits - 1; i >= 0; i--)
+ for (int i = divisor[1].ndigits - 1; i >= 0; i--)
{
sum += divisor[1].digits[i] * guess;
divisor[guess].digits[i] = sum % 10;
sum /= 10;
}
}
divisor[guess].weight = weight_tmp;
divisor[guess].rscale = rscale_tmp;
stat = cmp_abs(&dividend, &divisor[guess]);
if (stat >= 0)
break;
guess--;
}
res_digits[ri + 1] = guess;
if (stat == 0)
{
ri++;
break;
@@ -1249,45 +1247,45 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
while (result->ndigits > 0 && *(result->digits) == 0)
{
(result->digits)++;
(result->weight)--;
(result->ndigits)--;
}
while (result->ndigits > 0 && result->digits[result->ndigits - 1] == 0)
(result->ndigits)--;
if (result->ndigits == 0)
result->sign = NUMERIC_POS;
result->dscale = res_dscale;
err = 0; /* if we've made it this far, return success */
done:
/*
* Tidy up
*/
if (dividend.buf != NULL)
digitbuf_free(dividend.buf);
- for (i = 1; i < 10; i++)
+ for (int i = 1; i < 10; i++)
{
if (divisor[i].buf != NULL)
digitbuf_free(divisor[i].buf);
}
return err;
}
int
PGTYPESnumeric_cmp(numeric *var1, numeric *var2)
{
/* use cmp_abs function to calculate the result */
/* both are positive: normal comparison with cmp_abs */
if (var1->sign == NUMERIC_POS && var2->sign == NUMERIC_POS)
return cmp_abs(var1, var2);
/* both are negative: return the inverse of the normal comparison */
if (var1->sign == NUMERIC_NEG && var2->sign == NUMERIC_NEG)
{
/*
On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzbyj@telsasoft.com> wrote:
Actually, they didn't sneak in - what I sent are the patches which are ready to
be reviewed, excluding the set of "this" and "tmp" and other renames which you
disliked. In the branch (not the squished patch) the first ~15 patches were
mostly for C99 for loops - I presented them this way deliberately, so you could
review and comment on whatever you're able to bite off, or run with whatever
parts you think are ready. I rewrote it now to be more bite sized by
truncating off the 2nd half of the patches.
Thanks for the updated patch.
I've now pushed it after making some small adjustments.
It seems there was one leftover rename still there, I removed that.
The only other changes I made were just to make the patch more
consistent with what it was doing. There were a few cases where you
were doing:
if (typlen == -1) /* varlena */
{
- int i;
-
- for (i = 0; i < nvalues; i++)
+ for (int i = 0; i < nvalues; i++)
That wasn't really required to remove the warning as you'd already
adjusted the scope of the shadowed variable so there was no longer a
collision. The reason I adjusted these was because sometimes you were
doing that, and sometimes you were not. I wanted to be consistent, so
I opted for not doing it as it's not required for this effort. Maybe
one day those can be changed in some other unrelated effort to C99ify
our code.
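To make that concrete, here's a toy example (made up for illustration,
not code from the tree). gcc -Wshadow=compatible-local only complains
while a colliding outer declaration is in scope; once the collision is
gone, the separate declaration and the C99-style loop are equally
warning-free, so converting between them is purely cosmetic:

#include <stdio.h>

int
main(void)
{
    int     total = 0;
    int     n;

    {
        int     total;  /* warns: shadows the outer "total" */

        for (total = 0; total < 3; total++)
            printf("inner %d\n", total);
    }

    /* no colliding outer variable, so neither of these forms warns */
    for (n = 0; n < 3; n++)
        total += n;

    for (int m = 0; m < 3; m++)
        total += m;

    printf("total %d\n", total);
    return 0;
}

In other words, it's removing the collision, not the loop style, that
silences the warning.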
The attached patch is just the portions I didn't commit.
Thanks for working on this.
David
Attachments:
v2_didnt_apply.patchtext/plain; charset=US-ASCII; name=v2_didnt_apply.patchDownload
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 524c1846b8..a581659fe2 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -620,14 +620,18 @@ brin_range_serialize(Ranges *range)
*/
if (typlen == -1) /* varlena */
{
- for (int i = 0; i < nvalues; i++)
+ int i;
+
+ for (i = 0; i < nvalues; i++)
{
len += VARSIZE_ANY(range->values[i]);
}
}
else if (typlen == -2) /* cstring */
{
- for (int i = 0; i < nvalues; i++)
+ int i;
+
+ for (i = 0; i < nvalues; i++)
{
/* don't forget to include the null terminator ;-) */
len += strlen(DatumGetCString(range->values[i])) + 1;
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index f9d40fa1a0..1545ff9f16 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1650,16 +1650,16 @@ interpret_ident_response(const char *ident_response,
return false;
else
{
- int j; /* Index into *ident_user */
+ int i; /* Index into *ident_user */
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
- j = 0;
+ i = 0;
while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
- ident_user[j++] = *cursor++;
- ident_user[j] = '\0';
+ ident_user[i++] = *cursor++;
+ ident_user[i] = '\0';
return true;
}
}
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index 91b9635dc0..6eeacb0d47 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -1861,6 +1861,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
{
/* AND/OR clause, with all subclauses being compatible */
+ int i;
BoolExpr *bool_clause = ((BoolExpr *) clause);
List *bool_clauses = bool_clause->args;
@@ -1879,7 +1880,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* current one. We need to consider if we're evaluating AND or OR
* condition when merging the results.
*/
- for (int i = 0; i < mcvlist->nitems; i++)
+ for (i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, bool_matches[i]);
pfree(bool_matches);
@@ -1888,6 +1889,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
{
/* NOT clause, with all subclauses compatible */
+ int i;
BoolExpr *not_clause = ((BoolExpr *) clause);
List *not_args = not_clause->args;
@@ -1906,7 +1908,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
* current one. We're handling a NOT clause, so invert the result
* before merging it into the global bitmap.
*/
- for (int i = 0; i < mcvlist->nitems; i++)
+ for (i = 0; i < mcvlist->nitems; i++)
matches[i] = RESULT_MERGE(matches[i], is_or, !not_matches[i]);
pfree(not_matches);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c0d09edf9d..ca4ad07004 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -11775,10 +11775,11 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
if (*protrftypes)
{
Oid *typeids = palloc(FUNC_MAX_ARGS * sizeof(Oid));
+ int i;
appendPQExpBufferStr(q, " TRANSFORM ");
parseOidArray(protrftypes, typeids, FUNC_MAX_ARGS);
- for (int i = 0; typeids[i]; i++)
+ for (i = 0; typeids[i]; i++)
{
if (i != 0)
appendPQExpBufferStr(q, ", ");
diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c
index b666c90908..35e7b92da4 100644
--- a/src/interfaces/ecpg/pgtypeslib/numeric.c
+++ b/src/interfaces/ecpg/pgtypeslib/numeric.c
@@ -1180,6 +1180,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
{
if (divisor[guess].buf == NULL)
{
+ int i;
long sum = 0;
memcpy(&divisor[guess], &divisor[1], sizeof(numeric));
@@ -1187,7 +1188,7 @@ PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result)
if (divisor[guess].buf == NULL)
goto done;
divisor[guess].digits = divisor[guess].buf;
- for (int i = divisor[1].ndigits - 1; i >= 0; i--)
+ for (i = divisor[1].ndigits - 1; i >= 0; i--)
{
sum += divisor[1].digits[i] * guess;
divisor[guess].digits[i] = sum % 10;
On Wed, Aug 24, 2022 at 12:37:29PM +1200, David Rowley wrote:
On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzbyj@telsasoft.com> wrote:
Actually, they didn't sneak in - what I sent are the patches which are ready to
be reviewed, excluding the set of "this" and "tmp" and other renames which you
disliked. In the branch (not the squished patch) the first ~15 patches were
mostly for C99 for loops - I presented them this way deliberately, so you could
review and comment on whatever you're able to bite off, or run with whatever
parts you think are ready. I rewrote it now to be more bite sized by
truncating off the 2nd half of the patches.
Thanks for the updated patch.
I've now pushed it after making some small adjustments.
Thanks for handling them.
Attached are half of the remainder of what I've written, ready for review.
I also put it here: https://github.com/justinpryzby/postgres/tree/avoid-shadow-vars
You may or may not find the associated commit messages to be useful.
Let me know if you'd like the individual patches included here, instead.
The first patch removes secondary, "inner" declarations, where that seems
reasonably safe and consistent with existing practice (and probably what the
original authors intended or would have written).
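To show the idea with a self-contained sketch (made-up function and
path, not one of the hunks below): where a block redeclared a variable
that already exists in the enclosing scope, and the outer value is dead
at that point, the inner declaration can simply become an assignment:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void
write_and_sync(int fd, const char *path)
{
    int     save_errno;

    if (write(fd, "x", 1) != 1)
    {
        save_errno = errno;
        close(fd);
        errno = save_errno;
        perror(path);
        exit(1);
    }

    if (fsync(fd) != 0)
    {
        /* was: int save_errno = errno; -- shadowed the declaration above */
        save_errno = errno;
        close(fd);
        errno = save_errno;
        perror(path);
        exit(1);
    }
}

int
main(void)
{
    int     fd = open("/tmp/shadow-demo", O_CREAT | O_WRONLY | O_TRUNC, 0600);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    write_and_sync(fd, "/tmp/shadow-demo");
    close(fd);
    return 0;
}

The xlog.c hunk at the top of the attachment is the real instance of
that shape.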
--
Justin
Attachments:
v3-remove-var-declarations.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..a090cada400 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3017,46 +3017,45 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
}
pgstat_report_wait_end();
if (save_errno)
{
/*
* If we fail to make the file, delete it to release disk space
*/
unlink(tmppath);
close(fd);
errno = save_errno;
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not write to file \"%s\": %m", tmppath)));
}
pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
if (pg_fsync(fd) != 0)
{
- int save_errno = errno;
-
+ save_errno = errno;
close(fd);
errno = save_errno;
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not fsync file \"%s\": %m", tmppath)));
}
pgstat_report_wait_end();
if (close(fd) != 0)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not close file \"%s\": %m", tmppath)));
/*
* Now move the segment into place with its final name. Cope with
* possibility that someone else has created the file while we were
* filling ours: if so, use ours to pre-create a future log segment.
*/
installed_segno = logsegno;
/*
* XXX: What should we use as max_segno? We used to use XLOGfileslop when
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..dacc989d855 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -16777,45 +16777,44 @@ PreCommit_on_commit_actions(void)
oids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);
break;
case ONCOMMIT_DROP:
oids_to_drop = lappend_oid(oids_to_drop, oc->relid);
break;
}
}
/*
* Truncate relations before dropping so that all dependencies between
* relations are removed after they are worked on. Doing it like this
* might be a waste as it is possible that a relation being truncated will
* be dropped anyway due to its parent being dropped, but this makes the
* code more robust because of not having to re-check that the relation
* exists at truncation time.
*/
if (oids_to_truncate != NIL)
heap_truncate(oids_to_truncate);
if (oids_to_drop != NIL)
{
ObjectAddresses *targetObjects = new_object_addresses();
- ListCell *l;
foreach(l, oids_to_drop)
{
ObjectAddress object;
object.classId = RelationRelationId;
object.objectId = lfirst_oid(l);
object.objectSubId = 0;
Assert(!object_address_present(&object, targetObjects));
add_exact_object_address(&object, targetObjects);
}
/*
* Since this is an automatic drop, rather than one directly initiated
* by the user, we pass the PERFORM_DELETION_INTERNAL flag.
*/
performMultipleDeletions(targetObjects, DROP_CASCADE,
PERFORM_DELETION_INTERNAL | PERFORM_DELETION_QUIETLY);
#ifdef USE_ASSERT_CHECKING
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -214,46 +214,44 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
(skip_locked ? VACOPT_SKIP_LOCKED : 0) |
(analyze ? VACOPT_ANALYZE : 0) |
(freeze ? VACOPT_FREEZE : 0) |
(full ? VACOPT_FULL : 0) |
(disable_page_skipping ? VACOPT_DISABLE_PAGE_SKIPPING : 0) |
(process_toast ? VACOPT_PROCESS_TOAST : 0);
/* sanity checks on options */
Assert(params.options & (VACOPT_VACUUM | VACOPT_ANALYZE));
Assert((params.options & VACOPT_VACUUM) ||
!(params.options & (VACOPT_FULL | VACOPT_FREEZE)));
if ((params.options & VACOPT_FULL) && params.nworkers > 0)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("VACUUM FULL cannot be performed in parallel")));
/*
* Make sure VACOPT_ANALYZE is specified if any column lists are present.
*/
if (!(params.options & VACOPT_ANALYZE))
{
- ListCell *lc;
-
foreach(lc, vacstmt->rels)
{
VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
if (vrel->va_cols != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("ANALYZE option must be specified when a column list is provided")));
}
}
/*
* All freeze ages are zero if the FREEZE option is given; otherwise pass
* them as -1 which means to use the default values.
*/
if (params.options & VACOPT_FREEZE)
{
params.freeze_min_age = 0;
params.freeze_table_age = 0;
params.multixact_freeze_min_age = 0;
params.multixact_freeze_table_age = 0;
}
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -749,45 +749,44 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
*/
if (map == NULL)
{
/*
* It's safe to reuse these from the partition root, as we
* only process one tuple at a time (therefore we won't
* overwrite needed data in slots), and the results of
* projections are independent of the underlying storage.
* Projections and where clauses themselves don't store state
* / are independent of the underlying storage.
*/
onconfl->oc_ProjSlot =
rootResultRelInfo->ri_onConflict->oc_ProjSlot;
onconfl->oc_ProjInfo =
rootResultRelInfo->ri_onConflict->oc_ProjInfo;
onconfl->oc_WhereClause =
rootResultRelInfo->ri_onConflict->oc_WhereClause;
}
else
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
/*
* Translate expressions in onConflictSet to account for
* different attribute numbers. For that, map partition
* varattnos twice: first to catch the EXCLUDED
* pseudo-relation (INNER_VAR), and second to handle the main
* target relation (firstVarno).
*/
onconflset = copyObject(node->onConflictSet);
if (part_attmap == NULL)
part_attmap =
build_attrmap_by_name(RelationGetDescr(partrel),
RelationGetDescr(firstResultRel));
onconflset = (List *)
map_variable_attnos((Node *) onconflset,
INNER_VAR, 0,
part_attmap,
RelationGetForm(partrel)->reltype,
&found_whole_row);
/* We ignore the value of found_whole_row. */
onconflset = (List *)
map_variable_attnos((Node *) onconflset,
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..8ba27a98b42 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -342,45 +342,44 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
bpath = create_bitmap_heap_path(root, rel, bitmapqual,
rel->lateral_relids, 1.0, 0);
add_path(rel, (Path *) bpath);
/* create a partial bitmap heap path */
if (rel->consider_parallel && rel->lateral_relids == NULL)
create_partial_bitmap_paths(root, rel, bitmapqual);
}
/*
* Likewise, if we found anything usable, generate BitmapHeapPaths for the
* most promising combinations of join bitmap index paths. Our strategy
* is to generate one such path for each distinct parameterization seen
* among the available bitmap index paths. This may look pretty
* expensive, but usually there won't be very many distinct
* parameterizations. (This logic is quite similar to that in
* consider_index_join_clauses, but we're working with whole paths not
* individual clauses.)
*/
if (bitjoinpaths != NIL)
{
List *all_path_outers;
- ListCell *lc;
/* Identify each distinct parameterization seen in bitjoinpaths */
all_path_outers = NIL;
foreach(lc, bitjoinpaths)
{
Path *path = (Path *) lfirst(lc);
Relids required_outer = PATH_REQ_OUTER(path);
if (!bms_equal_any(required_outer, all_path_outers))
all_path_outers = lappend(all_path_outers, required_outer);
}
/* Now, for each distinct parameterization set ... */
foreach(lc, all_path_outers)
{
Relids max_outers = (Relids) lfirst(lc);
List *this_path_set;
Path *bitmapqual;
Relids required_outer;
double loop_count;
BitmapHeapPath *bpath;
ListCell *lcp;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2383,45 +2383,45 @@ finalize_plan(PlannerInfo *root, Plan *plan,
/* We must run finalize_plan on the subquery */
rel = find_base_rel(root, sscan->scan.scanrelid);
subquery_params = rel->subroot->outer_params;
if (gather_param >= 0)
subquery_params = bms_add_member(bms_copy(subquery_params),
gather_param);
finalize_plan(rel->subroot, sscan->subplan, gather_param,
subquery_params, NULL);
/* Now we can add its extParams to the parent's params */
context.paramids = bms_add_members(context.paramids,
sscan->subplan->extParam);
/* We need scan_params too, though */
context.paramids = bms_add_members(context.paramids,
scan_params);
}
break;
case T_FunctionScan:
{
FunctionScan *fscan = (FunctionScan *) plan;
ListCell *lc;
/*
* Call finalize_primnode independently on each function
* expression, so that we can record which params are
* referenced in each, in order to decide which need
* re-evaluating during rescan.
*/
foreach(lc, fscan->functions)
{
RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
finalize_primnode_context funccontext;
funccontext = context;
funccontext.paramids = NULL;
finalize_primnode(rtfunc->funcexpr, &funccontext);
/* remember results for execution */
rtfunc->funcparams = funccontext.paramids;
/* add the function's params to the overall set */
context.paramids = bms_add_members(context.paramids,
@@ -2491,158 +2491,148 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_NamedTuplestoreScan:
context.paramids = bms_add_members(context.paramids, scan_params);
break;
case T_ForeignScan:
{
ForeignScan *fscan = (ForeignScan *) plan;
finalize_primnode((Node *) fscan->fdw_exprs,
&context);
finalize_primnode((Node *) fscan->fdw_recheck_quals,
&context);
/* We assume fdw_scan_tlist cannot contain Params */
context.paramids = bms_add_members(context.paramids,
scan_params);
}
break;
case T_CustomScan:
{
CustomScan *cscan = (CustomScan *) plan;
ListCell *lc;
finalize_primnode((Node *) cscan->custom_exprs,
&context);
/* We assume custom_scan_tlist cannot contain Params */
context.paramids =
bms_add_members(context.paramids, scan_params);
/* child nodes if any */
foreach(lc, cscan->custom_plans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(lc),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_ModifyTable:
{
ModifyTable *mtplan = (ModifyTable *) plan;
/* Force descendant scan nodes to reference epqParam */
locally_added_param = mtplan->epqParam;
valid_params = bms_add_member(bms_copy(valid_params),
locally_added_param);
scan_params = bms_add_member(bms_copy(scan_params),
locally_added_param);
finalize_primnode((Node *) mtplan->returningLists,
&context);
finalize_primnode((Node *) mtplan->onConflictSet,
&context);
finalize_primnode((Node *) mtplan->onConflictWhere,
&context);
/* exclRelTlist contains only Vars, doesn't need examination */
}
break;
case T_Append:
{
- ListCell *l;
-
foreach(l, ((Append *) plan)->appendplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_MergeAppend:
{
- ListCell *l;
-
foreach(l, ((MergeAppend *) plan)->mergeplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_BitmapAnd:
{
- ListCell *l;
-
foreach(l, ((BitmapAnd *) plan)->bitmapplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_BitmapOr:
{
- ListCell *l;
-
foreach(l, ((BitmapOr *) plan)->bitmapplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_NestLoop:
{
- ListCell *l;
-
finalize_primnode((Node *) ((Join *) plan)->joinqual,
&context);
/* collect set of params that will be passed to right child */
foreach(l, ((NestLoop *) plan)->nestParams)
{
NestLoopParam *nlp = (NestLoopParam *) lfirst(l);
nestloop_params = bms_add_member(nestloop_params,
nlp->paramno);
}
}
break;
case T_MergeJoin:
finalize_primnode((Node *) ((Join *) plan)->joinqual,
&context);
finalize_primnode((Node *) ((MergeJoin *) plan)->mergeclauses,
&context);
break;
case T_HashJoin:
finalize_primnode((Node *) ((Join *) plan)->joinqual,
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..71052c841d7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -634,45 +634,44 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
* For UNION ALL, we just need the Append path. For UNION, need to add
* node(s) to remove duplicates.
*/
if (!op->all)
path = make_union_unique(op, path, tlist, root);
add_path(result_rel, path);
/*
* Estimate number of groups. For now we just assume the output is unique
* --- this is certainly true for the UNION case, and we want worst-case
* estimates anyway.
*/
result_rel->rows = path->rows;
/*
* Now consider doing the same thing using the partial paths plus Append
* plus Gather.
*/
if (partial_paths_valid)
{
Path *ppath;
- ListCell *lc;
int parallel_workers = 0;
/* Find the highest number of workers requested for any subpath. */
foreach(lc, partial_pathlist)
{
Path *path = lfirst(lc);
parallel_workers = Max(parallel_workers, path->parallel_workers);
}
Assert(parallel_workers > 0);
/*
* If the use of parallel append is permitted, always request at least
* log2(# of children) paths. We assume it can be useful to have
* extra workers in this case because they will be spread out across
* the children. The precise formula is just a guess; see
* add_paths_to_append_rel.
*/
if (enable_parallel_append)
{
parallel_workers = Max(parallel_workers,
pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..bf698c1fc3f 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1246,45 +1246,44 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
* first argument, and pseudoconstant is the second one.
*/
if (!is_pseudo_constant_clause(lsecond(expr->args)))
return false;
clause_expr = linitial(expr->args);
/*
* If it's not an "=" operator, just ignore the clause, as it's not
* compatible with functional dependencies. The operator is identified
* simply by looking at which function it uses to estimate
* selectivity. That's a bit strange, but it's what other similar
* places do.
*/
if (get_oprrest(expr->opno) != F_EQSEL)
return false;
/* OK to proceed with checking "var" */
}
else if (is_orclause(clause))
{
BoolExpr *bool_expr = (BoolExpr *) clause;
- ListCell *lc;
/* start with no expression (we'll use the first match) */
*expr = NULL;
foreach(lc, bool_expr->args)
{
Node *or_expr = NULL;
/*
* Had we found incompatible expression in the arguments, treat
* the whole expression as incompatible.
*/
if (!dependency_is_compatible_expression((Node *) lfirst(lc), relid,
statlist, &or_expr))
return false;
if (*expr == NULL)
*expr = or_expr;
/* ensure all the expressions are the same */
if (!equal(or_expr, *expr))
return false;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8964f73b929..3f5683f70b5 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1284,45 +1284,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
idxrelrec = (Form_pg_class) GETSTRUCT(ht_idxrel);
/*
* Fetch the pg_am tuple of the index' access method
*/
ht_am = SearchSysCache1(AMOID, ObjectIdGetDatum(idxrelrec->relam));
if (!HeapTupleIsValid(ht_am))
elog(ERROR, "cache lookup failed for access method %u",
idxrelrec->relam);
amrec = (Form_pg_am) GETSTRUCT(ht_am);
/* Fetch the index AM's API struct */
amroutine = GetIndexAmRoutine(amrec->amhandler);
/*
* Get the index expressions, if any. (NOTE: we do not use the relcache
* versions of the expressions and predicate, because we want to display
* non-const-folded expressions.)
*/
if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indexprs, &isnull);
Assert(!isnull);
exprsString = TextDatumGetCString(exprsDatum);
indexprs = (List *) stringToNode(exprsString);
pfree(exprsString);
}
else
indexprs = NIL;
indexpr_item = list_head(indexprs);
context = deparse_context_for(get_relation_name(indrelid), indrelid);
/*
* Start the index definition. Note that the index's name should never be
* schema-qualified, but the indexed rel's name may be.
*/
initStringInfo(&buf);
@@ -1481,45 +1480,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
*/
if (showTblSpc)
{
Oid tblspc;
tblspc = get_rel_tablespace(indexrelid);
if (OidIsValid(tblspc))
{
if (isConstraint)
appendStringInfoString(&buf, " USING INDEX");
appendStringInfo(&buf, " TABLESPACE %s",
quote_identifier(get_tablespace_name(tblspc)));
}
}
/*
* If it's a partial index, decompile and append the predicate
*/
if (!heap_attisnull(ht_idx, Anum_pg_index_indpred, NULL))
{
Node *node;
Datum predDatum;
- bool isnull;
char *predString;
/* Convert text string to node tree */
predDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indpred, &isnull);
Assert(!isnull);
predString = TextDatumGetCString(predDatum);
node = (Node *) stringToNode(predString);
pfree(predString);
/* Deparse */
str = deparse_expression_pretty(node, context, false, false,
prettyFlags, 0);
if (isConstraint)
appendStringInfo(&buf, " WHERE (%s)", str);
else
appendStringInfo(&buf, " WHERE %s", str);
}
}
/* Clean up */
ReleaseSysCache(ht_idx);
@@ -1629,45 +1627,44 @@ pg_get_statisticsobj_worker(Oid statextid, bool columns_only, bool missing_ok)
statexttup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(statextid));
if (!HeapTupleIsValid(statexttup))
{
if (missing_ok)
return NULL;
elog(ERROR, "cache lookup failed for statistics object %u", statextid);
}
/* has the statistics expressions? */
has_exprs = !heap_attisnull(statexttup, Anum_pg_statistic_ext_stxexprs, NULL);
statextrec = (Form_pg_statistic_ext) GETSTRUCT(statexttup);
/*
* Get the statistics expressions, if any. (NOTE: we do not use the
* relcache versions of the expressions, because we want to display
* non-const-folded expressions.)
*/
if (has_exprs)
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(STATEXTOID, statexttup,
Anum_pg_statistic_ext_stxexprs, &isnull);
Assert(!isnull);
exprsString = TextDatumGetCString(exprsDatum);
exprs = (List *) stringToNode(exprsString);
pfree(exprsString);
}
else
exprs = NIL;
/* count the number of columns (attributes and expressions) */
ncolumns = statextrec->stxkeys.dim1 + list_length(exprs);
initStringInfo(&buf);
if (!columns_only)
{
nsp = get_namespace_name_or_temp(statextrec->stxnamespace);
appendStringInfo(&buf, "CREATE STATISTICS %s",
quote_qualified_identifier(nsp,
@@ -1925,45 +1922,44 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
Assert(form->partrelid == relid);
/* Must get partclass and partcollation the hard way */
datum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partclass, &isnull);
Assert(!isnull);
partclass = (oidvector *) DatumGetPointer(datum);
datum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partcollation, &isnull);
Assert(!isnull);
partcollation = (oidvector *) DatumGetPointer(datum);
/*
* Get the expressions, if any. (NOTE: we do not use the relcache
* versions of the expressions, because we want to display
* non-const-folded expressions.)
*/
if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partexprs, &isnull);
Assert(!isnull);
exprsString = TextDatumGetCString(exprsDatum);
partexprs = (List *) stringToNode(exprsString);
if (!IsA(partexprs, List))
elog(ERROR, "unexpected node type found in partexprs: %d",
(int) nodeTag(partexprs));
pfree(exprsString);
}
else
partexprs = NIL;
partexpr_item = list_head(partexprs);
context = deparse_context_for(get_relation_name(relid), relid);
initStringInfo(&buf);
v3-renames.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1799,57 +1799,57 @@ heap_drop_with_catalog(Oid relid)
rel = relation_open(relid, AccessExclusiveLock);
/*
* There can no longer be anyone *else* touching the relation, but we
* might still have open queries or cursors, or pending trigger events, in
* our own session.
*/
CheckTableNotInUse(rel, "DROP TABLE");
/*
* This effectively deletes all rows in the table, and may be done in a
* serializable transaction. In that case we must record a rw-conflict in
* to this transaction from each transaction holding a predicate lock on
* the table.
*/
CheckTableForSerializableConflictIn(rel);
/*
* Delete pg_foreign_table tuple first.
*/
if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
{
- Relation rel;
- HeapTuple tuple;
+ Relation pg_foreign_table;
+ HeapTuple foreigntuple;
- rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+ pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
- tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
- if (!HeapTupleIsValid(tuple))
+ foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+ if (!HeapTupleIsValid(foreigntuple))
elog(ERROR, "cache lookup failed for foreign table %u", relid);
- CatalogTupleDelete(rel, &tuple->t_self);
+ CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
- ReleaseSysCache(tuple);
- table_close(rel, RowExclusiveLock);
+ ReleaseSysCache(foreigntuple);
+ table_close(pg_foreign_table, RowExclusiveLock);
}
/*
* If a partitioned table, delete the pg_partitioned_table tuple.
*/
if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
RemovePartitionKeyByRelId(relid);
/*
* If the relation being dropped is the default partition itself,
* invalidate its entry in pg_partitioned_table.
*/
if (relid == defaultPartOid)
update_default_partition_oid(parentOid, InvalidOid);
/*
* Schedule unlinking of the relation's physical files at commit.
*/
if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
RelationDropStorage(rel);
/* ensure that stats are dropped if transaction commits */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -87,70 +87,70 @@ parse_publication_options(ParseState *pstate,
{
ListCell *lc;
*publish_given = false;
*publish_via_partition_root_given = false;
/* defaults */
pubactions->pubinsert = true;
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
*publish_via_partition_root = false;
/* Parse options */
foreach(lc, options)
{
DefElem *defel = (DefElem *) lfirst(lc);
if (strcmp(defel->defname, "publish") == 0)
{
char *publish;
List *publish_list;
- ListCell *lc;
+ ListCell *lc2;
if (*publish_given)
errorConflictingDefElem(defel, pstate);
/*
* If publish option was given only the explicitly listed actions
* should be published.
*/
pubactions->pubinsert = false;
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
*publish_given = true;
publish = defGetString(defel);
if (!SplitIdentifierString(publish, ',', &publish_list))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("invalid list syntax for \"publish\" option")));
/* Process the option list. */
- foreach(lc, publish_list)
+ foreach(lc2, publish_list)
{
- char *publish_opt = (char *) lfirst(lc);
+ char *publish_opt = (char *) lfirst(lc2);
if (strcmp(publish_opt, "insert") == 0)
pubactions->pubinsert = true;
else if (strcmp(publish_opt, "update") == 0)
pubactions->pubupdate = true;
else if (strcmp(publish_opt, "delete") == 0)
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("unrecognized \"publish\" value: \"%s\"", publish_opt)));
}
}
else if (strcmp(defel->defname, "publish_via_partition_root") == 0)
{
if (*publish_via_partition_root_given)
errorConflictingDefElem(defel, pstate);
*publish_via_partition_root_given = true;
*publish_via_partition_root = defGetBoolean(defel);
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index dacc989d855..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10204,45 +10204,45 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
foreach(cell, clone)
{
Oid parentConstrOid = lfirst_oid(cell);
Form_pg_constraint constrForm;
Relation pkrel;
HeapTuple tuple;
int numfks;
AttrNumber conkey[INDEX_MAX_KEYS];
AttrNumber mapped_conkey[INDEX_MAX_KEYS];
AttrNumber confkey[INDEX_MAX_KEYS];
Oid conpfeqop[INDEX_MAX_KEYS];
Oid conppeqop[INDEX_MAX_KEYS];
Oid conffeqop[INDEX_MAX_KEYS];
int numfkdelsetcols;
AttrNumber confdelsetcols[INDEX_MAX_KEYS];
Constraint *fkconstraint;
bool attached;
Oid indexOid;
Oid constrOid;
ObjectAddress address,
referenced;
- ListCell *cell;
+ ListCell *lc;
Oid insertTriggerOid,
updateTriggerOid;
tuple = SearchSysCache1(CONSTROID, parentConstrOid);
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for constraint %u",
parentConstrOid);
constrForm = (Form_pg_constraint) GETSTRUCT(tuple);
/* Don't clone constraints whose parents are being cloned */
if (list_member_oid(clone, constrForm->conparentid))
{
ReleaseSysCache(tuple);
continue;
}
/*
* Need to prevent concurrent deletions. If pkrel is a partitioned
* relation, that means to lock all partitions.
*/
pkrel = table_open(constrForm->confrelid, ShareRowExclusiveLock);
if (pkrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
@@ -10257,47 +10257,47 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
/*
* Get the "check" triggers belonging to the constraint to pass as
* parent OIDs for similar triggers that will be created on the
* partition in addFkRecurseReferencing(). They are also passed to
* tryAttachPartitionForeignKey() below to simply assign as parents to
* the partition's existing "check" triggers, that is, if the
* corresponding constraints is deemed attachable to the parent
* constraint.
*/
GetForeignKeyCheckTriggers(trigrel, constrForm->oid,
constrForm->confrelid, constrForm->conrelid,
&insertTriggerOid, &updateTriggerOid);
/*
* Before creating a new constraint, see whether any existing FKs are
* fit for the purpose. If one is, attach the parent constraint to
* it, and don't clone anything. This way we avoid the expensive
* verification step and don't end up with a duplicate FK, and we
* don't need to recurse to partitions for this constraint.
*/
attached = false;
- foreach(cell, partFKs)
+ foreach(lc, partFKs)
{
- ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+ ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
if (tryAttachPartitionForeignKey(fk,
RelationGetRelid(partRel),
parentConstrOid,
numfks,
mapped_conkey,
confkey,
conpfeqop,
insertTriggerOid,
updateTriggerOid,
trigrel))
{
attached = true;
table_close(pkrel, NoLock);
break;
}
}
if (attached)
{
ReleaseSysCache(tuple);
continue;
}
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 62a09fb131b..f1801a160ed 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1130,77 +1130,77 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
}
/*
* If it has a WHEN clause, add dependencies on objects mentioned in the
* expression (eg, functions, as well as any columns used).
*/
if (whenRtable != NIL)
recordDependencyOnExpr(&myself, whenClause, whenRtable,
DEPENDENCY_NORMAL);
/* Post creation hook for new trigger */
InvokeObjectPostCreateHookArg(TriggerRelationId, trigoid, 0,
isInternal);
/*
* Lastly, create the trigger on child relations, if needed.
*/
if (partition_recurse)
{
PartitionDesc partdesc = RelationGetPartitionDesc(rel, true);
List *idxs = NIL;
List *childTbls = NIL;
- ListCell *l;
int i;
MemoryContext oldcxt,
perChildCxt;
perChildCxt = AllocSetContextCreate(CurrentMemoryContext,
"part trig clone",
ALLOCSET_SMALL_SIZES);
/*
* When a trigger is being created associated with an index, we'll
* need to associate the trigger in each child partition with the
* corresponding index on it.
*/
if (OidIsValid(indexOid))
{
ListCell *l;
List *idxs = NIL;
idxs = find_inheritance_children(indexOid, ShareRowExclusiveLock);
foreach(l, idxs)
childTbls = lappend_oid(childTbls,
IndexGetRelation(lfirst_oid(l),
false));
}
oldcxt = MemoryContextSwitchTo(perChildCxt);
/* Iterate to create the trigger on each existing partition */
for (i = 0; i < partdesc->nparts; i++)
{
Oid indexOnChild = InvalidOid;
- ListCell *l2;
+ ListCell *l,
+ *l2;
CreateTrigStmt *childStmt;
Relation childTbl;
Node *qual;
childTbl = table_open(partdesc->oids[i], ShareRowExclusiveLock);
/* Find which of the child indexes is the one on this partition */
if (OidIsValid(indexOid))
{
forboth(l, idxs, l2, childTbls)
{
if (lfirst_oid(l2) == partdesc->oids[i])
{
indexOnChild = lfirst_oid(l);
break;
}
}
if (!OidIsValid(indexOnChild))
elog(ERROR, "failed to find index matching index \"%s\" in partition \"%s\"",
get_rel_name(indexOid),
get_rel_name(partdesc->oids[i]));
}
@@ -1707,47 +1707,47 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
NULL, 1, &key);
while (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
{
Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tuple);
Relation partitionRel;
if (tgform->tgparentid != parentTriggerOid)
continue; /* not our trigger */
partitionRel = table_open(partitionId, NoLock);
/* Rename the trigger on this partition */
renametrig_internal(tgrel, partitionRel, tuple, newname, expected_name);
/* And if this relation is partitioned, recurse to its partitions */
if (partitionRel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
{
PartitionDesc partdesc = RelationGetPartitionDesc(partitionRel,
true);
for (int i = 0; i < partdesc->nparts; i++)
{
- Oid partitionId = partdesc->oids[i];
+ Oid partid = partdesc->oids[i];
- renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+ renametrig_partition(tgrel, partid, tgform->oid, newname,
NameStr(tgform->tgname));
}
}
table_close(partitionRel, NoLock);
/* There should be at most one matching tuple */
break;
}
systable_endscan(tgscan);
}
/*
* EnableDisableTrigger()
*
* Called by ALTER TABLE ENABLE/DISABLE [ REPLICA | ALWAYS ] TRIGGER
* to change 'tgenabled' field for the specified trigger(s)
*
* rel: relation to process (caller must hold suitable lock on it)
* tgname: trigger to process, or NULL to scan all triggers
* fires_when: new value for tgenabled field. In addition to generic
* enablement/disablement, this also defines when the trigger
* should be fired in session replication roles.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 933c3049016..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -3168,45 +3168,44 @@ hashagg_reset_spill_state(AggState *aggstate)
AggState *
ExecInitAgg(Agg *node, EState *estate, int eflags)
{
AggState *aggstate;
AggStatePerAgg peraggs;
AggStatePerTrans pertransstates;
AggStatePerGroup *pergroups;
Plan *outerPlan;
ExprContext *econtext;
TupleDesc scanDesc;
int max_aggno;
int max_transno;
int numaggrefs;
int numaggs;
int numtrans;
int phase;
int phaseidx;
ListCell *l;
Bitmapset *all_grouped_cols = NULL;
int numGroupingSets = 1;
int numPhases;
int numHashes;
- int i = 0;
int j = 0;
bool use_hashing = (node->aggstrategy == AGG_HASHED ||
node->aggstrategy == AGG_MIXED);
/* check for unsupported flags */
Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
/*
* create state structure
*/
aggstate = makeNode(AggState);
aggstate->ss.ps.plan = (Plan *) node;
aggstate->ss.ps.state = estate;
aggstate->ss.ps.ExecProcNode = ExecAgg;
aggstate->aggs = NIL;
aggstate->numaggs = 0;
aggstate->numtrans = 0;
aggstate->aggstrategy = node->aggstrategy;
aggstate->aggsplit = node->aggsplit;
aggstate->maxsets = 0;
aggstate->projected_set = -1;
@@ -3259,45 +3258,45 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
aggstate->numphases = numPhases;
aggstate->aggcontexts = (ExprContext **)
palloc0(sizeof(ExprContext *) * numGroupingSets);
/*
* Create expression contexts. We need three or more, one for
* per-input-tuple processing, one for per-output-tuple processing, one
* for all the hashtables, and one for each grouping set. The per-tuple
* memory context of the per-grouping-set ExprContexts (aggcontexts)
* replaces the standalone memory context formerly used to hold transition
* values. We cheat a little by using ExecAssignExprContext() to build
* all of them.
*
* NOTE: the details of what is stored in aggcontexts and what is stored
* in the regular per-query memory context are driven by a simple
* decision: we want to reset the aggcontext at group boundaries (if not
* hashing) and in ExecReScanAgg to recover no-longer-wanted space.
*/
ExecAssignExprContext(estate, &aggstate->ss.ps);
aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
- for (i = 0; i < numGroupingSets; ++i)
+ for (int i = 0; i < numGroupingSets; ++i)
{
ExecAssignExprContext(estate, &aggstate->ss.ps);
aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
}
if (use_hashing)
aggstate->hashcontext = CreateWorkExprContext(estate);
ExecAssignExprContext(estate, &aggstate->ss.ps);
/*
* Initialize child nodes.
*
* If we are doing a hashed aggregation then the child plan does not need
* to handle REWIND efficiently; see ExecReScanAgg.
*/
if (node->aggstrategy == AGG_HASHED)
eflags &= ~EXEC_FLAG_REWIND;
outerPlan = outerPlan(node);
outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
/*
@@ -3399,75 +3398,76 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
Agg *aggnode;
Sort *sortnode;
if (phaseidx > 0)
{
aggnode = list_nth_node(Agg, node->chain, phaseidx - 1);
sortnode = castNode(Sort, outerPlan(aggnode));
}
else
{
aggnode = node;
sortnode = NULL;
}
Assert(phase <= 1 || sortnode);
if (aggnode->aggstrategy == AGG_HASHED
|| aggnode->aggstrategy == AGG_MIXED)
{
AggStatePerPhase phasedata = &aggstate->phases[0];
AggStatePerHash perhash;
Bitmapset *cols = NULL;
+ int setno = phasedata->numsets++;
Assert(phase == 0);
- i = phasedata->numsets++;
- perhash = &aggstate->perhash[i];
+ perhash = &aggstate->perhash[setno];
/* phase 0 always points to the "real" Agg in the hash case */
phasedata->aggnode = node;
phasedata->aggstrategy = node->aggstrategy;
/* but the actual Agg node representing this hash is saved here */
perhash->aggnode = aggnode;
- phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+ phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
for (j = 0; j < aggnode->numCols; ++j)
cols = bms_add_member(cols, aggnode->grpColIdx[j]);
- phasedata->grouped_cols[i] = cols;
+ phasedata->grouped_cols[setno] = cols;
all_grouped_cols = bms_add_members(all_grouped_cols, cols);
continue;
}
else
{
AggStatePerPhase phasedata = &aggstate->phases[++phase];
int num_sets;
phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
if (num_sets)
{
+ int i;
phasedata->gset_lengths = palloc(num_sets * sizeof(int));
phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
i = 0;
foreach(l, aggnode->groupingSets)
{
int current_length = list_length(lfirst(l));
Bitmapset *cols = NULL;
/* planner forces this to be correct */
for (j = 0; j < current_length; ++j)
cols = bms_add_member(cols, aggnode->grpColIdx[j]);
phasedata->grouped_cols[i] = cols;
phasedata->gset_lengths[i] = current_length;
++i;
}
all_grouped_cols = bms_add_members(all_grouped_cols,
phasedata->grouped_cols[0]);
}
@@ -3515,71 +3515,73 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
/* and for all grouped columns, unless already computed */
if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL)
{
phasedata->eqfunctions[aggnode->numCols - 1] =
execTuplesMatchPrepare(scanDesc,
aggnode->numCols,
aggnode->grpColIdx,
aggnode->grpOperators,
aggnode->grpCollations,
(PlanState *) aggstate);
}
}
phasedata->aggnode = aggnode;
phasedata->aggstrategy = aggnode->aggstrategy;
phasedata->sortnode = sortnode;
}
}
/*
* Convert all_grouped_cols to a descending-order list.
*/
- i = -1;
- while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
- aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ {
+ int i = -1;
+ while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+ aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+ }
/*
* Set up aggregate-result storage in the output expr context, and also
* allocate my private per-agg working storage
*/
econtext = aggstate->ss.ps.ps_ExprContext;
econtext->ecxt_aggvalues = (Datum *) palloc0(sizeof(Datum) * numaggs);
econtext->ecxt_aggnulls = (bool *) palloc0(sizeof(bool) * numaggs);
peraggs = (AggStatePerAgg) palloc0(sizeof(AggStatePerAggData) * numaggs);
pertransstates = (AggStatePerTrans) palloc0(sizeof(AggStatePerTransData) * numtrans);
aggstate->peragg = peraggs;
aggstate->pertrans = pertransstates;
aggstate->all_pergroups =
(AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
* (numGroupingSets + numHashes));
pergroups = aggstate->all_pergroups;
if (node->aggstrategy != AGG_HASHED)
{
- for (i = 0; i < numGroupingSets; i++)
+ for (int i = 0; i < numGroupingSets; i++)
{
pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
* numaggs);
}
aggstate->pergroups = pergroups;
pergroups += numGroupingSets;
}
/*
* Hashing can only appear in the initial phase.
*/
if (use_hashing)
{
Plan *outerplan = outerPlan(node);
uint64 totalGroups = 0;
int i;
aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
"HashAgg meta context",
ALLOCSET_DEFAULT_SIZES);
aggstate->hash_spill_rslot = ExecInitExtraTupleSlot(estate, scanDesc,
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 1545ff9f161..f9d40fa1a0d 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1631,54 +1631,54 @@ interpret_ident_response(const char *ident_response,
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
if (strcmp(response_type, "USERID") != 0)
return false;
else
{
/*
* It's a USERID response. Good. "cursor" should be pointing
* to the colon that precedes the operating system type.
*/
if (*cursor != ':')
return false;
else
{
cursor++; /* Go over colon */
/* Skip over operating system field. */
while (*cursor != ':' && *cursor != '\r')
cursor++;
if (*cursor != ':')
return false;
else
{
- int i; /* Index into *ident_user */
+ int j; /* Index into *ident_user */
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
- i = 0;
+ j = 0;
- while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
+ while (*cursor != '\r' && j < IDENT_USERNAME_MAX)
- ident_user[i++] = *cursor++;
- ident_user[i] = '\0';
+ ident_user[j++] = *cursor++;
+ ident_user[j] = '\0';
return true;
}
}
}
}
}
}
/*
* Talk to the ident server on "remote_addr" and find out who
* owns the tcp connection to "local_addr"
* If the username is successfully retrieved, check the usermap.
*
* XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if the
* latch was set would improve the responsiveness to timeouts/cancellations.
*/
static int
ident_inet(hbaPort *port)
{
const SockAddr remote_addr = port->raddr;
const SockAddr local_addr = port->laddr;
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 75acea149c7..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2526,48 +2526,48 @@ cost_append(AppendPath *apath, PlannerInfo *root)
apath->path.rows = 0;
if (apath->subpaths == NIL)
return;
if (!apath->path.parallel_aware)
{
List *pathkeys = apath->path.pathkeys;
if (pathkeys == NIL)
{
Path *subpath = (Path *) linitial(apath->subpaths);
/*
* For an unordered, non-parallel-aware Append we take the startup
* cost as the startup cost of the first subpath.
*/
apath->path.startup_cost = subpath->startup_cost;
/* Compute rows and costs as sums of subplan rows and costs. */
foreach(l, apath->subpaths)
{
- Path *subpath = (Path *) lfirst(l);
+ Path *sub = (Path *) lfirst(l);
- apath->path.rows += subpath->rows;
- apath->path.total_cost += subpath->total_cost;
+ apath->path.rows += sub->rows;
+ apath->path.total_cost += sub->total_cost;
}
}
else
{
/*
* For an ordered, non-parallel-aware Append we take the startup
* cost as the sum of the subpath startup costs. This ensures
* that we don't underestimate the startup cost when a query's
* LIMIT is such that several of the children have to be run to
* satisfy it. This might be overkill --- another plausible hack
* would be to take the Append's startup cost as the maximum of
* the child startup costs. But we don't want to risk believing
* that an ORDER BY LIMIT query can be satisfied at small cost
* when the first child has small startup cost but later ones
* don't. (If we had the ability to deal with nonlinear cost
* interpolation for partial retrievals, we would not need to be
* so conservative about this.)
*
* This case is also different from the above in that we have to
* account for possibly injecting sorts into subpaths that aren't
* natively ordered.
*/
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -286,48 +286,48 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
{
ListCell *j;
/*
* We must be able to extract a CTID condition from every
* sub-clause of an OR, or we can't use it.
*/
foreach(j, ((BoolExpr *) rinfo->orclause)->args)
{
Node *orarg = (Node *) lfirst(j);
List *sublist;
/* OR arguments should be ANDs or sub-RestrictInfos */
if (is_andclause(orarg))
{
List *andargs = ((BoolExpr *) orarg)->args;
/* Recurse in case there are sub-ORs */
sublist = TidQualFromRestrictInfoList(root, andargs, rel);
}
else
{
- RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+ RestrictInfo *list = castNode(RestrictInfo, orarg);
- Assert(!restriction_is_or_clause(rinfo));
- sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+ Assert(!restriction_is_or_clause(list));
+ sublist = TidQualFromRestrictInfo(root, list, rel);
}
/*
* If nothing found in this arm, we can't do anything with
* this OR clause.
*/
if (sublist == NIL)
{
rlst = NIL; /* forget anything we had */
break; /* out of loop over OR args */
}
/*
* OK, continue constructing implicitly-OR'ed result list.
*/
rlst = list_concat(rlst, sublist);
}
}
else
{
/* Not an OR clause, so handle base cases */
rlst = TidQualFromRestrictInfo(root, rinfo, rel);
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index cf9e0a74dbf..e969f2be3fe 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1975,46 +1975,44 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
* of rollups, and preparing annotations which will later be filled in with
* size estimates.
*/
static grouping_sets_data *
preprocess_grouping_sets(PlannerInfo *root)
{
Query *parse = root->parse;
List *sets;
int maxref = 0;
ListCell *lc;
ListCell *lc_set;
grouping_sets_data *gd = palloc0(sizeof(grouping_sets_data));
parse->groupingSets = expand_grouping_sets(parse->groupingSets, parse->groupDistinct, -1);
gd->any_hashable = false;
gd->unhashable_refs = NULL;
gd->unsortable_refs = NULL;
gd->unsortable_sets = NIL;
if (parse->groupClause)
{
- ListCell *lc;
-
foreach(lc, parse->groupClause)
{
SortGroupClause *gc = lfirst_node(SortGroupClause, lc);
Index ref = gc->tleSortGroupRef;
if (ref > maxref)
maxref = ref;
if (!gc->hashable)
gd->unhashable_refs = bms_add_member(gd->unhashable_refs, ref);
if (!OidIsValid(gc->sortop))
gd->unsortable_refs = bms_add_member(gd->unsortable_refs, ref);
}
}
/* Allocate workspace array for remapping */
gd->tleref_to_colnum_map = (int *) palloc((maxref + 1) * sizeof(int));
/*
* If we have any unsortable sets, we must extract them before trying to
* prepare rollups. Unsortable sets don't go through
@@ -3439,72 +3437,70 @@ get_number_of_groups(PlannerInfo *root,
List *target_list)
{
Query *parse = root->parse;
double dNumGroups;
if (parse->groupClause)
{
List *groupExprs;
if (parse->groupingSets)
{
/* Add up the estimates for each grouping set */
ListCell *lc;
ListCell *lc2;
Assert(gd); /* keep Coverity happy */
dNumGroups = 0;
foreach(lc, gd->rollups)
{
RollupData *rollup = lfirst_node(RollupData, lc);
- ListCell *lc;
+ ListCell *lc3;
groupExprs = get_sortgrouplist_exprs(rollup->groupClause,
target_list);
rollup->numGroups = 0.0;
- forboth(lc, rollup->gsets, lc2, rollup->gsets_data)
+ forboth(lc3, rollup->gsets, lc2, rollup->gsets_data)
{
- List *gset = (List *) lfirst(lc);
+ List *gset = (List *) lfirst(lc3);
GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
double numGroups = estimate_num_groups(root,
groupExprs,
path_rows,
&gset,
NULL);
gs->numGroups = numGroups;
rollup->numGroups += numGroups;
}
dNumGroups += rollup->numGroups;
}
if (gd->hash_sets_idx)
{
- ListCell *lc;
-
gd->dNumHashGroups = 0;
groupExprs = get_sortgrouplist_exprs(parse->groupClause,
target_list);
forboth(lc, gd->hash_sets_idx, lc2, gd->unsortable_sets)
{
List *gset = (List *) lfirst(lc);
GroupingSetData *gs = lfirst_node(GroupingSetData, lc2);
double numGroups = estimate_num_groups(root,
groupExprs,
path_rows,
&gset,
NULL);
gs->numGroups = numGroups;
gd->dNumHashGroups += numGroups;
}
dNumGroups += gd->dNumHashGroups;
}
}
@@ -5015,49 +5011,49 @@ create_ordered_paths(PlannerInfo *root,
path,
path->pathtarget,
root->sort_pathkeys, NULL,
&total_groups);
/* Add projection step if needed */
if (path->pathtarget != target)
path = apply_projection_to_path(root, ordered_rel,
path, target);
add_path(ordered_rel, path);
}
/*
* Consider incremental sort with a gather merge on partial paths.
*
* We can also skip the entire loop when we only have a single-item
* sort_pathkeys because then we can't possibly have a presorted
* prefix of the list without having the list be fully sorted.
*/
if (enable_incremental_sort && list_length(root->sort_pathkeys) > 1)
{
- ListCell *lc;
+ ListCell *lc2;
- foreach(lc, input_rel->partial_pathlist)
+ foreach(lc2, input_rel->partial_pathlist)
{
- Path *input_path = (Path *) lfirst(lc);
+ Path *input_path = (Path *) lfirst(lc2);
Path *sorted_path;
bool is_sorted;
int presorted_keys;
double total_groups;
/*
* We don't care if this is the cheapest partial path - we
* can't simply skip it, because it may be partially sorted in
* which case we want to consider adding incremental sort
* (instead of full sort, which is what happens above).
*/
is_sorted = pathkeys_count_contained_in(root->sort_pathkeys,
input_path->pathkeys,
&presorted_keys);
/* No point in adding incremental sort on fully sorted paths. */
if (is_sorted)
continue;
if (presorted_keys == 0)
continue;
@@ -7588,58 +7584,58 @@ apply_scanjoin_target_to_paths(PlannerInfo *root,
rel->reltarget = llast_node(PathTarget, scanjoin_targets);
/*
* If the relation is partitioned, recursively apply the scan/join target
* to all partitions, and generate brand-new Append paths in which the
* scan/join target is computed below the Append rather than above it.
* Since Append is not projection-capable, that might save a separate
* Result node, and it also is important for partitionwise aggregate.
*/
if (rel_is_partitioned)
{
List *live_children = NIL;
int i;
/* Adjust each partition. */
i = -1;
while ((i = bms_next_member(rel->live_parts, i)) >= 0)
{
RelOptInfo *child_rel = rel->part_rels[i];
AppendRelInfo **appinfos;
int nappinfos;
List *child_scanjoin_targets = NIL;
- ListCell *lc;
+ ListCell *lc2;
Assert(child_rel != NULL);
/* Dummy children can be ignored. */
if (IS_DUMMY_REL(child_rel))
continue;
/* Translate scan/join targets for this child. */
appinfos = find_appinfos_by_relids(root, child_rel->relids,
&nappinfos);
- foreach(lc, scanjoin_targets)
+ foreach(lc2, scanjoin_targets)
{
- PathTarget *target = lfirst_node(PathTarget, lc);
+ PathTarget *target = lfirst_node(PathTarget, lc2);
target = copy_pathtarget(target);
target->exprs = (List *)
adjust_appendrel_attrs(root,
(Node *) target->exprs,
nappinfos, appinfos);
child_scanjoin_targets = lappend(child_scanjoin_targets,
target);
}
pfree(appinfos);
/* Recursion does the real work. */
apply_scanjoin_target_to_paths(root, child_rel,
child_scanjoin_targets,
scanjoin_targets_contain_srfs,
scanjoin_target_parallel_safe,
tlist_same_exprs);
/* Save non-dummy children for Append paths. */
if (!IS_DUMMY_REL(child_rel))
live_children = lappend(live_children, child_rel);
}
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 71052c841d7..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -639,47 +639,47 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
add_path(result_rel, path);
/*
* Estimate number of groups. For now we just assume the output is unique
* --- this is certainly true for the UNION case, and we want worst-case
* estimates anyway.
*/
result_rel->rows = path->rows;
/*
* Now consider doing the same thing using the partial paths plus Append
* plus Gather.
*/
if (partial_paths_valid)
{
Path *ppath;
int parallel_workers = 0;
/* Find the highest number of workers requested for any subpath. */
foreach(lc, partial_pathlist)
{
- Path *path = lfirst(lc);
+ Path *partial_path = lfirst(lc);
- parallel_workers = Max(parallel_workers, path->parallel_workers);
+ parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
}
Assert(parallel_workers > 0);
/*
* If the use of parallel append is permitted, always request at least
* log2(# of children) paths. We assume it can be useful to have
* extra workers in this case because they will be spread out across
* the children. The precise formula is just a guess; see
* add_paths_to_append_rel.
*/
if (enable_parallel_append)
{
parallel_workers = Max(parallel_workers,
pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
parallel_workers = Min(parallel_workers,
max_parallel_workers_per_gather);
}
Assert(parallel_workers > 0);
ppath = (Path *)
create_append_path(root, result_rel, NIL, partial_pathlist,
NIL, NULL,
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -418,93 +418,93 @@ replace_nestloop_param_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv)
* while planning the subquery. So we need not modify the subplan or the
* PlannerParamItems here. What we do need to do is add entries to
* root->curOuterParams to signal the parent nestloop plan node that it must
* provide these values. This differs from replace_nestloop_param_var in
* that the PARAM_EXEC slots to use have already been determined.
*
* Note that we also use root->curOuterRels as an implicit parameter for
* sanity checks.
*/
void
process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
{
ListCell *lc;
foreach(lc, subplan_params)
{
PlannerParamItem *pitem = lfirst_node(PlannerParamItem, lc);
if (IsA(pitem->item, Var))
{
Var *var = (Var *) pitem->item;
NestLoopParam *nlp;
- ListCell *lc;
+ ListCell *lc2;
/* If not from a nestloop outer rel, complain */
if (!bms_is_member(var->varno, root->curOuterRels))
elog(ERROR, "non-LATERAL parameter required by subquery");
/* Is this param already listed in root->curOuterParams? */
- foreach(lc, root->curOuterParams)
+ foreach(lc2, root->curOuterParams)
{
- nlp = (NestLoopParam *) lfirst(lc);
+ nlp = (NestLoopParam *) lfirst(lc2);
if (nlp->paramno == pitem->paramId)
{
Assert(equal(var, nlp->paramval));
/* Present, so nothing to do */
break;
}
}
- if (lc == NULL)
+ if (lc2 == NULL)
{
/* No, so add it */
nlp = makeNode(NestLoopParam);
nlp->paramno = pitem->paramId;
nlp->paramval = copyObject(var);
root->curOuterParams = lappend(root->curOuterParams, nlp);
}
}
else if (IsA(pitem->item, PlaceHolderVar))
{
PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
NestLoopParam *nlp;
- ListCell *lc;
+ ListCell *lc2;
/* If not from a nestloop outer rel, complain */
if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
root->curOuterRels))
elog(ERROR, "non-LATERAL parameter required by subquery");
/* Is this param already listed in root->curOuterParams? */
- foreach(lc, root->curOuterParams)
+ foreach(lc2, root->curOuterParams)
{
- nlp = (NestLoopParam *) lfirst(lc);
+ nlp = (NestLoopParam *) lfirst(lc2);
if (nlp->paramno == pitem->paramId)
{
Assert(equal(phv, nlp->paramval));
/* Present, so nothing to do */
break;
}
}
- if (lc == NULL)
+ if (lc2 == NULL)
{
/* No, so add it */
nlp = makeNode(NestLoopParam);
nlp->paramno = pitem->paramId;
nlp->paramval = (Var *) copyObject(phv);
root->curOuterParams = lappend(root->curOuterParams, nlp);
}
}
else
elog(ERROR, "unexpected type of subquery parameter");
}
}
/*
* Identify any NestLoopParams that should be supplied by a NestLoop plan
* node with the specified lefthand rels. Remove them from the active
* root->curOuterParams list and return them as the result list.
*/
List *
identify_current_nestloop_params(PlannerInfo *root, Relids leftrelids)
{
List *result;
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -520,49 +520,49 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
* likely expecting an un-tweaked function call.
*
* Note: the transformation changes a non-schema-qualified unnest()
* function name into schema-qualified pg_catalog.unnest(). This
* choice is also a bit debatable, but it seems reasonable to force
* use of built-in unnest() when we make this transformation.
*/
if (IsA(fexpr, FuncCall))
{
FuncCall *fc = (FuncCall *) fexpr;
if (list_length(fc->funcname) == 1 &&
strcmp(strVal(linitial(fc->funcname)), "unnest") == 0 &&
list_length(fc->args) > 1 &&
fc->agg_order == NIL &&
fc->agg_filter == NULL &&
fc->over == NULL &&
!fc->agg_star &&
!fc->agg_distinct &&
!fc->func_variadic &&
coldeflist == NIL)
{
- ListCell *lc;
+ ListCell *lc2;
- foreach(lc, fc->args)
+ foreach(lc2, fc->args)
{
- Node *arg = (Node *) lfirst(lc);
+ Node *arg = (Node *) lfirst(lc2);
FuncCall *newfc;
last_srf = pstate->p_last_srf;
newfc = makeFuncCall(SystemFuncName("unnest"),
list_make1(arg),
COERCE_EXPLICIT_CALL,
fc->location);
newfexpr = transformExpr(pstate, (Node *) newfc,
EXPR_KIND_FROM_FUNCTION);
/* nodeFunctionscan.c requires SRFs to be at top level */
if (pstate->p_last_srf != last_srf &&
pstate->p_last_srf != newfexpr)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("set-returning functions must appear at top level of FROM"),
parser_errposition(pstate,
exprLocation(pstate->p_last_srf))));
funcexprs = lappend(funcexprs, newfexpr);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index bf698c1fc3f..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1673,45 +1673,44 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
*
* XXX We have to do this even when there are no expressions in
* clauses, otherwise find_strongest_dependency may fail for stats
* with expressions (due to lookup of negative value in bitmap). So we
* need to at least filter out those dependencies. Maybe we could do
* it in a cheaper way (if there are no expr clauses, we can just
* discard all negative attnums without any lookups).
*/
if (unique_exprs_cnt > 0 || stat->exprs != NIL)
{
int ndeps = 0;
for (i = 0; i < deps->ndeps; i++)
{
bool skip = false;
MVDependency *dep = deps->deps[i];
int j;
for (j = 0; j < dep->nattributes; j++)
{
int idx;
Node *expr;
- int k;
AttrNumber unique_attnum = InvalidAttrNumber;
AttrNumber attnum;
/* undo the per-statistics offset */
attnum = dep->attributes[j];
/*
* For regular attributes we can simply check if it
* matches any clause. If there's no matching clause, we
* can just ignore it. We need to offset the attnum
* though.
*/
if (AttrNumberIsForUserDefinedAttr(attnum))
{
dep->attributes[j] = attnum + attnum_offset;
if (!bms_is_member(dep->attributes[j], clauses_attnums))
{
skip = true;
break;
}
@@ -1721,53 +1720,53 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
/*
* the attnum should be a valid system attnum (-1, -2,
* ...)
*/
Assert(AttributeNumberIsValid(attnum));
/*
* For expressions, we need to do two translations. First
* we have to translate the negative attnum to index in
* the list of expressions (in the statistics object).
* Then we need to see if there's a matching clause. The
* index of the unique expression determines the attnum
* (and we offset it).
*/
idx = -(1 + attnum);
/* Is the expression index is valid? */
Assert((idx >= 0) && (idx < list_length(stat->exprs)));
expr = (Node *) list_nth(stat->exprs, idx);
/* try to find the expression in the unique list */
- for (k = 0; k < unique_exprs_cnt; k++)
+ for (int m = 0; m < unique_exprs_cnt; m++)
{
/*
* found a matching unique expression, use the attnum
* (derived from index of the unique expression)
*/
- if (equal(unique_exprs[k], expr))
+ if (equal(unique_exprs[m], expr))
{
- unique_attnum = -(k + 1) + attnum_offset;
+ unique_attnum = -(m + 1) + attnum_offset;
break;
}
}
/*
* Found no matching expression, so we can simply skip
* this dependency, because there's no chance it will be
* fully covered.
*/
if (unique_attnum == InvalidAttrNumber)
{
skip = true;
break;
}
/* otherwise remap it to the new attnum */
dep->attributes[j] = unique_attnum;
}
/* if found a matching dependency, keep it */
if (!skip)
{
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6b0a8652622..ba9a568389f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1068,44 +1068,61 @@ standard_ProcessUtility(PlannedStmt *pstmt,
ExecSecLabelStmt(stmt);
break;
}
default:
/* All other statement types have event trigger support */
ProcessUtilitySlow(pstate, pstmt, queryString,
context, params, queryEnv,
dest, qc);
break;
}
free_parsestate(pstate);
/*
* Make effects of commands visible, for instance so that
* PreCommit_on_commit_actions() can see them (see for example bug
* #15631).
*/
CommandCounterIncrement();
}
+static ObjectAddress
+TryExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+ ParamListInfo params, QueryCompletion *qc)
+{
+ ObjectAddress address;
+ PG_TRY();
+ {
+ address = ExecRefreshMatView(stmt, queryString, params, qc);
+ }
+ PG_FINALLY();
+ {
+ EventTriggerUndoInhibitCommandCollection();
+ }
+ PG_END_TRY();
+ return address;
+}
+
/*
* The "Slow" variant of ProcessUtility should only receive statements
* supported by the event triggers facility. Therefore, we always
* perform the trigger support calls if the context allows it.
*/
static void
ProcessUtilitySlow(ParseState *pstate,
PlannedStmt *pstmt,
const char *queryString,
ProcessUtilityContext context,
ParamListInfo params,
QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *qc)
{
Node *parsetree = pstmt->utilityStmt;
bool isTopLevel = (context == PROCESS_UTILITY_TOPLEVEL);
bool isCompleteQuery = (context != PROCESS_UTILITY_SUBCOMMAND);
bool needCleanup;
bool commandCollected = false;
ObjectAddress address;
ObjectAddress secondaryObject = InvalidObjectAddress;
@@ -1659,54 +1676,48 @@ ProcessUtilitySlow(ParseState *pstate,
case T_CreateSeqStmt:
address = DefineSequence(pstate, (CreateSeqStmt *) parsetree);
break;
case T_AlterSeqStmt:
address = AlterSequence(pstate, (AlterSeqStmt *) parsetree);
break;
case T_CreateTableAsStmt:
address = ExecCreateTableAs(pstate, (CreateTableAsStmt *) parsetree,
params, queryEnv, qc);
break;
case T_RefreshMatViewStmt:
/*
* REFRESH CONCURRENTLY executes some DDL commands internally.
* Inhibit DDL command collection here to avoid those commands
* from showing up in the deparsed command queue. The refresh
* command itself is queued, which is enough.
*/
EventTriggerInhibitCommandCollection();
- PG_TRY();
- {
- address = ExecRefreshMatView((RefreshMatViewStmt *) parsetree,
- queryString, params, qc);
- }
- PG_FINALLY();
- {
- EventTriggerUndoInhibitCommandCollection();
- }
- PG_END_TRY();
+
+ address = TryExecRefreshMatView((RefreshMatViewStmt *) parsetree,
+ queryString, params, qc);
+
break;
case T_CreateTrigStmt:
address = CreateTrigger((CreateTrigStmt *) parsetree,
queryString, InvalidOid, InvalidOid,
InvalidOid, InvalidOid, InvalidOid,
InvalidOid, NULL, false, false);
break;
case T_CreatePLangStmt:
address = CreateProceduralLanguage((CreatePLangStmt *) parsetree);
break;
case T_CreateDomainStmt:
address = DefineDomain((CreateDomainStmt *) parsetree);
break;
case T_CreateConversionStmt:
address = CreateConversionCommand((CreateConversionStmt *) parsetree);
break;
case T_CreateCastStmt:
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 3026cc24311..2e67a90e516 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -174,54 +174,54 @@ varstr_levenshtein(const char *source, int slen,
* total cost increases by ins_c + del_c for each move right.
*/
int slack_d = max_d - min_theo_d;
int best_column = net_inserts < 0 ? -net_inserts : 0;
stop_column = best_column + (slack_d / (ins_c + del_c)) + 1;
if (stop_column > m)
stop_column = m + 1;
}
}
#endif
/*
* In order to avoid calling pg_mblen() repeatedly on each character in s,
* we cache all the lengths before starting the main loop -- but if all
* the characters in both strings are single byte, then we skip this and
* use a fast-path in the main loop. If only one string contains
* multi-byte characters, we still build the array, so that the fast-path
* needn't deal with the case where the array hasn't been initialized.
*/
if (m != slen || n != tlen)
{
- int i;
+ int k;
const char *cp = source;
s_char_len = (int *) palloc((m + 1) * sizeof(int));
- for (i = 0; i < m; ++i)
+ for (k = 0; k < m; ++k)
{
- s_char_len[i] = pg_mblen(cp);
- cp += s_char_len[i];
+ s_char_len[k] = pg_mblen(cp);
+ cp += s_char_len[k];
}
- s_char_len[i] = 0;
+ s_char_len[k] = 0;
}
/* One more cell for initialization column and row. */
++m;
++n;
/* Previous and current rows of notional array. */
prev = (int *) palloc(2 * m * sizeof(int));
curr = prev + m;
/*
* To transform the first i characters of s into the first 0 characters of
* t, we must perform i deletions.
*/
for (i = START_COLUMN; i < STOP_COLUMN; i++)
prev[i] = i * del_c;
/* Loop through rows of the notional array */
for (y = target, j = 1; j < n; j++)
{
int *temp;
const char *x = source;
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..8d7b6b58c05 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1628,51 +1628,50 @@ plpgsql_dumptree(PLpgSQL_function *func)
{
printf(" DEFAULT ");
dump_expr(var->default_val);
printf("\n");
}
if (var->cursor_explicit_expr != NULL)
{
if (var->cursor_explicit_argrow >= 0)
printf(" CURSOR argument row %d\n", var->cursor_explicit_argrow);
printf(" CURSOR IS ");
dump_expr(var->cursor_explicit_expr);
printf("\n");
}
if (var->promise != PLPGSQL_PROMISE_NONE)
printf(" PROMISE %d\n",
(int) var->promise);
}
break;
case PLPGSQL_DTYPE_ROW:
{
PLpgSQL_row *row = (PLpgSQL_row *) d;
- int i;
printf("ROW %-16s fields", row->refname);
- for (i = 0; i < row->nfields; i++)
+ for (int j = 0; j < row->nfields; j++)
{
- printf(" %s=var %d", row->fieldnames[i],
- row->varnos[i]);
+ printf(" %s=var %d", row->fieldnames[j],
+ row->varnos[j]);
}
printf("\n");
}
break;
case PLPGSQL_DTYPE_REC:
printf("REC %-16s typoid %u\n",
((PLpgSQL_rec *) d)->refname,
((PLpgSQL_rec *) d)->rectypeid);
if (((PLpgSQL_rec *) d)->isconst)
printf(" CONSTANT\n");
if (((PLpgSQL_rec *) d)->notnull)
printf(" NOT NULL\n");
if (((PLpgSQL_rec *) d)->default_val != NULL)
{
printf(" DEFAULT ");
dump_expr(((PLpgSQL_rec *) d)->default_val);
printf("\n");
}
break;
case PLPGSQL_DTYPE_RECFIELD:
printf("RECFIELD %-16s of REC %d\n",
((PLpgSQL_recfield *) d)->fieldname,
On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:
Attached are half of the remainder of what I've written, ready for review.
Thanks for the patches.
I started to do some analysis of the remaining warnings and put them
in the attached spreadsheet. I put each of the remaining warnings into
a category of how I think they should be fixed.
These categories are:
1. "Rescope" (adjust scope of outer variable to move it into a deeper scope)
2. "Rename" (a variable needs to be renamed)
3. "RenameOrScope" (a variable needs renamed or we need to something
more extreme to rescope)
4. "Repurpose" (variables have the same purpose and may as well use
the same variable)
5. "Refactor" (fix the code to make it better)
6. "Remove" (variable is not needed)
There's also:
7. "Bug?" (might be a bug)
8. "?" (I don't know)
I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd done
another way and some you'd done the rescope but just put it in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.
I really think #2s should be done last. I'm not as comfortable with
the renaming and we might want to discuss tactics on that. We could
either opt to rename the shadowed or shadowing variable, or both. If
we rename the shadowing variable, then pending patches or forward
patches could use the wrong variable. If we rename the shadowed
variable then it's not impossible that backpatching could go wrong
where the new code intends to reference the outer variable using the
newly named variable, but when that's backpatched it uses the variable
with the same name in the inner scope. Renaming both would make the
problem more obvious. I'm not sure which is best. The answer may
depend on how many lines the variable is in scope for. If it's just
for a few lines then the hunk context would conflict and the committer
would likely notice the issue when resolving the conflict.
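To make that backpatching hazard concrete, a hypothetical sketch
(plain C, not tree code):

#include <stdio.h>

int
main(void)
{
    /* On master, the shadowed outer variable has been renamed: */
    int     outer_i;                    /* this used to be called "i" */

    for (outer_i = 0; outer_i < 2; outer_i++)
    {
        for (int i = 0; i < 2; i++)
        {
            /* New code on master correctly uses the outer variable: */
            printf("%d %d\n", outer_i, i);
        }
    }

    /*
     * Backpatched to a branch where the outer variable is still called
     * "i", that printf has to be rewritten to use "i" -- which then
     * quietly binds to the inner loop variable instead.
     */
    return 0;
}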
For #3, I just couldn't decide the best fix. Many of these could be
moved into an inner scope, but it would require indenting a large
amount of code, e.g. in a switch() statement's "case:" to allow
variables to be declared within the case.
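Something like this hypothetical sketch, where the brace pair needed
before any declarations can appear forces the whole case body to be
re-indented:

#include <stdio.h>

static void
demo(int kind)
{
    switch (kind)
    {
        case 1:
            {
                /* braces added so "n" can be scoped to this case only */
                int     n = kind * 10;

                printf("%d\n", n);
            }
            break;
        default:
            printf("other\n");
            break;
    }
}

int
main(void)
{
    demo(1);
    return 0;
}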
I think #4 should probably be done next (maybe after #5).
I have some ideas on how to fix the two #5s, so I'm going to go and do that now.
There's only 1 #6. I'm not so sure on that yet. The value being
assigned to the variable is the current time, and I'm not sure if we
can reuse the existing variable or not, as time may have moved on
sufficiently.
I'll study #7 a bit more. My eyes glazed over a bit from doing all
that analysis, so I might be mistaken about that being a bug.
For the #8s: these are the PG_TRY() ones. I see you had a go at fixing
that by moving the nested PG_TRY()s to a helper function. I don't
think that's a good fix. If we were ever to consider making
-Wshadow=compatible-local a standard build flag, then we'd basically be
saying that nested PG_TRYs are not allowed. I don't think that'll fly.
I'd rather find a better way to fix those. I see we can't make use of
##__LINE__ in the variable name since PG_TRY()'s friends use the
variables too and they'd be on a different line. Maybe we could have
an "ident" parameter in the macro that we ##ident onto the variable
names, but that would break existing code.
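Roughly what I have in mind, sketched with stand-in macros (MY_TRY and
friends are hypothetical; the real elog.h macros also save and restore
the exception stack, which this sketch ignores):

#include <setjmp.h>

/*
 * Pasting a caller-supplied identifier onto the jump-buffer name means
 * nested uses can pass different identifiers, so nothing is shadowed.
 * MY_CATCH and MY_END_TRY take the id too, since the real versions
 * also reference the saved variables.
 */
#define MY_TRY(id) \
    do { \
        sigjmp_buf  local_sigjmp_buf_##id; \
        if (sigsetjmp(local_sigjmp_buf_##id, 0) == 0)
#define MY_CATCH(id) \
        else
#define MY_END_TRY(id) \
    } while (0)

extern void do_work(void);      /* hypothetical */

void
nested(void)
{
    MY_TRY(outer)
    {
        MY_TRY(inner)           /* distinct variable name: no shadow */
        {
            do_work();
        }
        MY_CATCH(inner)
        {
        }
        MY_END_TRY(inner);
    }
    MY_CATCH(outer)
    {
    }
    MY_END_TRY(outer);
}

Whether the parameter could somehow be made optional, to avoid
touching every existing PG_TRY() caller, is another question.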
The first patch removes 2ndary, "inner" declarations, where that seems
reasonably safe and consistent with existing practice (and probably what the
original authors intended or would have written).
Would you be able to write a patch for #4? I'll do #5 now. You could
do a draft patch for #2 as well, but I think it should be committed
last, if we decide it's a good move to make. It may be worth having
the discussion about whether we actually want to run
-Wshadow=compatible-local as a standard build flag before we rename
anything.
David
On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:
I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd done
another way and some you'd done the rescope but just put it in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.
This fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor
pg_get_partkeydef_worker().
(Also, I'd mentioned that my fixes for those deliberately re-used the
outer-scope vars, which isn't what you did, and is why I didn't include
them with the inner-scope patch.)
I really think #2s should be done last. I'm not as comfortable with
the renaming and we might want to discuss tactics on that. We could
either opt to rename the shadowed or shadowing variable, or both. If
we rename the shadowing variable, then pending patches or forward
patches could use the wrong variable. If we rename the shadowed
variable then it's not impossible that backpatching could go wrong
where the new code intends to reference the outer variable using the
newly named variable, but when that's backpatched it uses the variable
with the same name in the inner scope. Renaming both would make the
problem more obvious. I'm not sure which is best. The answer may
depend on how many lines the variable is in scope for. If it's just
for a few lines then the hunk context would conflict and the committer
would likely notice the issue when resolving the conflict.
Yes, the hope is to limit the change to variables that are only used a couple
times within a few lines. It's also possible that these will break patches in
development, but that's normal for any change at all.
I'll study #7 a bit more. My eyes glazed over a bit from doing all
that analysis, so I might be mistaken about that being a bug.
I reported this last week.
/messages/by-id/20220819211824.GX26426@telsasoft.com
--
Justin
On Thu, 25 Aug 2022 at 02:00, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:
I was hoping we'd already caught all of the #1s in 421892a19, but I
caught a few of those in some of your other patches. One you'd done
another way and some you'd done the rescope but just put it in the
wrong patch. The others had not been done yet. I just pushed
f959bf9a5 to fix those ones.
This fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor
pg_get_partkeydef_worker().
The latter two can't be fixed in the same way as
pg_get_statisticsobj_worker(), which is why I left them alone. We can
deal with those when getting onto the next category of warnings, which
I believe should be the "Repurpose" category. If you look at the
shadow_analysis spreadsheet then you can see how I've categorised
each. I'm not pretending those are all 100% accurate. Various cases
the choice of category was subjective. My aim here is to fix as many
of the warnings as possible in the safest way possible for the
particular warning. This is why pg_get_statisticsobj_worker() wasn't
fixed in the same pass as pg_get_indexdef_worker() and
pg_get_partkeydef_worker().
David
On Wed, 24 Aug 2022 at 22:47, David Rowley <dgrowleyml@gmail.com> wrote:
5. "Refactor" (fix the code to make it better)
I have some ideas on how to fix the two #5s, so I'm going to go and do that now.
I've attached a patch which I think improves the code in
gistRelocateBuildBuffersOnSplit() so that there's no longer a shadowed
variable. I also benchmarked this method in a tight loop and can
measure no performance change from getting the loop index this way vs
the old way.
This only fixes one of the #5s I mentioned. I ended up scrapping my
idea to fix the shadowed 'i' in get_qual_for_range() as it became too
complex. The idea was to use list_cell_number() to find out how far
we looped in the forboth() loop. It turned out that 'i' was used in
the subsequent loop in "j = i;". The fix just became too complex and I
didn't think it was worth the risk of breaking something just to get
rid of the showed 'i'.
David
Attachments:
shadow_refactor_fixes.patchtext/plain; charset=US-ASCII; name=shadow_refactor_fixes.patchDownload
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index eabf746018..c6c7dfe4c2 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -543,8 +543,7 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
GISTNodeBuffer *nodeBuffer;
BlockNumber blocknum;
IndexTuple itup;
- int splitPagesCount = 0,
- i;
+ int splitPagesCount = 0;
GISTENTRY entry[INDEX_MAX_KEYS];
bool isnull[INDEX_MAX_KEYS];
GISTNodeBuffer oldBuf;
@@ -595,11 +594,11 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
* Fill relocation buffers information for node buffers of pages produced
* by split.
*/
- i = 0;
foreach(lc, splitinfo)
{
GISTPageSplitInfo *si = (GISTPageSplitInfo *) lfirst(lc);
GISTNodeBuffer *newNodeBuffer;
+ int i = foreach_current_index(lc);
/* Decompress parent index tuple of node buffer page. */
gistDeCompressAtt(giststate, r,
@@ -618,8 +617,6 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
relocationBuffersInfos[i].nodeBuffer = newNodeBuffer;
relocationBuffersInfos[i].splitinfo = si;
-
- i++;
}
/*
On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:
On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:
Attached are half of the remainder of what I've written, ready for review.
Thanks for the patches.
4. "Repurpose" (variables have the same purpose and may as well use
the same variable)
Would you be able to write a patch for #4?
The first of the patches that I sent yesterday was all about "repurposed" vars
from outer scope (lc, l, isnull, save_errno), and was 70% of your list of vars
to repurpose.
Here, I've included the rest of your list.
Plus another patch for vars which I'd already written patches to repurpose, but
which aren't classified as "repurpose" on your list.
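To be clear about the pattern, most of these boil down to something
like this (hypothetical example, not from the tree):

#include <stdio.h>

int
main(void)
{
    int     i;

    for (i = 0; i < 2; i++)
        printf("first pass %d\n", i);

    /*
     * "Repurpose": the block below used to declare its own "int i",
     * shadowing the one above; since the first loop is finished with
     * it, we just drop the inner declaration and reuse the variable.
     */
    for (i = 0; i < 3; i++)
        printf("second pass %d\n", i);

    return 0;
}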
For subselect.c, you could remove some more "lc" vars and re-use the "l" var
for consistency (but I suppose you won't want that).
--
Justin
Attachments:
v4-reuse.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 87b243e0d4b..a090cada400 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3017,46 +3017,45 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,
}
pgstat_report_wait_end();
if (save_errno)
{
/*
* If we fail to make the file, delete it to release disk space
*/
unlink(tmppath);
close(fd);
errno = save_errno;
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not write to file \"%s\": %m", tmppath)));
}
pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
if (pg_fsync(fd) != 0)
{
- int save_errno = errno;
-
+ save_errno = errno;
close(fd);
errno = save_errno;
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not fsync file \"%s\": %m", tmppath)));
}
pgstat_report_wait_end();
if (close(fd) != 0)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not close file \"%s\": %m", tmppath)));
/*
* Now move the segment into place with its final name. Cope with
* possibility that someone else has created the file while we were
* filling ours: if so, use ours to pre-create a future log segment.
*/
installed_segno = logsegno;
/*
* XXX: What should we use as max_segno? We used to use XLOGfileslop when
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 9be04c8a1e7..dacc989d855 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -16777,45 +16777,44 @@ PreCommit_on_commit_actions(void)
oids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);
break;
case ONCOMMIT_DROP:
oids_to_drop = lappend_oid(oids_to_drop, oc->relid);
break;
}
}
/*
* Truncate relations before dropping so that all dependencies between
* relations are removed after they are worked on. Doing it like this
* might be a waste as it is possible that a relation being truncated will
* be dropped anyway due to its parent being dropped, but this makes the
* code more robust because of not having to re-check that the relation
* exists at truncation time.
*/
if (oids_to_truncate != NIL)
heap_truncate(oids_to_truncate);
if (oids_to_drop != NIL)
{
ObjectAddresses *targetObjects = new_object_addresses();
- ListCell *l;
foreach(l, oids_to_drop)
{
ObjectAddress object;
object.classId = RelationRelationId;
object.objectId = lfirst_oid(l);
object.objectSubId = 0;
Assert(!object_address_present(&object, targetObjects));
add_exact_object_address(&object, targetObjects);
}
/*
* Since this is an automatic drop, rather than one directly initiated
* by the user, we pass the PERFORM_DELETION_INTERNAL flag.
*/
performMultipleDeletions(targetObjects, DROP_CASCADE,
PERFORM_DELETION_INTERNAL | PERFORM_DELETION_QUIETLY);
#ifdef USE_ASSERT_CHECKING
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index dbdfe8bd2d4..3670d1f1861 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -214,46 +214,44 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
(skip_locked ? VACOPT_SKIP_LOCKED : 0) |
(analyze ? VACOPT_ANALYZE : 0) |
(freeze ? VACOPT_FREEZE : 0) |
(full ? VACOPT_FULL : 0) |
(disable_page_skipping ? VACOPT_DISABLE_PAGE_SKIPPING : 0) |
(process_toast ? VACOPT_PROCESS_TOAST : 0);
/* sanity checks on options */
Assert(params.options & (VACOPT_VACUUM | VACOPT_ANALYZE));
Assert((params.options & VACOPT_VACUUM) ||
!(params.options & (VACOPT_FULL | VACOPT_FREEZE)));
if ((params.options & VACOPT_FULL) && params.nworkers > 0)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("VACUUM FULL cannot be performed in parallel")));
/*
* Make sure VACOPT_ANALYZE is specified if any column lists are present.
*/
if (!(params.options & VACOPT_ANALYZE))
{
- ListCell *lc;
-
foreach(lc, vacstmt->rels)
{
VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);
if (vrel->va_cols != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("ANALYZE option must be specified when a column list is provided")));
}
}
/*
* All freeze ages are zero if the FREEZE option is given; otherwise pass
* them as -1 which means to use the default values.
*/
if (params.options & VACOPT_FREEZE)
{
params.freeze_min_age = 0;
params.freeze_table_age = 0;
params.multixact_freeze_min_age = 0;
params.multixact_freeze_table_age = 0;
}
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index ac03271882f..901dd435efd 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -749,45 +749,44 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
*/
if (map == NULL)
{
/*
* It's safe to reuse these from the partition root, as we
* only process one tuple at a time (therefore we won't
* overwrite needed data in slots), and the results of
* projections are independent of the underlying storage.
* Projections and where clauses themselves don't store state
* / are independent of the underlying storage.
*/
onconfl->oc_ProjSlot =
rootResultRelInfo->ri_onConflict->oc_ProjSlot;
onconfl->oc_ProjInfo =
rootResultRelInfo->ri_onConflict->oc_ProjInfo;
onconfl->oc_WhereClause =
rootResultRelInfo->ri_onConflict->oc_WhereClause;
}
else
{
List *onconflset;
List *onconflcols;
- bool found_whole_row;
/*
* Translate expressions in onConflictSet to account for
* different attribute numbers. For that, map partition
* varattnos twice: first to catch the EXCLUDED
* pseudo-relation (INNER_VAR), and second to handle the main
* target relation (firstVarno).
*/
onconflset = copyObject(node->onConflictSet);
if (part_attmap == NULL)
part_attmap =
build_attrmap_by_name(RelationGetDescr(partrel),
RelationGetDescr(firstResultRel));
onconflset = (List *)
map_variable_attnos((Node *) onconflset,
INNER_VAR, 0,
part_attmap,
RelationGetForm(partrel)->reltype,
&found_whole_row);
/* We ignore the value of found_whole_row. */
onconflset = (List *)
map_variable_attnos((Node *) onconflset,
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index 4b104c4d98a..8b0858e9f5f 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -2043,50 +2043,51 @@ update_grouptailpos(WindowAggState *winstate)
static TupleTableSlot *
ExecWindowAgg(PlanState *pstate)
{
WindowAggState *winstate = castNode(WindowAggState, pstate);
TupleTableSlot *slot;
ExprContext *econtext;
int i;
int numfuncs;
CHECK_FOR_INTERRUPTS();
if (winstate->status == WINDOWAGG_DONE)
return NULL;
/*
* Compute frame offset values, if any, during first call (or after a
* rescan). These are assumed to hold constant throughout the scan; if
* user gives us a volatile expression, we'll only use its initial value.
*/
if (winstate->all_first)
{
int frameOptions = winstate->frameOptions;
- ExprContext *econtext = winstate->ss.ps.ps_ExprContext;
Datum value;
bool isnull;
int16 len;
bool byval;
+ econtext = winstate->ss.ps.ps_ExprContext;
+
if (frameOptions & FRAMEOPTION_START_OFFSET)
{
Assert(winstate->startOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->startOffset,
econtext,
&isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
errmsg("frame starting offset must not be null")));
/* copy value into query-lifespan context */
get_typlenbyval(exprType((Node *) winstate->startOffset->expr),
&len, &byval);
winstate->startOffsetValue = datumCopy(value, byval, len);
if (frameOptions & (FRAMEOPTION_ROWS | FRAMEOPTION_GROUPS))
{
/* value is known to be int8 */
int64 offset = DatumGetInt64(value);
if (offset < 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE),
diff --git a/src/backend/lib/integerset.c b/src/backend/lib/integerset.c
index 5aff292c287..41d3abdb09c 100644
--- a/src/backend/lib/integerset.c
+++ b/src/backend/lib/integerset.c
@@ -546,46 +546,44 @@ intset_update_upper(IntegerSet *intset, int level, intset_node *child,
intset_update_upper(intset, level + 1, (intset_node *) parent, child_key);
}
}
/*
* Does the set contain the given value?
*/
bool
intset_is_member(IntegerSet *intset, uint64 x)
{
intset_node *node;
intset_leaf_node *leaf;
int level;
int itemno;
leaf_item *item;
/*
* The value might be in the buffer of newly-added values.
*/
if (intset->num_buffered_values > 0 && x >= intset->buffered_values[0])
{
- int itemno;
-
itemno = intset_binsrch_uint64(x,
intset->buffered_values,
intset->num_buffered_values,
false);
if (itemno >= intset->num_buffered_values)
return false;
else
return (intset->buffered_values[itemno] == x);
}
/*
* Start from the root, and walk down the B-tree to find the right leaf
* node.
*/
if (!intset->root)
return false;
node = intset->root;
for (level = intset->num_levels - 1; level > 0; level--)
{
intset_internal_node *n = (intset_internal_node *) node;
Assert(node->level == level);
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2e7330f7bc6..10cd19e6cd9 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1633,46 +1633,44 @@ interpret_ident_response(const char *ident_response,
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
if (strcmp(response_type, "USERID") != 0)
return false;
else
{
/*
* It's a USERID response. Good. "cursor" should be pointing
* to the colon that precedes the operating system type.
*/
if (*cursor != ':')
return false;
else
{
cursor++; /* Go over colon */
/* Skip over operating system field. */
while (*cursor != ':' && *cursor != '\r')
cursor++;
if (*cursor != ':')
return false;
else
{
- int i; /* Index into *ident_user */
-
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
i = 0;
while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
ident_user[i++] = *cursor++;
ident_user[i] = '\0';
return true;
}
}
}
}
}
}
/*
* Talk to the ident server on "remote_addr" and find out who
* owns the tcp connection to "local_addr"
* If the username is successfully retrieved, check the usermap.
*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d929ce34171..df86d18a604 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -4757,45 +4757,45 @@ create_final_distinct_paths(PlannerInfo *root, RelOptInfo *input_rel,
* First, if we have any adequately-presorted paths, just stick a
* Unique node on those. Then consider doing an explicit sort of the
* cheapest input path and Unique'ing that.
*
* When we have DISTINCT ON, we must sort by the more rigorous of
* DISTINCT and ORDER BY, else it won't have the desired behavior.
* Also, if we do have to do an explicit sort, we might as well use
* the more rigorous ordering to avoid a second sort later. (Note
* that the parser will have ensured that one clause is a prefix of
* the other.)
*/
List *needed_pathkeys;
if (parse->hasDistinctOn &&
list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
needed_pathkeys = root->sort_pathkeys;
else
needed_pathkeys = root->distinct_pathkeys;
foreach(lc, input_rel->pathlist)
{
- Path *path = (Path *) lfirst(lc);
+ path = (Path *) lfirst(lc);
if (pathkeys_contained_in(needed_pathkeys, path->pathkeys))
{
add_path(distinct_rel, (Path *)
create_upper_unique_path(root, distinct_rel,
path,
list_length(root->distinct_pathkeys),
numDistinctRows));
}
}
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
{
needed_pathkeys = root->sort_pathkeys;
/* Assert checks that parser didn't mess up... */
Assert(pathkeys_contained_in(root->distinct_pathkeys,
needed_pathkeys));
}
else
needed_pathkeys = root->distinct_pathkeys;
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 043181b586b..71052c841d7 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -634,45 +634,44 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
* For UNION ALL, we just need the Append path. For UNION, need to add
* node(s) to remove duplicates.
*/
if (!op->all)
path = make_union_unique(op, path, tlist, root);
add_path(result_rel, path);
/*
* Estimate number of groups. For now we just assume the output is unique
* --- this is certainly true for the UNION case, and we want worst-case
* estimates anyway.
*/
result_rel->rows = path->rows;
/*
* Now consider doing the same thing using the partial paths plus Append
* plus Gather.
*/
if (partial_paths_valid)
{
Path *ppath;
- ListCell *lc;
int parallel_workers = 0;
/* Find the highest number of workers requested for any subpath. */
foreach(lc, partial_pathlist)
{
Path *path = lfirst(lc);
parallel_workers = Max(parallel_workers, path->parallel_workers);
}
Assert(parallel_workers > 0);
/*
* If the use of parallel append is permitted, always request at least
* log2(# of children) paths. We assume it can be useful to have
* extra workers in this case because they will be spread out across
* the children. The precise formula is just a guess; see
* add_paths_to_append_rel.
*/
if (enable_parallel_append)
{
parallel_workers = Max(parallel_workers,
pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index c1c27e67d47..bf698c1fc3f 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1246,45 +1246,44 @@ dependency_is_compatible_expression(Node *clause, Index relid, List *statlist, N
* first argument, and pseudoconstant is the second one.
*/
if (!is_pseudo_constant_clause(lsecond(expr->args)))
return false;
clause_expr = linitial(expr->args);
/*
* If it's not an "=" operator, just ignore the clause, as it's not
* compatible with functional dependencies. The operator is identified
* simply by looking at which function it uses to estimate
* selectivity. That's a bit strange, but it's what other similar
* places do.
*/
if (get_oprrest(expr->opno) != F_EQSEL)
return false;
/* OK to proceed with checking "var" */
}
else if (is_orclause(clause))
{
BoolExpr *bool_expr = (BoolExpr *) clause;
- ListCell *lc;
/* start with no expression (we'll use the first match) */
*expr = NULL;
foreach(lc, bool_expr->args)
{
Node *or_expr = NULL;
/*
* Had we found incompatible expression in the arguments, treat
* the whole expression as incompatible.
*/
if (!dependency_is_compatible_expression((Node *) lfirst(lc), relid,
statlist, &or_expr))
return false;
if (*expr == NULL)
*expr = or_expr;
/* ensure all the expressions are the same */
if (!equal(or_expr, *expr))
return false;
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index c0e907d4373..aad79493e86 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -3784,46 +3784,44 @@ advanceConnectionState(TState *thread, CState *st, StatsData *agg)
st->estatus = ESTATUS_META_COMMAND_ERROR;
}
/*
* We're now waiting for an SQL command to complete, or
* finished processing a metacommand, or need to sleep, or
* something bad happened.
*/
Assert(st->state == CSTATE_WAIT_RESULT ||
st->state == CSTATE_END_COMMAND ||
st->state == CSTATE_SLEEP ||
st->state == CSTATE_ABORTED);
break;
/*
* non executed conditional branch
*/
case CSTATE_SKIP_COMMAND:
Assert(!conditional_active(st->cstack));
/* quickly skip commands until something to do... */
while (true)
{
- Command *command;
-
command = sql_script[st->use_file].commands[st->command];
/* cannot reach end of script in that state */
Assert(command != NULL);
/*
* if this is conditional related, update conditional
* state
*/
if (command->type == META_COMMAND &&
(command->meta == META_IF ||
command->meta == META_ELIF ||
command->meta == META_ELSE ||
command->meta == META_ENDIF))
{
switch (conditional_stack_peek(st->cstack))
{
case IFSTATE_FALSE:
if (command->meta == META_IF ||
command->meta == META_ELIF)
{
/* we must evaluate the condition */
@@ -3940,46 +3938,44 @@ advanceConnectionState(TState *thread, CState *st, StatsData *agg)
* instead of CSTATE_START_TX.
*/
case CSTATE_SLEEP:
pg_time_now_lazy(&now);
if (now < st->sleep_until)
return; /* still sleeping, nothing to do here */
/* Else done sleeping. */
st->state = CSTATE_END_COMMAND;
break;
/*
* End of command: record stats and proceed to next command.
*/
case CSTATE_END_COMMAND:
/*
* command completed: accumulate per-command execution times
* in thread-local data structure, if per-command latencies
* are requested.
*/
if (report_per_command)
{
- Command *command;
-
pg_time_now_lazy(&now);
command = sql_script[st->use_file].commands[st->command];
/* XXX could use a mutex here, but we choose not to */
addToSimpleStats(&command->stats,
PG_TIME_GET_DOUBLE(now - st->stmt_begin));
}
/* Go ahead with next command, to be executed or skipped */
st->command++;
st->state = conditional_active(st->cstack) ?
CSTATE_START_COMMAND : CSTATE_SKIP_COMMAND;
break;
/*
* Clean up after an error.
*/
case CSTATE_ERROR:
{
TStatus tstatus;
Assert(st->estatus != ESTATUS_NO_ERROR);
v4-reuse-more.txttext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index eabf7460182..77677150aff 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -615,46 +615,45 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
* empty.
*/
newNodeBuffer = gistGetNodeBuffer(gfbb, giststate, BufferGetBlockNumber(si->buf), level);
relocationBuffersInfos[i].nodeBuffer = newNodeBuffer;
relocationBuffersInfos[i].splitinfo = si;
i++;
}
/*
* Loop through all index tuples in the buffer of the page being split,
* moving them to buffers for the new pages. We try to move each tuple to
* the page that will result in the lowest penalty for the leading column
* or, in the case of a tie, the lowest penalty for the earliest column
* that is not tied.
*
* The page searching logic is very similar to gistchoose().
*/
while (gistPopItupFromNodeBuffer(gfbb, &oldBuf, &itup))
{
float best_penalty[INDEX_MAX_KEYS];
- int i,
- which;
+ int which;
IndexTuple newtup;
RelocationBufferInfo *targetBufferInfo;
gistDeCompressAtt(giststate, r,
itup, NULL, (OffsetNumber) 0, entry, isnull);
/* default to using first page (shouldn't matter) */
which = 0;
/*
* best_penalty[j] is the best penalty we have seen so far for column
* j, or -1 when we haven't yet examined column j. Array entries to
* the right of the first -1 are undefined.
*/
best_penalty[0] = -1;
/*
* Loop over possible target pages, looking for one to move this tuple
* to.
*/
for (i = 0; i < splitPagesCount; i++)
{
diff --git a/src/backend/access/hash/hash_xlog.c b/src/backend/access/hash/hash_xlog.c
index 2e68303cbfd..e88213c7425 100644
--- a/src/backend/access/hash/hash_xlog.c
+++ b/src/backend/access/hash/hash_xlog.c
@@ -221,45 +221,44 @@ hash_xlog_add_ovfl_page(XLogReaderState *record)
PageSetLSN(leftpage, lsn);
MarkBufferDirty(leftbuf);
}
if (BufferIsValid(leftbuf))
UnlockReleaseBuffer(leftbuf);
UnlockReleaseBuffer(ovflbuf);
/*
* Note: in normal operation, we'd update the bitmap and meta page while
* still holding lock on the overflow pages. But during replay it's not
* necessary to hold those locks, since no other index updates can be
* happening concurrently.
*/
if (XLogRecHasBlockRef(record, 2))
{
Buffer mapbuffer;
if (XLogReadBufferForRedo(record, 2, &mapbuffer) == BLK_NEEDS_REDO)
{
Page mappage = (Page) BufferGetPage(mapbuffer);
uint32 *freep = NULL;
- char *data;
uint32 *bitmap_page_bit;
freep = HashPageGetBitmap(mappage);
data = XLogRecGetBlockData(record, 2, &datalen);
bitmap_page_bit = (uint32 *) data;
SETBIT(freep, *bitmap_page_bit);
PageSetLSN(mappage, lsn);
MarkBufferDirty(mapbuffer);
}
if (BufferIsValid(mapbuffer))
UnlockReleaseBuffer(mapbuffer);
}
if (XLogRecHasBlockRef(record, 3))
{
Buffer newmapbuf;
newmapbuf = XLogInitBufferForRedo(record, 3);
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index aab8d6fa4e5..3133d1e0585 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6256,45 +6256,45 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
return multi;
}
/*
* Do a more thorough second pass over the multi to figure out which
* member XIDs actually need to be kept. Checking the precise status of
* individual members might even show that we don't need to keep anything.
*/
nnewmembers = 0;
newmembers = palloc(sizeof(MultiXactMember) * nmembers);
has_lockers = false;
update_xid = InvalidTransactionId;
update_committed = false;
temp_xid_out = *mxid_oldest_xid_out; /* init for FRM_RETURN_IS_MULTI */
for (i = 0; i < nmembers; i++)
{
/*
* Determine whether to keep this member or ignore it.
*/
if (ISUPDATE_from_mxstatus(members[i].status))
{
- TransactionId xid = members[i].xid;
+ xid = members[i].xid;
Assert(TransactionIdIsValid(xid));
if (TransactionIdPrecedes(xid, relfrozenxid))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg_internal("found update xid %u from before relfrozenxid %u",
xid, relfrozenxid)));
/*
* It's an update; should we keep it? If the transaction is known
* aborted or crashed then it's okay to ignore it, otherwise not.
* Note that an updater older than cutoff_xid cannot possibly be
* committed, because HeapTupleSatisfiesVacuum would have returned
* HEAPTUPLE_DEAD and we would not be trying to freeze the tuple.
*
* As with all tuple visibility routines, it's critical to test
* TransactionIdIsInProgress before TransactionIdDidCommit,
* because of race conditions explained in detail in
* heapam_visibility.c.
*/
if (TransactionIdIsCurrentTransactionId(xid) ||
TransactionIdIsInProgress(xid))
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index 8f7d12950e5..ec57f56adf3 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -1595,45 +1595,44 @@ mXactCachePut(MultiXactId multi, int nmembers, MultiXactMember *members)
debug_elog2(DEBUG2, "CachePut: initializing memory context");
MXactContext = AllocSetContextCreate(TopTransactionContext,
"MultiXact cache context",
ALLOCSET_SMALL_SIZES);
}
entry = (mXactCacheEnt *)
MemoryContextAlloc(MXactContext,
offsetof(mXactCacheEnt, members) +
nmembers * sizeof(MultiXactMember));
entry->multi = multi;
entry->nmembers = nmembers;
memcpy(entry->members, members, nmembers * sizeof(MultiXactMember));
/* mXactCacheGetBySet assumes the entries are sorted, so sort them */
qsort(entry->members, nmembers, sizeof(MultiXactMember), mxactMemberComparator);
dlist_push_head(&MXactCache, &entry->node);
if (MXactCacheMembers++ >= MAX_CACHE_ENTRIES)
{
dlist_node *node;
- mXactCacheEnt *entry;
node = dlist_tail_node(&MXactCache);
dlist_delete(node);
MXactCacheMembers--;
entry = dlist_container(mXactCacheEnt, node, node);
debug_elog3(DEBUG2, "CachePut: pruning cached multi %u",
entry->multi);
pfree(entry);
}
}
static char *
mxstatus_to_string(MultiXactStatus status)
{
switch (status)
{
case MultiXactStatusForKeyShare:
return "keysh";
case MultiXactStatusForShare:
return "sh";
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index a090cada400..537845cada7 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -4701,45 +4701,44 @@ XLogInitNewTimeline(TimeLineID endTLI, XLogRecPtr endOfLog, TimeLineID newTLI)
/*
* Make a copy of the file on the new timeline.
*
* Writing WAL isn't allowed yet, so there are no locking
* considerations. But we should be just as tense as XLogFileInit to
* avoid emplacing a bogus file.
*/
XLogFileCopy(newTLI, endLogSegNo, endTLI, endLogSegNo,
XLogSegmentOffset(endOfLog, wal_segment_size));
}
else
{
/*
* The switch happened at a segment boundary, so just create the next
* segment on the new timeline.
*/
int fd;
fd = XLogFileInit(startLogSegNo, newTLI);
if (close(fd) != 0)
{
- char xlogfname[MAXFNAMELEN];
int save_errno = errno;
XLogFileName(xlogfname, newTLI, startLogSegNo, wal_segment_size);
errno = save_errno;
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not close file \"%s\": %m", xlogfname)));
}
}
/*
* Let's just make real sure there are not .ready or .done flags posted
* for the new segment.
*/
XLogFileName(xlogfname, newTLI, startLogSegNo, wal_segment_size);
XLogArchiveCleanup(xlogfname);
}
/*
* Perform cleanup actions at the conclusion of archive recovery.
*/
static void
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index e7e37146f69..e6fcfc23b93 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -102,45 +102,44 @@ compute_return_type(TypeName *returnType, Oid languageOid,
if (typtup)
{
if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
{
if (languageOid == SQLlanguageId)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("SQL function cannot return shell type %s",
TypeNameToString(returnType))));
else
ereport(NOTICE,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("return type %s is only a shell",
TypeNameToString(returnType))));
}
rettype = typeTypeId(typtup);
ReleaseSysCache(typtup);
}
else
{
char *typnam = TypeNameToString(returnType);
Oid namespaceId;
- AclResult aclresult;
char *typname;
ObjectAddress address;
/*
* Only C-coded functions can be I/O functions. We enforce this
* restriction here mainly to prevent littering the catalogs with
* shell types due to simple typos in user-defined function
* definitions.
*/
if (languageOid != INTERNALlanguageId &&
languageOid != ClanguageId)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("type \"%s\" does not exist", typnam)));
/* Reject if there's typmod decoration, too */
if (returnType->typmods != NIL)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("type modifier cannot be specified for shell type \"%s\"",
typnam)));
@@ -1093,46 +1092,44 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
language = "sql";
else
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("no language specified")));
}
/* Look up the language and validate permissions */
languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
if (!HeapTupleIsValid(languageTuple))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("language \"%s\" does not exist", language),
(extension_file_exists(language) ?
errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
languageOid = languageStruct->oid;
if (languageStruct->lanpltrusted)
{
/* if trusted language, need USAGE privilege */
- AclResult aclresult;
-
aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
else
{
/* if untrusted language, must be superuser */
if (!superuser())
aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
languageValidator = languageStruct->lanvalidator;
ReleaseSysCache(languageTuple);
/*
* Only superuser is allowed to create leakproof functions because
* leakproof functions can see tuples which have not yet been filtered out
* by security barrier views or row-level security policies.
*/
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 29bc26669b0..a250a33f8cb 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2465,45 +2465,44 @@ _SPI_execute_plan(SPIPlanPtr plan, const SPIExecuteOptions *options,
* there be only one query.
*/
if (options->must_return_tuples && plan->plancache_list == NIL)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("empty query does not return tuples")));
foreach(lc1, plan->plancache_list)
{
CachedPlanSource *plansource = (CachedPlanSource *) lfirst(lc1);
List *stmt_list;
ListCell *lc2;
spicallbackarg.query = plansource->query_string;
/*
* If this is a one-shot plan, we still need to do parse analysis.
*/
if (plan->oneshot)
{
RawStmt *parsetree = plansource->raw_parse_tree;
const char *src = plansource->query_string;
- List *stmt_list;
/*
* Parameter datatypes are driven by parserSetup hook if provided,
* otherwise we use the fixed parameter list.
*/
if (parsetree == NULL)
stmt_list = NIL;
else if (plan->parserSetup != NULL)
{
Assert(plan->nargs == 0);
stmt_list = pg_analyze_and_rewrite_withcb(parsetree,
src,
plan->parserSetup,
plan->parserSetupArg,
_SPI_current->queryEnv);
}
else
{
stmt_list = pg_analyze_and_rewrite_fixedparams(parsetree,
src,
plan->argtypes,
plan->nargs,
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 7d176e7b00a..0557e945ca7 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -2169,45 +2169,45 @@ match_clause_to_index(PlannerInfo *root,
* but what if someone builds an expression index on a constant? It's not
* totally unreasonable to do so with a partial index, either.)
*/
if (rinfo->pseudoconstant)
return;
/*
* If clause can't be used as an indexqual because it must wait till after
* some lower-security-level restriction clause, reject it.
*/
if (!restriction_is_securely_promotable(rinfo, index->rel))
return;
/* OK, check each index key column for a match */
for (indexcol = 0; indexcol < index->nkeycolumns; indexcol++)
{
IndexClause *iclause;
ListCell *lc;
/* Ignore duplicates */
foreach(lc, clauseset->indexclauses[indexcol])
{
- IndexClause *iclause = (IndexClause *) lfirst(lc);
+ iclause = (IndexClause *) lfirst(lc);
if (iclause->rinfo == rinfo)
return;
}
/* OK, try to match the clause to the index column */
iclause = match_clause_to_indexcol(root,
rinfo,
indexcol,
index);
if (iclause)
{
/* Success, so record it */
clauseset->indexclauses[indexcol] =
lappend(clauseset->indexclauses[indexcol], iclause);
clauseset->nonempty = true;
return;
}
}
}
/*
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index df4ca129191..b15ecc83971 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2383,45 +2383,45 @@ finalize_plan(PlannerInfo *root, Plan *plan,
/* We must run finalize_plan on the subquery */
rel = find_base_rel(root, sscan->scan.scanrelid);
subquery_params = rel->subroot->outer_params;
if (gather_param >= 0)
subquery_params = bms_add_member(bms_copy(subquery_params),
gather_param);
finalize_plan(rel->subroot, sscan->subplan, gather_param,
subquery_params, NULL);
/* Now we can add its extParams to the parent's params */
context.paramids = bms_add_members(context.paramids,
sscan->subplan->extParam);
/* We need scan_params too, though */
context.paramids = bms_add_members(context.paramids,
scan_params);
}
break;
case T_FunctionScan:
{
FunctionScan *fscan = (FunctionScan *) plan;
ListCell *lc;
/*
* Call finalize_primnode independently on each function
* expression, so that we can record which params are
* referenced in each, in order to decide which need
* re-evaluating during rescan.
*/
foreach(lc, fscan->functions)
{
RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
finalize_primnode_context funccontext;
funccontext = context;
funccontext.paramids = NULL;
finalize_primnode(rtfunc->funcexpr, &funccontext);
/* remember results for execution */
rtfunc->funcparams = funccontext.paramids;
/* add the function's params to the overall set */
context.paramids = bms_add_members(context.paramids,
@@ -2491,158 +2491,148 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_NamedTuplestoreScan:
context.paramids = bms_add_members(context.paramids, scan_params);
break;
case T_ForeignScan:
{
ForeignScan *fscan = (ForeignScan *) plan;
finalize_primnode((Node *) fscan->fdw_exprs,
&context);
finalize_primnode((Node *) fscan->fdw_recheck_quals,
&context);
/* We assume fdw_scan_tlist cannot contain Params */
context.paramids = bms_add_members(context.paramids,
scan_params);
}
break;
case T_CustomScan:
{
CustomScan *cscan = (CustomScan *) plan;
ListCell *lc;
finalize_primnode((Node *) cscan->custom_exprs,
&context);
/* We assume custom_scan_tlist cannot contain Params */
context.paramids =
bms_add_members(context.paramids, scan_params);
/* child nodes if any */
foreach(lc, cscan->custom_plans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(lc),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_ModifyTable:
{
ModifyTable *mtplan = (ModifyTable *) plan;
/* Force descendant scan nodes to reference epqParam */
locally_added_param = mtplan->epqParam;
valid_params = bms_add_member(bms_copy(valid_params),
locally_added_param);
scan_params = bms_add_member(bms_copy(scan_params),
locally_added_param);
finalize_primnode((Node *) mtplan->returningLists,
&context);
finalize_primnode((Node *) mtplan->onConflictSet,
&context);
finalize_primnode((Node *) mtplan->onConflictWhere,
&context);
/* exclRelTlist contains only Vars, doesn't need examination */
}
break;
case T_Append:
{
- ListCell *l;
-
foreach(l, ((Append *) plan)->appendplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_MergeAppend:
{
- ListCell *l;
-
foreach(l, ((MergeAppend *) plan)->mergeplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_BitmapAnd:
{
- ListCell *l;
-
foreach(l, ((BitmapAnd *) plan)->bitmapplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_BitmapOr:
{
- ListCell *l;
-
foreach(l, ((BitmapOr *) plan)->bitmapplans)
{
context.paramids =
bms_add_members(context.paramids,
finalize_plan(root,
(Plan *) lfirst(l),
gather_param,
valid_params,
scan_params));
}
}
break;
case T_NestLoop:
{
- ListCell *l;
-
finalize_primnode((Node *) ((Join *) plan)->joinqual,
&context);
/* collect set of params that will be passed to right child */
foreach(l, ((NestLoop *) plan)->nestParams)
{
NestLoopParam *nlp = (NestLoopParam *) lfirst(l);
nestloop_params = bms_add_member(nestloop_params,
nlp->paramno);
}
}
break;
case T_MergeJoin:
finalize_primnode((Node *) ((Join *) plan)->joinqual,
&context);
finalize_primnode((Node *) ((MergeJoin *) plan)->mergeclauses,
&context);
break;
case T_HashJoin:
finalize_primnode((Node *) ((Join *) plan)->joinqual,
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 091d6e886b6..2720a2508cb 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -4300,46 +4300,45 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
int i,
j;
PartitionRangeDatum *ldatum,
*udatum;
PartitionKey key = RelationGetPartitionKey(parent);
Expr *keyCol;
Const *lower_val,
*upper_val;
List *lower_or_arms,
*upper_or_arms;
int num_or_arms,
current_or_arm;
ListCell *lower_or_start_datum,
*upper_or_start_datum;
bool need_next_lower_arm,
need_next_upper_arm;
if (spec->is_default)
{
List *or_expr_args = NIL;
PartitionDesc pdesc = RelationGetPartitionDesc(parent, false);
Oid *inhoids = pdesc->oids;
- int nparts = pdesc->nparts,
- i;
+ int nparts = pdesc->nparts;
for (i = 0; i < nparts; i++)
{
Oid inhrelid = inhoids[i];
HeapTuple tuple;
Datum datum;
bool isnull;
PartitionBoundSpec *bspec;
tuple = SearchSysCache1(RELOID, inhrelid);
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for relation %u", inhrelid);
datum = SysCacheGetAttr(RELOID, tuple,
Anum_pg_class_relpartbound,
&isnull);
if (isnull)
elog(ERROR, "null relpartbound for relation %u", inhrelid);
bspec = (PartitionBoundSpec *)
stringToNode(TextDatumGetCString(datum));
if (!IsA(bspec, PartitionBoundSpec))
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 89cf9f9389c..8ac78a6cf38 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2301,45 +2301,44 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
* previous tuple's toast chunks.
*/
Assert(change->data.tp.clear_toast_afterwards);
ReorderBufferToastReset(rb, txn);
/* We don't need this record anymore. */
ReorderBufferReturnChange(rb, specinsert, true);
specinsert = NULL;
}
break;
case REORDER_BUFFER_CHANGE_TRUNCATE:
{
int i;
int nrelids = change->data.truncate.nrelids;
int nrelations = 0;
Relation *relations;
relations = palloc0(nrelids * sizeof(Relation));
for (i = 0; i < nrelids; i++)
{
Oid relid = change->data.truncate.relids[i];
- Relation relation;
relation = RelationIdGetRelation(relid);
if (!RelationIsValid(relation))
elog(ERROR, "could not open relation with OID %u", relid);
if (!RelationIsLogicallyLogged(relation))
continue;
relations[nrelations++] = relation;
}
/* Apply the truncate. */
ReorderBufferApplyTruncate(rb, txn, nrelations,
relations, change,
streaming);
for (i = 0; i < nrelations; i++)
RelationClose(relations[i]);
break;
}
diff --git a/src/backend/rewrite/rowsecurity.c b/src/backend/rewrite/rowsecurity.c
index a233dd47585..b2a72374306 100644
--- a/src/backend/rewrite/rowsecurity.c
+++ b/src/backend/rewrite/rowsecurity.c
@@ -805,45 +805,44 @@ add_with_check_options(Relation rel,
wco->polname = NULL;
wco->cascaded = false;
if (list_length(permissive_quals) == 1)
wco->qual = (Node *) linitial(permissive_quals);
else
wco->qual = (Node *) makeBoolExpr(OR_EXPR, permissive_quals, -1);
ChangeVarNodes(wco->qual, 1, rt_index, 0);
*withCheckOptions = list_append_unique(*withCheckOptions, wco);
/*
* Now add WithCheckOptions for each of the restrictive policy clauses
* (which will be combined together using AND). We use a separate
* WithCheckOption for each restrictive policy to allow the policy
* name to be included in error reports if the policy is violated.
*/
foreach(item, restrictive_policies)
{
RowSecurityPolicy *policy = (RowSecurityPolicy *) lfirst(item);
Expr *qual = QUAL_FOR_WCO(policy);
- WithCheckOption *wco;
if (qual != NULL)
{
qual = copyObject(qual);
ChangeVarNodes((Node *) qual, 1, rt_index, 0);
wco = makeNode(WithCheckOption);
wco->kind = kind;
wco->relname = pstrdup(RelationGetRelationName(rel));
wco->polname = pstrdup(policy->policy_name);
wco->qual = (Node *) qual;
wco->cascaded = false;
*withCheckOptions = list_append_unique(*withCheckOptions, wco);
*hasSubLinks |= policy->hassublinks;
}
}
}
else
{
/*
* If there were no policy clauses to check new data, add a single
diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c
index 1190b8000bc..71a6053b6a0 100644
--- a/src/backend/utils/adt/rangetypes_spgist.c
+++ b/src/backend/utils/adt/rangetypes_spgist.c
@@ -674,73 +674,71 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS)
if (minLower)
{
/*
* If the centroid's lower bound is less than or equal to the
* minimum lower bound, anything in the 3rd and 4th quadrants
* will have an even smaller lower bound, and thus can't
* match.
*/
if (range_cmp_bounds(typcache, &centroidLower, minLower) <= 0)
which &= (1 << 1) | (1 << 2) | (1 << 5);
}
if (maxLower)
{
/*
* If the centroid's lower bound is greater than the maximum
* lower bound, anything in the 1st and 2nd quadrants will
* also have a greater than or equal lower bound, and thus
* can't match. If the centroid's lower bound is equal to the
* maximum lower bound, we can still exclude the 1st and 2nd
* quadrants if we're looking for a value strictly greater
* than the maximum.
*/
- int cmp;
cmp = range_cmp_bounds(typcache, &centroidLower, maxLower);
if (cmp > 0 || (!inclusive && cmp == 0))
which &= (1 << 3) | (1 << 4) | (1 << 5);
}
if (minUpper)
{
/*
* If the centroid's upper bound is less than or equal to the
* minimum upper bound, anything in the 2nd and 3rd quadrants
* will have an even smaller upper bound, and thus can't
* match.
*/
if (range_cmp_bounds(typcache, &centroidUpper, minUpper) <= 0)
which &= (1 << 1) | (1 << 4) | (1 << 5);
}
if (maxUpper)
{
/*
* If the centroid's upper bound is greater than the maximum
* upper bound, anything in the 1st and 4th quadrants will
* also have a greater than or equal upper bound, and thus
* can't match. If the centroid's upper bound is equal to the
* maximum upper bound, we can still exclude the 1st and 4th
* quadrants if we're looking for a value strictly greater
* than the maximum.
*/
- int cmp;
cmp = range_cmp_bounds(typcache, &centroidUpper, maxUpper);
if (cmp > 0 || (!inclusive && cmp == 0))
which &= (1 << 2) | (1 << 3) | (1 << 5);
}
if (which == 0)
break; /* no need to consider remaining conditions */
}
}
/* We must descend into the quadrant(s) identified by 'which' */
out->nodeNumbers = (int *) palloc(sizeof(int) * in->nNodes);
if (needPrevious)
out->traversalValues = (void **) palloc(sizeof(void *) * in->nNodes);
out->nNodes = 0;
/*
* Elements of traversalValues should be allocated in
* traversalMemoryContext
*/
oldCtx = MemoryContextSwitchTo(in->traversalMemoryContext);
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8280711f7ef..9959f6910e9 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -1284,45 +1284,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
idxrelrec = (Form_pg_class) GETSTRUCT(ht_idxrel);
/*
* Fetch the pg_am tuple of the index' access method
*/
ht_am = SearchSysCache1(AMOID, ObjectIdGetDatum(idxrelrec->relam));
if (!HeapTupleIsValid(ht_am))
elog(ERROR, "cache lookup failed for access method %u",
idxrelrec->relam);
amrec = (Form_pg_am) GETSTRUCT(ht_am);
/* Fetch the index AM's API struct */
amroutine = GetIndexAmRoutine(amrec->amhandler);
/*
* Get the index expressions, if any. (NOTE: we do not use the relcache
* versions of the expressions and predicate, because we want to display
* non-const-folded expressions.)
*/
if (!heap_attisnull(ht_idx, Anum_pg_index_indexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indexprs, &isnull);
Assert(!isnull);
exprsString = TextDatumGetCString(exprsDatum);
indexprs = (List *) stringToNode(exprsString);
pfree(exprsString);
}
else
indexprs = NIL;
indexpr_item = list_head(indexprs);
context = deparse_context_for(get_relation_name(indrelid), indrelid);
/*
* Start the index definition. Note that the index's name should never be
* schema-qualified, but the indexed rel's name may be.
*/
initStringInfo(&buf);
@@ -1481,45 +1480,44 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
*/
if (showTblSpc)
{
Oid tblspc;
tblspc = get_rel_tablespace(indexrelid);
if (OidIsValid(tblspc))
{
if (isConstraint)
appendStringInfoString(&buf, " USING INDEX");
appendStringInfo(&buf, " TABLESPACE %s",
quote_identifier(get_tablespace_name(tblspc)));
}
}
/*
* If it's a partial index, decompile and append the predicate
*/
if (!heap_attisnull(ht_idx, Anum_pg_index_indpred, NULL))
{
Node *node;
Datum predDatum;
- bool isnull;
char *predString;
/* Convert text string to node tree */
predDatum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indpred, &isnull);
Assert(!isnull);
predString = TextDatumGetCString(predDatum);
node = (Node *) stringToNode(predString);
pfree(predString);
/* Deparse */
str = deparse_expression_pretty(node, context, false, false,
prettyFlags, 0);
if (isConstraint)
appendStringInfo(&buf, " WHERE (%s)", str);
else
appendStringInfo(&buf, " WHERE %s", str);
}
}
/* Clean up */
ReleaseSysCache(ht_idx);
@@ -1926,45 +1924,44 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags,
Assert(form->partrelid == relid);
/* Must get partclass and partcollation the hard way */
datum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partclass, &isnull);
Assert(!isnull);
partclass = (oidvector *) DatumGetPointer(datum);
datum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partcollation, &isnull);
Assert(!isnull);
partcollation = (oidvector *) DatumGetPointer(datum);
/*
* Get the expressions, if any. (NOTE: we do not use the relcache
* versions of the expressions, because we want to display
* non-const-folded expressions.)
*/
if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
{
Datum exprsDatum;
- bool isnull;
char *exprsString;
exprsDatum = SysCacheGetAttr(PARTRELID, tuple,
Anum_pg_partitioned_table_partexprs, &isnull);
Assert(!isnull);
exprsString = TextDatumGetCString(exprsDatum);
partexprs = (List *) stringToNode(exprsString);
if (!IsA(partexprs, List))
elog(ERROR, "unexpected node type found in partexprs: %d",
(int) nodeTag(partexprs));
pfree(exprsString);
}
else
partexprs = NIL;
partexpr_item = list_head(partexprs);
context = deparse_context_for(get_relation_name(relid), relid);
initStringInfo(&buf);
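For anyone unfamiliar with the warning, here is a minimal sketch of the
pattern the patches above remove (the names are invented for illustration;
this is not code from the patches). With a compatible inner type, gcc's
-Wshadow=compatible-local flags the inner declaration, since it's easy to
end up updating the wrong variable:
#include <string.h>
/*
 * Illustrative only -- invented names, not code from the patches.
 * gcc -Wshadow=compatible-local warns on the inner "len":
 *   declaration of 'len' shadows a previous local
 */
static size_t
total_length(const char **items, int nitems)
{
	size_t		len = 0;
	for (int i = 0; i < nitems; i++)
	{
		size_t		len = strlen(items[i]); /* shadows the outer "len" */
		(void) len;				/* work is done on the wrong variable... */
	}
	return len;					/* ...so this is always 0 */
}
/* One fix: drop the inner declaration and reuse the outer variable. */
static size_t
total_length_fixed(const char **items, int nitems)
{
	size_t		len = 0;
	for (int i = 0; i < nitems; i++)
		len += strlen(items[i]);
	return len;
}
The patches above take one of those two shapes: delete the inner
declaration and reuse the outer variable where that was the intent, or
rename one of the two.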
On Thu, 25 Aug 2022 at 14:08, Justin Pryzby <pryzby@telsasoft.com> wrote:
Here, I've included the rest of your list.
OK, I've gone through v3-remove-var-declarations.txt, v4-reuse.txt, and
v4-reuse-more.txt and committed most of what you had, removing a few
that I thought should be renames instead.
I also added some additional ones after reprocessing the RenameOrScope
category from the spreadsheet.
With some minor adjustments to a small number of your patches, I pushed
what I came up with.
David
Attachments:
shadow_analysis.odsapplication/vnd.oasis.opendocument.spreadsheet; name=shadow_analysis.odsDownload
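On the remove-vs-rename distinction above, a sketch (again an invented
example, not from the committed patches) of a case where deleting the
inner declaration would be wrong because the inner variable is genuinely
a different thing, so a rename is the correct remedy:
#include <stdio.h>
/*
 * Invented example, not from the committed patches: the outer "text" is
 * still needed after the loop, so reusing it for each row would clobber
 * it; renaming the inner variable is the right fix here.
 */
static void
emit_rows(const char *header, const char **rows, int nrows)
{
	const char *text = header;
	printf("%s\n", text);
	for (int i = 0; i < nrows; i++)
	{
		const char *row_text = rows[i]; /* was "text"; renamed, not removed */
		printf("  %s\n", row_text);
	}
	printf("(end of %s)\n", text);	/* outer "text" must survive the loop */
}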