From 6a80a46886d3659a3ccd5783fdf16b2ae750d73b Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 23 Jul 2022 15:10:01 -0500
Subject: [PATCH 3/9] fix whitespace

Indentation in these comment blocks has been inconsistent (a mix of tabs
and spaces) since c91560defc57f89f7e88632ea14ae77b5cec78ee.

See also:
https://www.postgresql.org/message-id/CAFBsxsHtYvE1Txm+NYM5gB1C3EZN_qDPJOOsJK_VsC2DXqZfcA@mail.gmail.com
---
 src/backend/utils/cache/inval.c | 188 ++++++++++++++++----------------
 src/backend/utils/mmgr/mcxt.c   |   4 +-
 2 files changed, 96 insertions(+), 96 deletions(-)

diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 0008826f67c..cc7de542b1e 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -3,100 +3,100 @@
  * inval.c
  *	  POSTGRES cache invalidation dispatcher code.
  *
- *	This is subtle stuff, so pay attention:
- *
- *	When a tuple is updated or deleted, our standard visibility rules
- *	consider that it is *still valid* so long as we are in the same command,
- *	ie, until the next CommandCounterIncrement() or transaction commit.
- *	(See access/heap/heapam_visibility.c, and note that system catalogs are
- *  generally scanned under the most current snapshot available, rather than
- *  the transaction snapshot.)	At the command boundary, the old tuple stops
- *	being valid and the new version, if any, becomes valid.  Therefore,
- *	we cannot simply flush a tuple from the system caches during heap_update()
- *	or heap_delete().  The tuple is still good at that point; what's more,
- *	even if we did flush it, it might be reloaded into the caches by a later
- *	request in the same command.  So the correct behavior is to keep a list
- *	of outdated (updated/deleted) tuples and then do the required cache
- *	flushes at the next command boundary.  We must also keep track of
- *	inserted tuples so that we can flush "negative" cache entries that match
- *	the new tuples; again, that mustn't happen until end of command.
- *
- *	Once we have finished the command, we still need to remember inserted
- *	tuples (including new versions of updated tuples), so that we can flush
- *	them from the caches if we abort the transaction.  Similarly, we'd better
- *	be able to flush "negative" cache entries that may have been loaded in
- *	place of deleted tuples, so we still need the deleted ones too.
- *
- *	If we successfully complete the transaction, we have to broadcast all
- *	these invalidation events to other backends (via the SI message queue)
- *	so that they can flush obsolete entries from their caches.  Note we have
- *	to record the transaction commit before sending SI messages, otherwise
- *	the other backends won't see our updated tuples as good.
- *
- *	When a subtransaction aborts, we can process and discard any events
- *	it has queued.  When a subtransaction commits, we just add its events
- *	to the pending lists of the parent transaction.
- *
- *	In short, we need to remember until xact end every insert or delete
- *	of a tuple that might be in the system caches.  Updates are treated as
- *	two events, delete + insert, for simplicity.  (If the update doesn't
- *	change the tuple hash value, catcache.c optimizes this into one event.)
- *
- *	We do not need to register EVERY tuple operation in this way, just those
- *	on tuples in relations that have associated catcaches.  We do, however,
- *	have to register every operation on every tuple that *could* be in a
- *	catcache, whether or not it currently is in our cache.  Also, if the
- *	tuple is in a relation that has multiple catcaches, we need to register
- *	an invalidation message for each such catcache.  catcache.c's
- *	PrepareToInvalidateCacheTuple() routine provides the knowledge of which
- *	catcaches may need invalidation for a given tuple.
- *
- *	Also, whenever we see an operation on a pg_class, pg_attribute, or
- *	pg_index tuple, we register a relcache flush operation for the relation
- *	described by that tuple (as specified in CacheInvalidateHeapTuple()).
- *	Likewise for pg_constraint tuples for foreign keys on relations.
- *
- *	We keep the relcache flush requests in lists separate from the catcache
- *	tuple flush requests.  This allows us to issue all the pending catcache
- *	flushes before we issue relcache flushes, which saves us from loading
- *	a catcache tuple during relcache load only to flush it again right away.
- *	Also, we avoid queuing multiple relcache flush requests for the same
- *	relation, since a relcache flush is relatively expensive to do.
- *	(XXX is it worth testing likewise for duplicate catcache flush entries?
- *	Probably not.)
- *
- *	Many subsystems own higher-level caches that depend on relcache and/or
- *	catcache, and they register callbacks here to invalidate their caches.
- *	While building a higher-level cache entry, a backend may receive a
- *	callback for the being-built entry or one of its dependencies.  This
- *	implies the new higher-level entry would be born stale, and it might
- *	remain stale for the life of the backend.  Many caches do not prevent
- *	that.  They rely on DDL for can't-miss catalog changes taking
- *	AccessExclusiveLock on suitable objects.  (For a change made with less
- *	locking, backends might never read the change.)  The relation cache,
- *	however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
- *	than the beginning of the next transaction.  Hence, when a relevant
- *	invalidation callback arrives during a build, relcache.c reattempts that
- *	build.  Caches with similar needs could do likewise.
- *
- *	If a relcache flush is issued for a system relation that we preload
- *	from the relcache init file, we must also delete the init file so that
- *	it will be rebuilt during the next backend restart.  The actual work of
- *	manipulating the init file is in relcache.c, but we keep track of the
- *	need for it here.
- *
- *	Currently, inval messages are sent without regard for the possibility
- *	that the object described by the catalog tuple might be a session-local
- *	object such as a temporary table.  This is because (1) this code has
- *	no practical way to tell the difference, and (2) it is not certain that
- *	other backends don't have catalog cache or even relcache entries for
- *	such tables, anyway; there is nothing that prevents that.  It might be
- *	worth trying to avoid sending such inval traffic in the future, if those
- *	problems can be overcome cheaply.
- *
- *	When wal_level=logical, write invalidations into WAL at each command end to
- *	support the decoding of the in-progress transactions.  See
- *	CommandEndInvalidationMessages.
+ * This is subtle stuff, so pay attention:
+ *
+ * When a tuple is updated or deleted, our standard visibility rules
+ * consider that it is *still valid* so long as we are in the same command,
+ * ie, until the next CommandCounterIncrement() or transaction commit.
+ * (See access/heap/heapam_visibility.c, and note that system catalogs are
+ * generally scanned under the most current snapshot available, rather than
+ * the transaction snapshot.)  At the command boundary, the old tuple stops
+ * being valid and the new version, if any, becomes valid.  Therefore,
+ * we cannot simply flush a tuple from the system caches during heap_update()
+ * or heap_delete().  The tuple is still good at that point; what's more,
+ * even if we did flush it, it might be reloaded into the caches by a later
+ * request in the same command.  So the correct behavior is to keep a list
+ * of outdated (updated/deleted) tuples and then do the required cache
+ * flushes at the next command boundary.  We must also keep track of
+ * inserted tuples so that we can flush "negative" cache entries that match
+ * the new tuples; again, that mustn't happen until end of command.
+ *
+ * Once we have finished the command, we still need to remember inserted
+ * tuples (including new versions of updated tuples), so that we can flush
+ * them from the caches if we abort the transaction.  Similarly, we'd better
+ * be able to flush "negative" cache entries that may have been loaded in
+ * place of deleted tuples, so we still need the deleted ones too.
+ *
+ * If we successfully complete the transaction, we have to broadcast all
+ * these invalidation events to other backends (via the SI message queue)
+ * so that they can flush obsolete entries from their caches.  Note we have
+ * to record the transaction commit before sending SI messages, otherwise
+ * the other backends won't see our updated tuples as good.
+ *
+ * When a subtransaction aborts, we can process and discard any events
+ * it has queued.  When a subtransaction commits, we just add its events
+ * to the pending lists of the parent transaction.
+ *
+ * In short, we need to remember until xact end every insert or delete
+ * of a tuple that might be in the system caches.  Updates are treated as
+ * two events, delete + insert, for simplicity.  (If the update doesn't
+ * change the tuple hash value, catcache.c optimizes this into one event.)
+ *
+ * We do not need to register EVERY tuple operation in this way, just those
+ * on tuples in relations that have associated catcaches.  We do, however,
+ * have to register every operation on every tuple that *could* be in a
+ * catcache, whether or not it currently is in our cache.  Also, if the
+ * tuple is in a relation that has multiple catcaches, we need to register
+ * an invalidation message for each such catcache.  catcache.c's
+ * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
+ * catcaches may need invalidation for a given tuple.
+ *
+ * Also, whenever we see an operation on a pg_class, pg_attribute, or
+ * pg_index tuple, we register a relcache flush operation for the relation
+ * described by that tuple (as specified in CacheInvalidateHeapTuple()).
+ * Likewise for pg_constraint tuples for foreign keys on relations.
+ *
+ * We keep the relcache flush requests in lists separate from the catcache
+ * tuple flush requests.  This allows us to issue all the pending catcache
+ * flushes before we issue relcache flushes, which saves us from loading
+ * a catcache tuple during relcache load only to flush it again right away.
+ * Also, we avoid queuing multiple relcache flush requests for the same
+ * relation, since a relcache flush is relatively expensive to do.
+ * (XXX is it worth testing likewise for duplicate catcache flush entries?
+ * Probably not.)
+ *
+ * Many subsystems own higher-level caches that depend on relcache and/or
+ * catcache, and they register callbacks here to invalidate their caches.
+ * While building a higher-level cache entry, a backend may receive a
+ * callback for the being-built entry or one of its dependencies.  This
+ * implies the new higher-level entry would be born stale, and it might
+ * remain stale for the life of the backend.  Many caches do not prevent
+ * that.  They rely on DDL for can't-miss catalog changes taking
+ * AccessExclusiveLock on suitable objects.  (For a change made with less
+ * locking, backends might never read the change.)  The relation cache,
+ * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
+ * than the beginning of the next transaction.  Hence, when a relevant
+ * invalidation callback arrives during a build, relcache.c reattempts that
+ * build.  Caches with similar needs could do likewise.
+ *
+ * If a relcache flush is issued for a system relation that we preload
+ * from the relcache init file, we must also delete the init file so that
+ * it will be rebuilt during the next backend restart.  The actual work of
+ * manipulating the init file is in relcache.c, but we keep track of the
+ * need for it here.
+ *
+ * Currently, inval messages are sent without regard for the possibility
+ * that the object described by the catalog tuple might be a session-local
+ * object such as a temporary table.  This is because (1) this code has
+ * no practical way to tell the difference, and (2) it is not certain that
+ * other backends don't have catalog cache or even relcache entries for
+ * such tables, anyway; there is nothing that prevents that.  It might be
+ * worth trying to avoid sending such inval traffic in the future, if those
+ * problems can be overcome cheaply.
+ *
+ * When wal_level=logical, write invalidations into WAL at each command end to
+ * support the decoding of the in-progress transactions.  See
+ * CommandEndInvalidationMessages.
  *
  * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index 8b6591abfb2..392602a020f 100644
--- a/src/backend/utils/mmgr/mcxt.c
+++ b/src/backend/utils/mmgr/mcxt.c
@@ -929,7 +929,7 @@ MemoryContextCheck(MemoryContext context)
  * context creation routines, not by the unwashed masses.
  *
  * The memory context creation procedure goes like this:
- *	1.  Context-type-specific routine makes some initial space allocation,
+ *	1.	Context-type-specific routine makes some initial space allocation,
  *		including enough space for the context header.  If it fails,
  *		it can ereport() with no damage done.
  *	2.	Context-type-specific routine sets up all type-specific fields of
@@ -939,7 +939,7 @@ MemoryContextCheck(MemoryContext context)
  *		the initial space allocation should be freed before ereport'ing.
  *	3.	Context-type-specific routine calls MemoryContextCreate() to fill in
  *		the generic header fields and link the context into the context tree.
- *	4.  We return to the context-type-specific routine, which finishes
+ *	4.	We return to the context-type-specific routine, which finishes
  *		up type-specific initialization.  This routine can now do things
  *		that might fail (like allocate more memory), so long as it's
  *		sure the node is left in a state that delete will handle.
-- 
2.25.1
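Not part of the patch, for review context only: the "register callbacks
here" paragraph in the reindented inval.c comment refers to the
registration interface declared in utils/inval.h.  A minimal sketch of a
subsystem hooking a (hypothetical) higher-level cache into that machinery
could look roughly like this; the my_* names are made up, while
CacheRegisterSyscacheCallback(), CacheRegisterRelcacheCallback() and the
PROCOID syscache id are the real identifiers:

    #include "postgres.h"

    #include "utils/inval.h"
    #include "utils/syscache.h"

    /* hypothetical higher-level cache owned by some subsystem */
    static bool my_cache_valid = false;

    /*
     * Syscache callback: hashvalue identifies the invalidated entry, or is
     * zero when everything must be flushed; this sketch simply drops the
     * whole cache either way.
     */
    static void
    my_syscache_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        my_cache_valid = false;
    }

    /*
     * Relcache callback: relid is the relation whose relcache entry was
     * invalidated, or InvalidOid when all entries were.
     */
    static void
    my_relcache_callback(Datum arg, Oid relid)
    {
        my_cache_valid = false;
    }

    /* Run once per backend, e.g. on first use of the subsystem. */
    static void
    my_cache_register_callbacks(void)
    {
        CacheRegisterSyscacheCallback(PROCOID, my_syscache_callback, (Datum) 0);
        CacheRegisterRelcacheCallback(my_relcache_callback, (Datum) 0);
    }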

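Likewise not part of the patch: the numbered steps in the mcxt.c comment
describe what a context-type-specific creation routine does internally.
From a caller's point of view that whole procedure is hidden behind a
single call; a rough caller-side sketch (the context name and function
body are made up, AllocSetContextCreate(), MemoryContextSwitchTo() and
MemoryContextDelete() are the real entry points from utils/memutils.h):

    #include "postgres.h"

    #include "utils/memutils.h"

    static void
    work_in_private_context(void)
    {
        /* steps 1-4 of the creation procedure happen inside this call */
        MemoryContext mycxt = AllocSetContextCreate(CurrentMemoryContext,
                                                    "hypothetical work cxt",
                                                    ALLOCSET_DEFAULT_SIZES);
        MemoryContext oldcxt = MemoryContextSwitchTo(mycxt);

        /* ... palloc() freely here; everything lands in mycxt ... */

        MemoryContextSwitchTo(oldcxt);
        MemoryContextDelete(mycxt);   /* frees all of the above at once */
    }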
