Analysis of backend-private memory usage (and a patch)
I received a complaint that each backend consumes a lot of
backend-private memory, even if it's completely idle. "a lot" is of
course very subjective and how much memory is actually used depends
heavily on the application. In this case, the database is fairly small,
but they have 250 connections. 'top' output says that each backend is
consuming roughly 3MB of memory (RES - SHR). That's 750 MB of
backend-private memory, which is a significant chunk of total RAM.
So I spent some time analyzing backend memory usage, looking for any
low-hanging fruit. This isn't *that* big an issue, so I don't think we'd
want to do any big rearchitecting for this.
On my laptop, just starting psql, the backend uses 1632 KB of private
memory. Running a simple query like "select * from foo where i = 1"
makes no noticeable difference, but after "\d" (which I'm using to
represent a somewhat more complicated query), it goes up to 1960 KB.
The largest consumers of that memory are the relcache and syscaches. After
starting psql, without running any queries, MemoryContextStats says:
CacheMemoryContext: 817840 total in 20 blocks; 134824 free (4 chunks);
683016 used
plus there is one sub-memorycontext for each index in the relcache, each
using about 1KB. After "\d":
CacheMemoryContext: 1342128 total in 21 blocks; 517472 free (1 chunks);
824656 used
Another thing that can consume a lot of memory is the PrivateRefCount
lookup table. It's an array with one int32 for each shared buffer, i.e.
512 KB for each GB of shared_buffers. See previous discussion here:
/messages/by-id/1164624036.3778.107.camel@silverbirch.site.
That discussion didn't lead to anything, but I think there's some
potential in turning PrivateRefCount into a tiny hash table or simply a
linear array. Or even simpler, change it from int32 to int16, and accept
that you will get an error if you try to hold more than 2^16 pins on a
buffer in one backend.
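To put numbers on the int16 idea: with 8 KB pages, 1 GB of shared_buffers
is 131072 buffers, so the int32 array costs 512 KB per backend and a
16-bit version would cost 256 KB. A minimal sketch of the overflow check
it would need (the names here are made up for illustration, not actual
bufmgr.c code):

/*
 * Hypothetical sketch of a 16-bit PrivateRefCount. Not actual bufmgr.c
 * code; it just illustrates the overflow check. uint16 is the usual
 * typedef from c.h.
 */
static uint16 *PrivateRefCount;		/* one counter per shared buffer */

static void
IncrPrivateRefCount(int buf_id)
{
	if (PrivateRefCount[buf_id] == 0xFFFF)
		elog(ERROR, "too many pins on buffer %d in this backend", buf_id);
	PrivateRefCount[buf_id]++;
}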
One fairly simple thing we could do is to teach catcache.c to resize the
caches. Then we could make the initial size of all the syscaches much
smaller. At the moment, we use fairly large caches for catalogs like
pg_enum (256 entries) and pg_user_mapping (128), even though most databases don't
use those features at all. If they could be resized on demand, we could
easily allocate them initially with just, say, 4 entries.
Attached is a patch for that. It saves about 300 KB for a backend
that does nothing. Resizing the caches on demand also has the benefit
that if you have a lot more objects of some type than usual, lookups
won't be bogged down by a too small cache. I haven't tried to measure
that, though.
- Heikki
Attachments:
resize-syscaches-1.patch (text/x-diff)
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index cca0572..36fbc67 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -734,9 +734,8 @@ InitCatCache(int id,
int i;
/*
- * nbuckets is the number of hash buckets to use in this catcache.
- * Currently we just use a hard-wired estimate of an appropriate size for
- * each cache; maybe later make them dynamically resizable?
+ * nbuckets is the initial number of hash buckets to use in this catcache.
+ * It will be enlarged later if it becomes too full.
*
* nbuckets must be a power of two. We check this via Assert rather than
* a full runtime check because the values will be coming from constant
@@ -775,7 +774,8 @@ InitCatCache(int id,
*
* Note: we rely on zeroing to initialize all the dlist headers correctly
*/
- cp = (CatCache *) palloc0(sizeof(CatCache) + nbuckets * sizeof(dlist_head));
+ cp = (CatCache *) palloc0(sizeof(CatCache));
+ cp->cc_bucket = palloc0(nbuckets * sizeof(dlist_head));
/*
* initialize the cache's relation information for the relation
@@ -814,6 +814,44 @@ InitCatCache(int id,
}
/*
+ * Enlarge a catcache, doubling the number of buckets.
+ */
+static void
+RehashCatCache(CatCache *cp)
+{
+ dlist_head *newbucket;
+ int newnbuckets;
+ int i;
+
+ elog(DEBUG1, "rehashing cache with id %d for %s; %d tups, %d buckets",
+ cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);
+
+ /* Allocate a new, larger, hash table. */
+ newnbuckets = cp->cc_nbuckets * 2;
+ newbucket = (dlist_head *) MemoryContextAllocZero(CacheMemoryContext, newnbuckets * sizeof(dlist_head));
+
+ /* Move all entries from old hash table to new. */
+ for (i = 0; i < cp->cc_nbuckets; i++)
+ {
+ while (!dlist_is_empty(&cp->cc_bucket[i]))
+ {
+ dlist_node * node = dlist_pop_head_node(&cp->cc_bucket[i]);
+ CatCTup *ct = dlist_container(CatCTup, cache_elem, node);
+ int hashIndex;
+
+ hashIndex = HASH_INDEX(ct->hash_value, newnbuckets);
+
+ dlist_push_head(&newbucket[hashIndex], &ct->cache_elem);
+ }
+ }
+
+ /* Switch to the new array */
+ pfree(cp->cc_bucket);
+ cp->cc_nbuckets = newnbuckets;
+ cp->cc_bucket = newbucket;
+}
+
+/*
* CatalogCacheInitializeCache
*
* This function does final initialization of a catcache: obtain the tuple
@@ -1684,6 +1722,13 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp,
cache->cc_ntup++;
CacheHdr->ch_ntup++;
+ /*
+ * If the cache has grown too large, enlarge the buckets array. Quite
+ * arbitrarily, we enlarge when fill factor > 2.
+ */
+ if (cache->cc_ntup > cache->cc_nbuckets * 2)
+ RehashCatCache(cache);
+
return ct;
}
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 1ff2f2b..624db44 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -122,7 +122,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 32
+ 2
},
{AccessMethodRelationId, /* AMNAME */
AmNameIndexId,
@@ -155,7 +155,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_amop_amopfamily,
0
},
- 64
+ 32
},
{AccessMethodOperatorRelationId, /* AMOPSTRATEGY */
AccessMethodStrategyIndexId,
@@ -166,7 +166,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_amop_amoprighttype,
Anum_pg_amop_amopstrategy
},
- 64
+ 32
},
{AccessMethodProcedureRelationId, /* AMPROCNUM */
AccessMethodProcedureIndexId,
@@ -177,7 +177,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_amproc_amprocrighttype,
Anum_pg_amproc_amprocnum
},
- 64
+ 32
},
{AttributeRelationId, /* ATTNAME */
AttributeRelidNameIndexId,
@@ -188,7 +188,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 2048
+ 512
},
{AttributeRelationId, /* ATTNUM */
AttributeRelidNumIndexId,
@@ -199,7 +199,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 2048
+ 512
},
{AuthMemRelationId, /* AUTHMEMMEMROLE */
AuthMemMemRoleIndexId,
@@ -210,7 +210,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 16
},
{AuthMemRelationId, /* AUTHMEMROLEMEM */
AuthMemRoleMemIndexId,
@@ -221,7 +221,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 16
},
{AuthIdRelationId, /* AUTHNAME */
AuthIdRolnameIndexId,
@@ -232,7 +232,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 8
},
{AuthIdRelationId, /* AUTHOID */
AuthIdOidIndexId,
@@ -243,7 +243,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 8
},
{
CastRelationId, /* CASTSOURCETARGET */
@@ -255,7 +255,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 256
+ 128
},
{OperatorClassRelationId, /* CLAAMNAMENSP */
OpclassAmNameNspIndexId,
@@ -266,7 +266,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_opclass_opcnamespace,
0
},
- 64
+ 32
},
{OperatorClassRelationId, /* CLAOID */
OpclassOidIndexId,
@@ -277,7 +277,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 64
+ 32
},
{CollationRelationId, /* COLLNAMEENCNSP */
CollationNameEncNspIndexId,
@@ -299,7 +299,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 64
+ 16
},
{ConversionRelationId, /* CONDEFAULT */
ConversionDefaultIndexId,
@@ -310,7 +310,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_conversion_contoencoding,
ObjectIdAttributeNumber,
},
- 128
+ 8
},
{ConversionRelationId, /* CONNAMENSP */
ConversionNameNspIndexId,
@@ -321,7 +321,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 8
},
{ConstraintRelationId, /* CONSTROID */
ConstraintOidIndexId,
@@ -332,7 +332,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 16
},
{ConversionRelationId, /* CONVOID */
ConversionOidIndexId,
@@ -343,7 +343,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 8
},
{DatabaseRelationId, /* DATABASEOID */
DatabaseOidIndexId,
@@ -365,7 +365,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_default_acl_defaclobjtype,
0
},
- 256
+ 4
},
{EnumRelationId, /* ENUMOID */
EnumOidIndexId,
@@ -376,7 +376,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 256
+ 4
},
{EnumRelationId, /* ENUMTYPOIDNAME */
EnumTypIdLabelIndexId,
@@ -387,7 +387,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 256
+ 4
},
{EventTriggerRelationId, /* EVENTTRIGGERNAME */
EventTriggerNameIndexId,
@@ -398,7 +398,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 8
+ 4
},
{EventTriggerRelationId, /* EVENTTRIGGEROID */
EventTriggerOidIndexId,
@@ -409,7 +409,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 8
+ 4
},
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPERNAME */
ForeignDataWrapperNameIndexId,
@@ -420,7 +420,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 8
+ 2
},
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPEROID */
ForeignDataWrapperOidIndexId,
@@ -431,7 +431,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 8
+ 2
},
{ForeignServerRelationId, /* FOREIGNSERVERNAME */
ForeignServerNameIndexId,
@@ -442,7 +442,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 32
+ 4
},
{ForeignServerRelationId, /* FOREIGNSERVEROID */
ForeignServerOidIndexId,
@@ -453,7 +453,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 32
+ 4
},
{ForeignTableRelationId, /* FOREIGNTABLEREL */
ForeignTableRelidIndexId,
@@ -464,7 +464,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 4
},
{IndexRelationId, /* INDEXRELID */
IndexRelidIndexId,
@@ -475,7 +475,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 128
},
{LanguageRelationId, /* LANGNAME */
LanguageNameIndexId,
@@ -508,7 +508,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 256
+ 4
},
{NamespaceRelationId, /* NAMESPACEOID */
NamespaceOidIndexId,
@@ -519,7 +519,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 256
+ 4
},
{OperatorRelationId, /* OPERNAMENSP */
OperatorNameNspIndexId,
@@ -530,7 +530,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_operator_oprright,
Anum_pg_operator_oprnamespace
},
- 1024
+ 512
},
{OperatorRelationId, /* OPEROID */
OperatorOidIndexId,
@@ -541,7 +541,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 512
},
{OperatorFamilyRelationId, /* OPFAMILYAMNAMENSP */
OpfamilyAmNameNspIndexId,
@@ -552,7 +552,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_opfamily_opfnamespace,
0
},
- 64
+ 32
},
{OperatorFamilyRelationId, /* OPFAMILYOID */
OpfamilyOidIndexId,
@@ -563,7 +563,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 64
+ 32
},
{ProcedureRelationId, /* PROCNAMEARGSNSP */
ProcedureNameArgsNspIndexId,
@@ -574,7 +574,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_proc_pronamespace,
0
},
- 2048
+ 512
},
{ProcedureRelationId, /* PROCOID */
ProcedureOidIndexId,
@@ -585,7 +585,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 2048
+ 512
},
{RangeRelationId, /* RANGETYPE */
RangeTypidIndexId,
@@ -596,7 +596,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 64
+ 4
},
{RelationRelationId, /* RELNAMENSP */
ClassNameNspIndexId,
@@ -607,7 +607,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 256
},
{RelationRelationId, /* RELOID */
ClassOidIndexId,
@@ -618,7 +618,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 256
},
{RewriteRelationId, /* RULERELNAME */
RewriteRelRulenameIndexId,
@@ -629,7 +629,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 64
},
{StatisticRelationId, /* STATRELATTINH */
StatisticRelidAttnumInhIndexId,
@@ -640,7 +640,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_statistic_stainherit,
0
},
- 1024
+ 128
},
{TableSpaceRelationId, /* TABLESPACEOID */
TablespaceOidIndexId,
@@ -651,7 +651,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0,
},
- 16
+ 4
},
{TSConfigMapRelationId, /* TSCONFIGMAP */
TSConfigMapIndexId,
@@ -662,7 +662,7 @@ static const struct cachedesc cacheinfo[] = {
Anum_pg_ts_config_map_mapseqno,
0
},
- 4
+ 2
},
{TSConfigRelationId, /* TSCONFIGNAMENSP */
TSConfigNameNspIndexId,
@@ -673,7 +673,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TSConfigRelationId, /* TSCONFIGOID */
TSConfigOidIndexId,
@@ -684,7 +684,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTNAMENSP */
TSDictionaryNameNspIndexId,
@@ -695,7 +695,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTOID */
TSDictionaryOidIndexId,
@@ -706,7 +706,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TSParserRelationId, /* TSPARSERNAMENSP */
TSParserNameNspIndexId,
@@ -717,7 +717,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 4
+ 2
},
{TSParserRelationId, /* TSPARSEROID */
TSParserOidIndexId,
@@ -728,7 +728,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 4
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATENAMENSP */
TSTemplateNameNspIndexId,
@@ -739,7 +739,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATEOID */
TSTemplateOidIndexId,
@@ -750,7 +750,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 16
+ 2
},
{TypeRelationId, /* TYPENAMENSP */
TypeNameNspIndexId,
@@ -761,7 +761,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 512
},
{TypeRelationId, /* TYPEOID */
TypeOidIndexId,
@@ -772,7 +772,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 1024
+ 512
},
{UserMappingRelationId, /* USERMAPPINGOID */
UserMappingOidIndexId,
@@ -783,7 +783,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 2
},
{UserMappingRelationId, /* USERMAPPINGUSERSERVER */
UserMappingUserServerIndexId,
@@ -794,7 +794,7 @@ static const struct cachedesc cacheinfo[] = {
0,
0
},
- 128
+ 2
}
};
diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h
index b6e1c97..524319a 100644
--- a/src/include/utils/catcache.h
+++ b/src/include/utils/catcache.h
@@ -66,8 +66,8 @@ typedef struct catcache
long cc_lsearches; /* total # list-searches */
long cc_lhits; /* # of matches against existing lists */
#endif
- dlist_head cc_bucket[1]; /* hash buckets --- VARIABLE LENGTH ARRAY */
-} CatCache; /* VARIABLE LENGTH STRUCT */
+ dlist_head *cc_bucket; /* hash buckets */
+} CatCache;
typedef struct catctup
Heikki Linnakangas <hlinnakangas@vmware.com> writes:
> One fairly simple thing we could do is to teach catcache.c to resize the
> caches. Then we could make the initial size of all the syscaches much
> smaller.
I think this is attractive for the *other* reason you mention, namely
preserving reasonable performance when a catcache grows larger than
expected; but I'm pretty skeptical of nickel-and-diming caches that are
already really small. Is it really worth cutting the TSPARSER caches
from 4 pointers to 2 for instance?
What concerns me about initially-undersized caches is that we'll waste
space and time in the enlargement process. I'd suggest trying to get some
numbers about the typical size of each cache in a backend that's done a
few things (not merely started up --- we should not be optimizing for the
case of connections that get abandoned without running any queries).
Then set the initial size to the next larger power of 2.
regards, tom lane
On 04.09.2013 23:56, Tom Lane wrote:
> Heikki Linnakangas <hlinnakangas@vmware.com> writes:
>> One fairly simple thing we could do is to teach catcache.c to resize the
>> caches. Then we could make the initial size of all the syscaches much
>> smaller.
>
> I think this is attractive for the *other* reason you mention, namely
> preserving reasonable performance when a catcache grows larger than
> expected; but I'm pretty skeptical of nickel-and-diming caches that are
> already really small. Is it really worth cutting the TSPARSER caches
> from 4 pointers to 2 for instance?
Yeah, that may be overdoing it. Then again, enlarging a hash table from
2 to 4 entries when needed is also very cheap.
> What concerns me about initially-undersized caches is that we'll waste
> space and time in the enlargement process.
Enlarging a small hash table is very cheap because, well, it's small.
Enlarging a large hash table is more expensive, but if you have a lot of
entries in the hash, then you also get the benefit of a larger hash when
doing lookups. It does require some memory to hold the old and the new
hash table while rehashing, but again, with a small hash table that's
not significant, and with a large one the actual cached tuples take a
lot more space than the buckets array anyway.
Per Wikipedia [1]:
If the table size increases or decreases by a fixed percentage at each
expansion, the total cost of these resizings, amortized over all insert
and delete operations, is still a constant, independent of the number of
entries n and of the number m of operations performed.

For example, consider a table that was created with the minimum possible
size and is doubled each time the load ratio exceeds some threshold. If m
elements are inserted into that table, the total number of extra
re-insertions that occur in all dynamic resizings of the table is at most
m − 1. In other words, dynamic resizing roughly doubles the cost of each
insert or delete operation.
Considering the amount of work involved in adding an entry to a catalog
cache, I wouldn't worry about adding a tiny constant to the insertion time.
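To make that concrete for this patch's doubling scheme: if m entries end
up in a cache that started at the minimum size, the rehashes move roughly
m/2 + m/4 + m/8 + ... < m entries in total, i.e. less than one extra
re-insertion per entry on average.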
I did some quick testing by creating 100000 tables, and running a
pgbench script that selects randomly from them:
\setrandom tableno 1 100000
select * from foo:tableno where i = 1;
I timed the rehash operations with gettimeofday calls before and after
the rehash:
LOG: rehashed catalog cache id 44 for pg_class from 256 to 512 buckets: 27 us
LOG: rehashed catalog cache id 47 for pg_statistic from 128 to 256 buckets: 14 us
LOG: rehashed catalog cache id 44 for pg_class from 512 to 1024 buckets: 54 us
LOG: rehashed catalog cache id 45 for pg_class from 256 to 512 buckets: 29 us
LOG: rehashed catalog cache id 47 for pg_statistic from 256 to 512 buckets: 30 us
LOG: rehashed catalog cache id 44 for pg_class from 1024 to 2048 buckets: 147 us
LOG: rehashed catalog cache id 7 for pg_attribute from 512 to 1024 buckets: 87 us
LOG: rehashed catalog cache id 45 for pg_class from 512 to 1024 buckets: 80 us
LOG: rehashed catalog cache id 47 for pg_statistic from 512 to 1024 buckets: 88 us
LOG: rehashed catalog cache id 44 for pg_class from 2048 to 4096 buckets: 342 us
LOG: rehashed catalog cache id 7 for pg_attribute from 1024 to 2048 buckets: 197 us
LOG: rehashed catalog cache id 45 for pg_class from 1024 to 2048 buckets: 183 us
LOG: rehashed catalog cache id 47 for pg_statistic from 1024 to 2048 buckets: 194 us
LOG: rehashed catalog cache id 44 for pg_class from 4096 to 8192 buckets: 764 us
LOG: rehashed catalog cache id 7 for pg_attribute from 2048 to 4096 buckets: 401 us
LOG: rehashed catalog cache id 45 for pg_class from 2048 to 4096 buckets: 383 us
LOG: rehashed catalog cache id 47 for pg_statistic from 2048 to 4096 buckets: 406 us
LOG: rehashed catalog cache id 44 for pg_class from 8192 to 16384 buckets: 1758 us
LOG: rehashed catalog cache id 7 for pg_attribute from 4096 to 8192 buckets: 833 us
LOG: rehashed catalog cache id 45 for pg_class from 4096 to 8192 buckets: 842 us
LOG: rehashed catalog cache id 47 for pg_statistic from 4096 to 8192 buckets: 859 us
LOG: rehashed catalog cache id 44 for pg_class from 16384 to 32768 buckets: 3564 us
LOG: rehashed catalog cache id 7 for pg_attribute from 8192 to 16384 buckets: 1769 us
LOG: rehashed catalog cache id 45 for pg_class from 8192 to 16384 buckets: 1752 us
LOG: rehashed catalog cache id 47 for pg_statistic from 8192 to 16384 buckets: 1719 us
LOG: rehashed catalog cache id 44 for pg_class from 32768 to 65536 buckets: 7538 us
LOG: rehashed catalog cache id 7 for pg_attribute from 16384 to 32768 buckets: 3644 us
LOG: rehashed catalog cache id 45 for pg_class from 16384 to 32768 buckets: 3609 us
LOG: rehashed catalog cache id 47 for pg_statistic from 16384 to 32768 buckets: 3508 us
LOG: rehashed catalog cache id 44 for pg_class from 65536 to 131072 buckets: 16457 us
LOG: rehashed catalog cache id 7 for pg_attribute from 32768 to 65536 buckets: 7978 us
LOG: rehashed catalog cache id 45 for pg_class from 32768 to 65536 buckets: 8281 us
LOG: rehashed catalog cache id 47 for pg_statistic from 32768 to 65536 buckets: 7724 us
The time spent in rehashing seems to be about 60 ns per catcache entry
(the patch rehashes when fillfactor reaches 2, so when rehashing e.g.
from 256 to 512 buckets, there are 1024 entries in the hash), at the
larger hash sizes.
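A minimal sketch of that gettimeofday bracketing (not the exact
instrumentation used for the numbers above):

#include <sys/time.h>

/* Sketch: time one rehash and log it in the format quoted above. */
static void
RehashCatCacheTimed(CatCache *cp)
{
	struct timeval before, after;
	long		usecs;
	int			oldnbuckets = cp->cc_nbuckets;

	gettimeofday(&before, NULL);
	RehashCatCache(cp);
	gettimeofday(&after, NULL);

	usecs = (after.tv_sec - before.tv_sec) * 1000000L +
		(after.tv_usec - before.tv_usec);
	elog(LOG, "rehashed catalog cache id %d for %s from %d to %d buckets: %ld us",
		 cp->id, cp->cc_relname, oldnbuckets, cp->cc_nbuckets, usecs);
}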
> I'd suggest trying to get some
> numbers about the typical size of each cache in a backend that's done a
> few things (not merely started up --- we should not be optimizing for the
> case of connections that get abandoned without running any queries).
> Then set the initial size to the next larger power of 2.
Makes sense.
I ran pgbench for ten seconds, and printed the number of tuples in each
catcache after that:
LOG: cache id 61 on (not known yet): 0 tups
LOG: cache id 60 on (not known yet): 0 tups
LOG: cache id 59 on pg_type: 6 tups
LOG: cache id 58 on (not known yet): 0 tups
LOG: cache id 57 on (not known yet): 0 tups
LOG: cache id 56 on (not known yet): 0 tups
LOG: cache id 55 on (not known yet): 0 tups
LOG: cache id 54 on (not known yet): 0 tups
LOG: cache id 53 on (not known yet): 0 tups
LOG: cache id 52 on (not known yet): 0 tups
LOG: cache id 51 on (not known yet): 0 tups
LOG: cache id 50 on (not known yet): 0 tups
LOG: cache id 49 on (not known yet): 0 tups
LOG: cache id 48 on pg_tablespace: 1 tups
LOG: cache id 47 on pg_statistic: 11 tups
LOG: cache id 46 on (not known yet): 0 tups
LOG: cache id 45 on pg_class: 4 tups
LOG: cache id 44 on pg_class: 8 tups
LOG: cache id 43 on (not known yet): 0 tups
LOG: cache id 42 on pg_proc: 4 tups
LOG: cache id 41 on pg_proc: 1 tups
LOG: cache id 40 on (not known yet): 0 tups
LOG: cache id 39 on (not known yet): 0 tups
LOG: cache id 38 on pg_operator: 2 tups
LOG: cache id 37 on pg_operator: 2 tups
LOG: cache id 36 on (not known yet): 0 tups
LOG: cache id 35 on pg_namespace: 3 tups
LOG: cache id 34 on (not known yet): 0 tups
LOG: cache id 33 on (not known yet): 0 tups
LOG: cache id 32 on pg_index: 5 tups
LOG: cache id 31 on (not known yet): 0 tups
LOG: cache id 30 on (not known yet): 0 tups
LOG: cache id 29 on (not known yet): 0 tups
LOG: cache id 28 on (not known yet): 0 tups
LOG: cache id 27 on (not known yet): 0 tups
LOG: cache id 26 on (not known yet): 0 tups
LOG: cache id 25 on (not known yet): 0 tups
LOG: cache id 24 on (not known yet): 0 tups
LOG: cache id 23 on (not known yet): 0 tups
LOG: cache id 22 on (not known yet): 0 tups
LOG: cache id 21 on pg_database: 1 tups
LOG: cache id 20 on (not known yet): 0 tups
LOG: cache id 19 on (not known yet): 0 tups
LOG: cache id 18 on (not known yet): 0 tups
LOG: cache id 17 on (not known yet): 0 tups
LOG: cache id 16 on (not known yet): 0 tups
LOG: cache id 15 on (not known yet): 0 tups
LOG: cache id 14 on (not known yet): 0 tups
LOG: cache id 13 on (not known yet): 0 tups
LOG: cache id 12 on pg_cast: 1 tups
LOG: cache id 11 on pg_authid: 1 tups
LOG: cache id 10 on pg_authid: 1 tups
LOG: cache id 9 on (not known yet): 0 tups
LOG: cache id 8 on (not known yet): 0 tups
LOG: cache id 7 on pg_attribute: 6 tups
LOG: cache id 6 on (not known yet): 0 tups
LOG: cache id 5 on (not known yet): 0 tups
LOG: cache id 4 on pg_amop: 1 tups
LOG: cache id 3 on pg_amop: 2 tups
LOG: cache id 2 on pg_am: 1 tups
LOG: cache id 1 on (not known yet): 0 tups
LOG: cache id 0 on (not known yet): 0 tups
CacheMemoryContext: 516096 total in 6 blocks; 81192 free (0 chunks);
434904 used
I'm surprised how few rows there are in the caches after that. Actually,
given that, and the above timings, I'm starting to think that we should
just get rid of the hand-tuned initial sizes altogether. Start all
caches small, say with just one entry, and let the automatic rehashing
enlarge them as required. Of course, a more complicated query will touch
many more catalogs, but resizing is cheap enough that it doesn't really
matter.
PS. Once the hashes are resized on demand, perhaps we should get rid of
the move-to-the-head behavior in SearchCatCache. If all the buckets
contain < 2 entries on average, moving the least-recently-used one to
the front hardly makes any difference in lookup times. But it does add a
couple of cycles to every cache hit.
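For reference, a simplified sketch of the scan in question (scan_bucket
is a made-up helper name; the real logic lives inline in SearchCatCache):

/*
 * Simplified sketch of the SearchCatCache bucket scan: on a hit, the
 * entry is moved to the front of its bucket with dlist_move_head(), so
 * frequently-used entries are found early in later scans.
 */
static CatCTup *
scan_bucket(dlist_head *bucket, uint32 hashValue)
{
	dlist_iter	iter;

	dlist_foreach(iter, bucket)
	{
		CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);

		if (ct->hash_value != hashValue)
			continue;			/* wrong hash value, keep scanning */

		/* (the real code also compares the key columns here) */

		/* cache hit: this move-to-front is the behavior in question */
		dlist_move_head(bucket, &ct->cache_elem);
		return ct;
	}
	return NULL;
}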
- Heikki
Heikki Linnakangas <hlinnakangas@vmware.com> writes:
> I ran pgbench for ten seconds, and printed the number of tuples in each
> catcache after that:
> [ very tiny numbers ]
I find these numbers a bit suspicious. For example, we must have hit at
least 13 different system catalogs, and more than that many indexes, in
the course of populating the syscaches you show as initialized. How is
it there are only 4 entries in the RELOID cache? I wonder if there were
cache resets going on.
A larger issue is that pgbench might not be too representative. In
a quick check, I find that cache 37 (OPERNAMENSP) starts out empty,
and contains 1 entry after "select 2=2", which is expected since
the operator-lookup code will start by looking for int4 = int4 and
will get an exact match. But after "select 2=2::numeric" there are
61 entries, as a byproduct of having thumbed through every binary
operator named "=" to resolve the ambiguous match. We went so far
as to install another level of caching in front of OPERNAMENSP because
it was getting too expensive to deal with heavily-overloaded operators
like that one. In general, we've had to spend enough sweat on optimizing
catcache searches to make me highly dubious of any claim that the caches
are usually almost empty.
I understand your argument that resizing is so cheap that it might not
matter, but nonetheless reducing these caches as far as you're suggesting
sounds to me to be penny-wise and pound-foolish. I'm okay with setting
them on the small side rather than on the large side as they are now, but
not with choosing sizes that are guaranteed to result in resizing cycles
during startup of any real app.
> PS. Once the hashes are resized on demand, perhaps we should get rid of
> the move-to-the-head behavior in SearchCatCache. If all the buckets
> contain < 2 entries on average, moving the least-recently-used one to
> the front hardly makes any difference in lookup times.
-1. If the bucket in fact has just one member, dlist_move_head reduces to
just one comparison. And again I argue that you're optimizing for the
wrong case. Pure luck will result in some hash chains being (much) longer
than the average, and if we don't do move-to-front we'll get hurt there.
regards, tom lane
On 05.09.2013 17:22, Tom Lane wrote:
> Heikki Linnakangas <hlinnakangas@vmware.com> writes:
>> I ran pgbench for ten seconds, and printed the number of tuples in each
>> catcache after that:
>> [ very tiny numbers ]
>
> I find these numbers a bit suspicious. For example, we must have hit at
> least 13 different system catalogs, and more than that many indexes, in
> the course of populating the syscaches you show as initialized. How is
> it there are only 4 entries in the RELOID cache? I wonder if there were
> cache resets going on.
Relcache is loaded from the init file. The lookups of those system
catalogs and indexes never hit the syscache, because the entries are
found in relcache. When I delete the init file and launch psql, without
running any queries, I get this (caches with 0 tups left out):
LOG: cache id 45 on pg_class: 7 tups
LOG: cache id 32 on pg_index: 63 tups
LOG: cache id 21 on pg_database: 1 tups
LOG: cache id 11 on pg_authid: 1 tups
LOG: cache id 10 on pg_authid: 1 tups
LOG: cache id 2 on pg_am: 1 tups
> A larger issue is that pgbench might not be too representative. In
> a quick check, I find that cache 37 (OPERNAMENSP) starts out empty,
> and contains 1 entry after "select 2=2", which is expected since
> the operator-lookup code will start by looking for int4 = int4 and
> will get an exact match. But after "select 2=2::numeric" there are
> 61 entries, as a byproduct of having thumbed through every binary
> operator named "=" to resolve the ambiguous match. We went so far
> as to install another level of caching in front of OPERNAMENSP because
> it was getting too expensive to deal with heavily-overloaded operators
> like that one. In general, we've had to spend enough sweat on optimizing
> catcache searches to make me highly dubious of any claim that the caches
> are usually almost empty.
>
> I understand your argument that resizing is so cheap that it might not
> matter, but nonetheless reducing these caches as far as you're suggesting
> sounds to me to be penny-wise and pound-foolish. I'm okay with setting
> them on the small side rather than on the large side as they are now, but
> not with choosing sizes that are guaranteed to result in resizing cycles
> during startup of any real app.
Ok, committed the attached.
To choose the initial sizes, I put a WARNING into the rehash function,
ran the regression suite, and adjusted the sizes so that most regression
tests run without rehashing. With the attached patch, 18 regression
tests cause rehashing (see regression.diffs). The ones that do exercise
some parts of the system more heavily than a typical application would:
the enum regression test, for example, causes rehashes of the pg_enum
catalog cache, the aggregate regression test causes rehashing of
pg_aggregate, and so on. A few regression tests do a
database-wide VACUUM or ANALYZE; those touch all relations, and cause a
rehash of pg_class and pg_index.
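In other words, for the test run the elog(DEBUG1, ...) in RehashCatCache
was bumped to something like:

	elog(WARNING, "rehashing catalog cache id %d for %s; %d tups, %d buckets",
		 cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);

which is where the WARNING lines in regression.diffs below come from.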
- Heikki
Attachments:
resize-syscaches-2.patch (text/x-diff)
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index cca0572..c467f11 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -728,21 +728,20 @@ InitCatCache(int id,
int nkeys,
const int *key,
int nbuckets)
{
CatCache *cp;
MemoryContext oldcxt;
int i;
/*
- * nbuckets is the number of hash buckets to use in this catcache.
- * Currently we just use a hard-wired estimate of an appropriate size for
- * each cache; maybe later make them dynamically resizable?
+ * nbuckets is the initial number of hash buckets to use in this catcache.
+ * It will be enlarged later if it becomes too full.
*
* nbuckets must be a power of two. We check this via Assert rather than
* a full runtime check because the values will be coming from constant
* tables.
*
* If you're confused by the power-of-two check, see comments in
* bitmapset.c for an explanation.
*/
Assert(nbuckets > 0 && (nbuckets & -nbuckets) == nbuckets);
@@ -769,19 +768,20 @@ InitCatCache(int id,
on_proc_exit(CatCachePrintStats, 0);
#endif
}
/*
* allocate a new cache structure
*
* Note: we rely on zeroing to initialize all the dlist headers correctly
*/
- cp = (CatCache *) palloc0(sizeof(CatCache) + nbuckets * sizeof(dlist_head));
+ cp = (CatCache *) palloc0(sizeof(CatCache));
+ cp->cc_bucket = palloc0(nbuckets * sizeof(dlist_head));
/*
* initialize the cache's relation information for the relation
* corresponding to this cache, and initialize some of the new cache's
* other internal fields. But don't open the relation yet.
*/
cp->id = id;
cp->cc_relname = "(not known yet)";
cp->cc_reloid = reloid;
@@ -808,18 +808,55 @@ InitCatCache(int id,
/*
* back to the old context before we return...
*/
MemoryContextSwitchTo(oldcxt);
return cp;
}
/*
+ * Enlarge a catcache, doubling the number of buckets.
+ */
+static void
+RehashCatCache(CatCache *cp)
+{
+ dlist_head *newbucket;
+ int newnbuckets;
+ int i;
+
+ elog(DEBUG1, "rehashing catalog cache id %d for %s; %d tups, %d buckets",
+ cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);
+
+ /* Allocate a new, larger, hash table. */
+ newnbuckets = cp->cc_nbuckets * 2;
+ newbucket = (dlist_head *) MemoryContextAllocZero(CacheMemoryContext, newnbuckets * sizeof(dlist_head));
+
+ /* Move all entries from old hash table to new. */
+ for (i = 0; i < cp->cc_nbuckets; i++)
+ {
+ dlist_mutable_iter iter;
+ dlist_foreach_modify(iter, &cp->cc_bucket[i])
+ {
+ CatCTup *ct = dlist_container(CatCTup, cache_elem, iter.cur);
+ int hashIndex = HASH_INDEX(ct->hash_value, newnbuckets);
+
+ dlist_delete(iter.cur);
+ dlist_push_head(&newbucket[hashIndex], &ct->cache_elem);
+ }
+ }
+
+ /* Switch to the new array. */
+ pfree(cp->cc_bucket);
+ cp->cc_nbuckets = newnbuckets;
+ cp->cc_bucket = newbucket;
+}
+
+/*
* CatalogCacheInitializeCache
*
* This function does final initialization of a catcache: obtain the tuple
* descriptor and set up the hash and equality function links. We assume
* that the relcache entry can be opened at this point!
*/
#ifdef CACHEDEBUG
#define CatalogCacheInitializeCache_DEBUG1 \
elog(DEBUG2, "CatalogCacheInitializeCache: cache @%p rel=%u", cache, \
@@ -1678,18 +1715,25 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp,
ct->dead = false;
ct->negative = negative;
ct->hash_value = hashValue;
dlist_push_head(&cache->cc_bucket[hashIndex], &ct->cache_elem);
cache->cc_ntup++;
CacheHdr->ch_ntup++;
+ /*
+ * If the hash table has become too full, enlarge the buckets array.
+ * Quite arbitrarily, we enlarge when fill factor > 2.
+ */
+ if (cache->cc_ntup > cache->cc_nbuckets * 2)
+ RehashCatCache(cache);
+
return ct;
}
/*
* build_dummy_tuple
* Generate a palloc'd HeapTuple that contains the specified key
* columns, and NULLs for other columns.
*
* This is used to store the keys for negative cache entries and CatCList
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 1ff2f2b..e9bdfea 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -116,19 +116,19 @@ static const struct cachedesc cacheinfo[] = {
{AggregateRelationId, /* AGGFNOID */
AggregateFnoidIndexId,
1,
{
Anum_pg_aggregate_aggfnoid,
0,
0,
0
},
- 32
+ 16
},
{AccessMethodRelationId, /* AMNAME */
AmNameIndexId,
1,
{
Anum_pg_am_amname,
0,
0,
0
@@ -171,85 +171,85 @@ static const struct cachedesc cacheinfo[] = {
{AccessMethodProcedureRelationId, /* AMPROCNUM */
AccessMethodProcedureIndexId,
4,
{
Anum_pg_amproc_amprocfamily,
Anum_pg_amproc_amproclefttype,
Anum_pg_amproc_amprocrighttype,
Anum_pg_amproc_amprocnum
},
- 64
+ 16
},
{AttributeRelationId, /* ATTNAME */
AttributeRelidNameIndexId,
2,
{
Anum_pg_attribute_attrelid,
Anum_pg_attribute_attname,
0,
0
},
- 2048
+ 32
},
{AttributeRelationId, /* ATTNUM */
AttributeRelidNumIndexId,
2,
{
Anum_pg_attribute_attrelid,
Anum_pg_attribute_attnum,
0,
0
},
- 2048
+ 128
},
{AuthMemRelationId, /* AUTHMEMMEMROLE */
AuthMemMemRoleIndexId,
2,
{
Anum_pg_auth_members_member,
Anum_pg_auth_members_roleid,
0,
0
},
- 128
+ 8
},
{AuthMemRelationId, /* AUTHMEMROLEMEM */
AuthMemRoleMemIndexId,
2,
{
Anum_pg_auth_members_roleid,
Anum_pg_auth_members_member,
0,
0
},
- 128
+ 8
},
{AuthIdRelationId, /* AUTHNAME */
AuthIdRolnameIndexId,
1,
{
Anum_pg_authid_rolname,
0,
0,
0
},
- 128
+ 8
},
{AuthIdRelationId, /* AUTHOID */
AuthIdOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 8
},
{
CastRelationId, /* CASTSOURCETARGET */
CastSourceTargetIndexId,
2,
{
Anum_pg_cast_castsource,
Anum_pg_cast_casttarget,
0,
@@ -260,96 +260,96 @@ static const struct cachedesc cacheinfo[] = {
{OperatorClassRelationId, /* CLAAMNAMENSP */
OpclassAmNameNspIndexId,
3,
{
Anum_pg_opclass_opcmethod,
Anum_pg_opclass_opcname,
Anum_pg_opclass_opcnamespace,
0
},
- 64
+ 8
},
{OperatorClassRelationId, /* CLAOID */
OpclassOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{CollationRelationId, /* COLLNAMEENCNSP */
CollationNameEncNspIndexId,
3,
{
Anum_pg_collation_collname,
Anum_pg_collation_collencoding,
Anum_pg_collation_collnamespace,
0
},
- 64
+ 8
},
{CollationRelationId, /* COLLOID */
CollationOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{ConversionRelationId, /* CONDEFAULT */
ConversionDefaultIndexId,
4,
{
Anum_pg_conversion_connamespace,
Anum_pg_conversion_conforencoding,
Anum_pg_conversion_contoencoding,
ObjectIdAttributeNumber,
},
- 128
+ 8
},
{ConversionRelationId, /* CONNAMENSP */
ConversionNameNspIndexId,
2,
{
Anum_pg_conversion_conname,
Anum_pg_conversion_connamespace,
0,
0
},
- 128
+ 8
},
{ConstraintRelationId, /* CONSTROID */
ConstraintOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 16
},
{ConversionRelationId, /* CONVOID */
ConversionOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 8
},
{DatabaseRelationId, /* DATABASEOID */
DatabaseOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
@@ -359,41 +359,41 @@ static const struct cachedesc cacheinfo[] = {
{DefaultAclRelationId, /* DEFACLROLENSPOBJ */
DefaultAclRoleNspObjIndexId,
3,
{
Anum_pg_default_acl_defaclrole,
Anum_pg_default_acl_defaclnamespace,
Anum_pg_default_acl_defaclobjtype,
0
},
- 256
+ 8
},
{EnumRelationId, /* ENUMOID */
EnumOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 256
+ 8
},
{EnumRelationId, /* ENUMTYPOIDNAME */
EnumTypIdLabelIndexId,
2,
{
Anum_pg_enum_enumtypid,
Anum_pg_enum_enumlabel,
0,
0
},
- 256
+ 8
},
{EventTriggerRelationId, /* EVENTTRIGGERNAME */
EventTriggerNameIndexId,
1,
{
Anum_pg_event_trigger_evtname,
0,
0,
0
@@ -414,74 +414,74 @@ static const struct cachedesc cacheinfo[] = {
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPERNAME */
ForeignDataWrapperNameIndexId,
1,
{
Anum_pg_foreign_data_wrapper_fdwname,
0,
0,
0
},
- 8
+ 2
},
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPEROID */
ForeignDataWrapperOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 8
+ 2
},
{ForeignServerRelationId, /* FOREIGNSERVERNAME */
ForeignServerNameIndexId,
1,
{
Anum_pg_foreign_server_srvname,
0,
0,
0
},
- 32
+ 2
},
{ForeignServerRelationId, /* FOREIGNSERVEROID */
ForeignServerOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 32
+ 2
},
{ForeignTableRelationId, /* FOREIGNTABLEREL */
ForeignTableRelidIndexId,
1,
{
Anum_pg_foreign_table_ftrelid,
0,
0,
0
},
- 128
+ 4
},
{IndexRelationId, /* INDEXRELID */
IndexRelidIndexId,
1,
{
Anum_pg_index_indexrelid,
0,
0,
0
},
- 1024
+ 64
},
{LanguageRelationId, /* LANGNAME */
LanguageNameIndexId,
1,
{
Anum_pg_language_lanname,
0,
0,
0
@@ -502,305 +502,305 @@ static const struct cachedesc cacheinfo[] = {
{NamespaceRelationId, /* NAMESPACENAME */
NamespaceNameIndexId,
1,
{
Anum_pg_namespace_nspname,
0,
0,
0
},
- 256
+ 4
},
{NamespaceRelationId, /* NAMESPACEOID */
NamespaceOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 256
+ 16
},
{OperatorRelationId, /* OPERNAMENSP */
OperatorNameNspIndexId,
4,
{
Anum_pg_operator_oprname,
Anum_pg_operator_oprleft,
Anum_pg_operator_oprright,
Anum_pg_operator_oprnamespace
},
- 1024
+ 256
},
{OperatorRelationId, /* OPEROID */
OperatorOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 32
},
{OperatorFamilyRelationId, /* OPFAMILYAMNAMENSP */
OpfamilyAmNameNspIndexId,
3,
{
Anum_pg_opfamily_opfmethod,
Anum_pg_opfamily_opfname,
Anum_pg_opfamily_opfnamespace,
0
},
- 64
+ 8
},
{OperatorFamilyRelationId, /* OPFAMILYOID */
OpfamilyOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{ProcedureRelationId, /* PROCNAMEARGSNSP */
ProcedureNameArgsNspIndexId,
3,
{
Anum_pg_proc_proname,
Anum_pg_proc_proargtypes,
Anum_pg_proc_pronamespace,
0
},
- 2048
+ 128
},
{ProcedureRelationId, /* PROCOID */
ProcedureOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 2048
+ 128
},
{RangeRelationId, /* RANGETYPE */
RangeTypidIndexId,
1,
{
Anum_pg_range_rngtypid,
0,
0,
0
},
- 64
+ 4
},
{RelationRelationId, /* RELNAMENSP */
ClassNameNspIndexId,
2,
{
Anum_pg_class_relname,
Anum_pg_class_relnamespace,
0,
0
},
- 1024
+ 128
},
{RelationRelationId, /* RELOID */
ClassOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 128
},
{RewriteRelationId, /* RULERELNAME */
RewriteRelRulenameIndexId,
2,
{
Anum_pg_rewrite_ev_class,
Anum_pg_rewrite_rulename,
0,
0
},
- 1024
+ 8
},
{StatisticRelationId, /* STATRELATTINH */
StatisticRelidAttnumInhIndexId,
3,
{
Anum_pg_statistic_starelid,
Anum_pg_statistic_staattnum,
Anum_pg_statistic_stainherit,
0
},
- 1024
+ 128
},
{TableSpaceRelationId, /* TABLESPACEOID */
TablespaceOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0,
},
- 16
+ 4
},
{TSConfigMapRelationId, /* TSCONFIGMAP */
TSConfigMapIndexId,
3,
{
Anum_pg_ts_config_map_mapcfg,
Anum_pg_ts_config_map_maptokentype,
Anum_pg_ts_config_map_mapseqno,
0
},
- 4
+ 2
},
{TSConfigRelationId, /* TSCONFIGNAMENSP */
TSConfigNameNspIndexId,
2,
{
Anum_pg_ts_config_cfgname,
Anum_pg_ts_config_cfgnamespace,
0,
0
},
- 16
+ 2
},
{TSConfigRelationId, /* TSCONFIGOID */
TSConfigOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTNAMENSP */
TSDictionaryNameNspIndexId,
2,
{
Anum_pg_ts_dict_dictname,
Anum_pg_ts_dict_dictnamespace,
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTOID */
TSDictionaryOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TSParserRelationId, /* TSPARSERNAMENSP */
TSParserNameNspIndexId,
2,
{
Anum_pg_ts_parser_prsname,
Anum_pg_ts_parser_prsnamespace,
0,
0
},
- 4
+ 2
},
{TSParserRelationId, /* TSPARSEROID */
TSParserOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 4
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATENAMENSP */
TSTemplateNameNspIndexId,
2,
{
Anum_pg_ts_template_tmplname,
Anum_pg_ts_template_tmplnamespace,
0,
0
},
- 16
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATEOID */
TSTemplateOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TypeRelationId, /* TYPENAMENSP */
TypeNameNspIndexId,
2,
{
Anum_pg_type_typname,
Anum_pg_type_typnamespace,
0,
0
},
- 1024
+ 64
},
{TypeRelationId, /* TYPEOID */
TypeOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 64
},
{UserMappingRelationId, /* USERMAPPINGOID */
UserMappingOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 2
},
{UserMappingRelationId, /* USERMAPPINGUSERSERVER */
UserMappingUserServerIndexId,
2,
{
Anum_pg_user_mapping_umuser,
Anum_pg_user_mapping_umserver,
0,
0
},
- 128
+ 2
}
};
static CatCache *SysCache[
lengthof(cacheinfo)];
static int SysCacheSize = lengthof(cacheinfo);
static bool CacheInitialized = false;
static Oid SysCacheRelationOid[lengthof(cacheinfo)];
diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h
index b6e1c97..524319a 100644
--- a/src/include/utils/catcache.h
+++ b/src/include/utils/catcache.h
@@ -60,20 +60,20 @@ typedef struct catcache
/*
* cc_searches - (cc_hits + cc_neg_hits + cc_newloads) is number of failed
* searches, each of which will result in loading a negative entry
*/
long cc_invals; /* # of entries invalidated from cache */
long cc_lsearches; /* total # list-searches */
long cc_lhits; /* # of matches against existing lists */
#endif
- dlist_head cc_bucket[1]; /* hash buckets --- VARIABLE LENGTH ARRAY */
-} CatCache; /* VARIABLE LENGTH STRUCT */
+ dlist_head *cc_bucket; /* hash buckets */
+} CatCache;
typedef struct catctup
{
int ct_magic; /* for identifying CatCTup entries */
#define CT_MAGIC 0x57261502
CatCache *my_cache; /* link to owning catcache */
/*
regression.diffs (text/plain)
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/enum.out 2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/enum.out 2013-09-05 19:07:25.264383746 +0300
***************
*** 126,131 ****
--- 126,132 ----
alter type insenum add value 'i3' before 'L2';
alter type insenum add value 'i4' before 'L2';
alter type insenum add value 'i5' before 'L2';
+ WARNING: rehashing catalog cache id 24 for pg_enum; 17 tups, 8 buckets
alter type insenum add value 'i6' before 'L2';
alter type insenum add value 'i7' before 'L2';
alter type insenum add value 'i8' before 'L2';
***************
*** 142,147 ****
--- 143,149 ----
alter type insenum add value 'i19' before 'L2';
alter type insenum add value 'i20' before 'L2';
alter type insenum add value 'i21' before 'L2';
+ WARNING: rehashing catalog cache id 24 for pg_enum; 33 tups, 16 buckets
alter type insenum add value 'i22' before 'L2';
alter type insenum add value 'i23' before 'L2';
alter type insenum add value 'i24' before 'L2';
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/rangetypes.out 2013-08-22 17:45:02.661496722 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/rangetypes.out 2013-09-05 19:07:26.112383795 +0300
***************
*** 1130,1135 ****
--- 1130,1138 ----
create domain mydomain as int4;
create type mydomainrange as range(subtype=mydomain);
select '[4,50)'::mydomainrange @> 7::mydomain;
+ WARNING: rehashing catalog cache id 43 for pg_range; 9 tups, 4 buckets
+ LINE 1: select '[4,50)'::mydomainrange @> 7::mydomain;
+ ^
?column?
----------
t
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/create_index.out 2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/create_index.out 2013-09-05 19:07:35.372384332 +0300
***************
*** 1937,1942 ****
--- 1937,1943 ----
(1 row)
CREATE INDEX textarrayidx ON array_index_op_test USING gin (t);
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets
explain (costs off)
SELECT * FROM array_index_op_test WHERE t @> '{AAAAAAAA72908}' ORDER BY seqno;
QUERY PLAN
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/updatable_views.out 2013-08-27 18:17:41.242830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/updatable_views.out 2013-09-05 19:07:41.776384704 +0300
***************
*** 853,858 ****
--- 853,859 ----
RESET SESSION AUTHORIZATION;
SET SESSION AUTHORIZATION view_user2;
CREATE VIEW rw_view2 AS SELECT b AS bb, c AS cc, a AS aa FROM base_tbl;
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
SELECT * FROM base_tbl; -- ok
a | b | c
---+-------+---
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/sanity_check.out 2013-08-22 17:45:02.681496721 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/sanity_check.out 2013-09-05 19:07:42.868384767 +0300
***************
*** 1,4 ****
--- 1,6 ----
VACUUM;
+ WARNING: rehashing catalog cache id 32 for pg_index; 129 tups, 64 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
--
-- sanity check, if we don't have indices the test will take years to
-- complete. But skip TOAST relations (since they will have varying
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/aggregates.out 2013-09-05 10:24:41.048849206 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/aggregates.out 2013-09-05 19:07:44.492384862 +0300
***************
*** 193,198 ****
--- 193,199 ----
(1 row)
SELECT count(four) AS cnt_1000 FROM onek;
+ WARNING: rehashing catalog cache id 0 for pg_aggregate; 33 tups, 16 buckets
cnt_1000
----------
1000
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/matview.out 2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/matview.out 2013-09-05 19:07:51.220385252 +0300
***************
*** 392,397 ****
--- 392,406 ----
(0 rows)
VACUUM ANALYZE;
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets
+ WARNING: rehashing catalog cache id 12 for pg_cast; 513 tups, 256 buckets
+ WARNING: rehashing catalog cache id 5 for pg_amproc; 33 tups, 16 buckets
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 32 for pg_index; 129 tups, 64 buckets
+ WARNING: rehashing catalog cache id 12 for pg_cast; 1025 tups, 512 buckets
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 33 tups, 16 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 513 tups, 256 buckets
SELECT * FROM hogeview WHERE i < 10;
i
---
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/alter_generic.out 2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/alter_generic.out 2013-09-05 19:07:51.296385257 +0300
***************
*** 404,409 ****
--- 404,410 ----
-- Should work. Textbook case of ALTER OPERATOR FAMILY ... ADD OPERATOR with FOR ORDER BY
CREATE OPERATOR FAMILY alt_opf11 USING gist;
ALTER OPERATOR FAMILY alt_opf11 USING gist ADD OPERATOR 1 < (int4, int4) FOR ORDER BY float_ops;
+ WARNING: rehashing catalog cache id 39 for pg_opfamily; 17 tups, 8 buckets
ALTER OPERATOR FAMILY alt_opf11 USING gist DROP OPERATOR 1 (int4, int4);
DROP OPERATOR FAMILY alt_opf11 USING gist;
-- Should fail. btree comparison functions should return INTEGER in ALTER OPERATOR FAMILY ... ADD FUNCTION
***************
*** 514,519 ****
--- 515,521 ----
ALTER TEXT SEARCH DICTIONARY alt_ts_dict3 RENAME TO alt_ts_dict4; -- failed (not owner)
ERROR: must be owner of text search dictionary alt_ts_dict3
ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 RENAME TO alt_ts_dict4; -- OK
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 5 tups, 2 buckets
ALTER TEXT SEARCH DICTIONARY alt_ts_dict3 OWNER TO regtest_alter_user2; -- failed (not owner)
ERROR: must be owner of text search dictionary alt_ts_dict3
ALTER TEXT SEARCH DICTIONARY alt_ts_dict2 OWNER TO regtest_alter_user3; -- failed (no role membership)
***************
*** 545,550 ****
--- 547,553 ----
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 RENAME TO alt_ts_conf2; -- failed (name conflict)
ERROR: text search configuration "alt_ts_conf2" already exists in schema "alt_nsp1"
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 RENAME TO alt_ts_conf3; -- OK
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 5 tups, 2 buckets
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf2 OWNER TO regtest_alter_user2; -- failed (no role membership)
ERROR: must be member of role "regtest_alter_user2"
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf2 OWNER TO regtest_alter_user3; -- OK
***************
*** 585,590 ****
--- 588,594 ----
ALTER TEXT SEARCH TEMPLATE alt_ts_temp1 RENAME TO alt_ts_temp2; -- failed (name conflict)
ERROR: text search template "alt_ts_temp2" already exists in schema "alt_nsp1"
ALTER TEXT SEARCH TEMPLATE alt_ts_temp1 RENAME TO alt_ts_temp3; -- OK
+ WARNING: rehashing catalog cache id 56 for pg_ts_template; 5 tups, 2 buckets
ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- OK
CREATE TEXT SEARCH TEMPLATE alt_ts_temp2 (lexize=dsimple_lexize);
ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- failed (name conflict)
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/rules.out 2013-08-27 18:17:41.242830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/rules.out 2013-09-05 19:07:53.956385411 +0300
***************
*** 1277,1282 ****
--- 1277,1283 ----
-- Check that ruleutils are working
--
SELECT viewname, definition FROM pg_views WHERE schemaname <> 'information_schema' ORDER BY viewname;
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 257 tups, 128 buckets
viewname | definition
---------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
iexit | SELECT ih.name, +
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/event_trigger.out 2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/event_trigger.out 2013-09-05 19:07:54.316385432 +0300
***************
*** 115,120 ****
--- 115,121 ----
CREATE OR REPLACE FUNCTION schema_two.add(int, int) RETURNS int LANGUAGE plpgsql
CALLED ON NULL INPUT
AS $$ BEGIN RETURN coalesce($1,0) + coalesce($2,0); END; $$;
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
CREATE AGGREGATE schema_two.newton
(BASETYPE = int, SFUNC = schema_two.add, STYPE = int);
RESET SESSION AUTHORIZATION;
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/dependency.out 2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/dependency.out 2013-09-05 19:07:55.908385524 +0300
***************
*** 60,65 ****
--- 60,66 ----
GRANT ALL ON deptest1 TO regression_user1 WITH GRANT OPTION;
SET SESSION AUTHORIZATION regression_user1;
CREATE TABLE deptest (a serial primary key, b text);
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
GRANT ALL ON deptest1 TO regression_user2;
RESET SESSION AUTHORIZATION;
\z deptest1
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/tsdicts.out 2013-08-22 17:45:02.717496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/tsdicts.out 2013-09-05 19:07:54.360385434 +0300
***************
*** 197,202 ****
--- 197,205 ----
Synonyms=synonym_sample
);
SELECT ts_lexize('synonym', 'PoStGrEs');
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 5 tups, 2 buckets
+ LINE 1: SELECT ts_lexize('synonym', 'PoStGrEs');
+ ^
ts_lexize
-----------
{pgsql}
***************
*** 223,228 ****
--- 226,235 ----
Dictionary=english_stem
);
SELECT ts_lexize('thesaurus', 'one');
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 9 tups, 4 buckets
+ LINE 1: SELECT ts_lexize('thesaurus', 'one');
+ ^
+ WARNING: rehashing catalog cache id 53 for pg_ts_dict; 5 tups, 2 buckets
ts_lexize
-----------
{1}
***************
*** 259,264 ****
--- 266,272 ----
);
ALTER TEXT SEARCH CONFIGURATION hunspell_tst ALTER MAPPING
REPLACE ispell WITH hunspell;
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 5 tups, 2 buckets
SELECT to_tsvector('hunspell_tst', 'Booking the skies after rebookings for footballklubber from a foot');
to_tsvector
----------------------------------------------------------------------------------------------------
***************
*** 316,321 ****
--- 324,331 ----
ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
asciiword, hword_asciipart, asciihword
WITH synonym, thesaurus, english_stem;
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 9 tups, 4 buckets
+ WARNING: rehashing catalog cache id 51 for pg_ts_config; 5 tups, 2 buckets
SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
to_tsvector
----------------------------------
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/foreign_data.out 2013-08-22 17:45:02.601496724 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/foreign_data.out 2013-09-05 19:07:54.764385458 +0300
***************
*** 561,566 ****
--- 561,567 ----
ERROR: user mapping "foreign_data_user" already exists for server s4
CREATE USER MAPPING FOR public SERVER s4 OPTIONS ("this mapping" 'is public');
CREATE USER MAPPING FOR user SERVER s8 OPTIONS (username 'test', password 'secret'); -- ERROR
+ WARNING: rehashing catalog cache id 29 for pg_foreign_server; 5 tups, 2 buckets
ERROR: invalid option "username"
HINT: Valid options in this context are: user, password
CREATE USER MAPPING FOR user SERVER s8 OPTIONS (user 'test', password 'secret');
***************
*** 570,580 ****
--- 571,583 ----
CREATE USER MAPPING FOR current_user SERVER s5;
CREATE USER MAPPING FOR current_user SERVER s6 OPTIONS (username 'test');
CREATE USER MAPPING FOR current_user SERVER s7; -- ERROR
+ WARNING: rehashing catalog cache id 30 for pg_foreign_server; 5 tups, 2 buckets
ERROR: permission denied for foreign server s7
CREATE USER MAPPING FOR public SERVER s8; -- ERROR
ERROR: must be owner of foreign server s8
RESET ROLE;
ALTER SERVER t1 OWNER TO regress_test_indirect;
+ WARNING: rehashing catalog cache id 29 for pg_foreign_server; 9 tups, 4 buckets
SET ROLE regress_test_role;
CREATE USER MAPPING FOR current_user SERVER t1 OPTIONS (username 'bob', password 'boo');
CREATE USER MAPPING FOR public SERVER t1;
***************
*** 636,641 ****
--- 639,645 ----
DROP USER MAPPING IF EXISTS FOR public SERVER s7;
NOTICE: user mapping "public" does not exist for the server, skipping
CREATE USER MAPPING FOR public SERVER s8;
+ WARNING: rehashing catalog cache id 61 for pg_user_mapping; 5 tups, 2 buckets
SET ROLE regress_test_role;
DROP USER MAPPING FOR public SERVER s8; -- ERROR
ERROR: must be owner of foreign server s8
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/xmlmap_1.out 2013-08-22 17:45:02.721496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/xmlmap.out 2013-09-05 19:07:54.836385462 +0300
***************
*** 96,101 ****
--- 96,105 ----
DETAIL: This functionality requires the server to be built with libxml support.
HINT: You need to rebuild PostgreSQL using --with-libxml.
SELECT schema_to_xmlschema('testxmlschema', false, true, '');
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ CONTEXT: SQL statement "SELECT oid FROM pg_catalog.pg_class WHERE relnamespace = 856104 AND relkind IN ('r', 'm', 'v') AND pg_catalog.has_table_privilege (oid, 'SELECT') ORDER BY relname;"
+ WARNING: rehashing catalog cache id 45 for pg_class; 513 tups, 256 buckets
+ CONTEXT: SQL statement "SELECT oid FROM pg_catalog.pg_class WHERE relnamespace = 856104 AND relkind IN ('r', 'm', 'v') AND pg_catalog.has_table_privilege (oid, 'SELECT') ORDER BY relname;"
ERROR: unsupported XML feature
DETAIL: This functionality requires the server to be built with libxml support.
HINT: You need to rebuild PostgreSQL using --with-libxml.
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/conversion.out 2013-08-22 17:45:02.573496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/conversion.out 2013-09-05 19:08:07.908386221 +0300
***************
*** 160,165 ****
--- 160,166 ----
-- ISO-8859-5 --> WIN1251
SELECT CONVERT('foo', 'ISO-8859-5', 'WIN1251');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 17 tups, 8 buckets
convert
---------
foo
***************
*** 272,277 ****
--- 273,279 ----
-- EUC_TW --> MULE_INTERNAL
SELECT CONVERT('foo', 'EUC_TW', 'MULE_INTERNAL');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 33 tups, 16 buckets
convert
---------
foo
***************
*** 510,515 ****
--- 512,518 ----
-- EUC_TW --> UTF8
SELECT CONVERT('foo', 'EUC_TW', 'UTF8');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 65 tups, 32 buckets
convert
---------
foo
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/alter_table.out 2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/alter_table.out 2013-09-05 19:08:20.604386957 +0300
***************
*** 1746,1751 ****
--- 1746,1752 ----
-- table's row type
create table tab1 (a int, b text);
create table tab2 (x int, y tab1);
+ WARNING: rehashing catalog cache id 58 for pg_type; 129 tups, 64 buckets
alter table tab1 alter column b type varchar; -- fails
ERROR: cannot alter table "tab1" because column "tab2.y" uses its row type
-- disallow recursive containment of row types
***************
*** 2318,2323 ****
--- 2319,2326 ----
FROM pg_class
WHERE relkind IN ('r', 'i', 'S', 't', 'm')
) mapped;
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 513 tups, 256 buckets
incorrectly_mapped | have_mappings
--------------------+---------------
0 | t
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/sequence.out 2013-08-22 17:45:02.713496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/sequence.out 2013-09-05 19:08:08.660386264 +0300
***************
*** 300,305 ****
--- 300,306 ----
('sequence_test2', 'serialtest2_f2_seq', 'serialtest2_f3_seq',
'serialtest2_f4_seq', 'serialtest2_f5_seq', 'serialtest2_f6_seq')
ORDER BY sequence_name ASC;
+ WARNING: rehashing catalog cache id 36 for pg_namespace; 33 tups, 16 buckets
sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option
------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+---------------+---------------------+-----------+--------------
regression | public | sequence_test2 | bigint | 64 | 2 | 0 | 32 | 5 | 36 | 4 | YES
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/xml_1.out 2013-08-22 17:45:02.721496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/xml.out 2013-09-05 19:08:08.268386241 +0300
***************
*** 463,468 ****
--- 463,469 ----
HINT: You need to rebuild PostgreSQL using --with-libxml.
SELECT table_name, view_definition FROM information_schema.views
WHERE table_name LIKE 'xmlview%' ORDER BY 1;
+ WARNING: rehashing catalog cache id 36 for pg_namespace; 33 tups, 16 buckets
table_name | view_definition
------------+--------------------------------------------------------------------------------
xmlview1 | SELECT xmlcomment('test'::text) AS xmlcomment;
======================================================================
On 4 Sep 2013 20:46, "Heikki Linnakangas" <hlinnakangas@vmware.com> wrote:
> One fairly simple thing we could do is to teach catcache.c to resize the
> caches. Then we could make the initial size of all the syscaches much
> smaller. At the moment, we use fairly large caches for catalogs like
> pg_enum (256 entries) and pg_user_mapping (128), even though most
> databases don't use those features at all. If they could be resized on
> demand, we could easily allocate them initially with just, say, 4 entries.
If most databases don't use those features at all (tsparser, enums, etc.),
why not start out with *no* cache and only build one when it's first needed?
This would also mean there's less overhead for implementing new features
that aren't universally used.
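A rough sketch of that idea (hypothetical; CatCacheEnsureBuckets is a
made-up helper, not existing catcache.c code): leave cc_bucket NULL in
InitCatCache and allocate it on first use, so a never-searched cache
costs only the CatCache struct itself.

static void
CatCacheEnsureBuckets(CatCache *cache)
{
	/* Allocate the bucket array lazily, on first search. */
	if (cache->cc_bucket == NULL)
		cache->cc_bucket = (dlist_head *)
			MemoryContextAllocZero(CacheMemoryContext,
								   cache->cc_nbuckets * sizeof(dlist_head));
}

SearchCatCache (and anything else that scans cc_bucket) would call this
before touching the array.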