weird hash plan cost, starting with pg10
Hello
While messing with EXPLAIN on a query emitted by pg_dump, I noticed that
current Postgres 10 emits weird bucket/batch/memory values for certain
hash nodes:
-> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)
Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB
-> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)
It shows normal values in 9.6.
The complete query is:
SELECT c.tableoid, c.oid, c.relname, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(c.relacl,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::"char",c.relowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::"char",c.relowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS relacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::"char",c.relowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(c.relacl,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::"char",c.relowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) as rrelacl, NULL AS initrelacl, NULL as initrrelacl, c.relkind, c.relnamespace, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = c.relowner) AS rolname, c.relchecks, c.relhastriggers, c.relhasindex, c.relhasrules, 'f'::bool AS relhasoids, c.relrowsecurity, c.relforcerowsecurity, c.relfrozenxid, c.relminmxid, tc.oid AS toid, tc.relfrozenxid AS tfrozenxid, tc.relminmxid AS tminmxid, c.relpersistence, c.relispopulated, c.relreplident, c.relpages, am.amname, CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, d.refobjid AS owning_tab, d.refobjsubid AS owning_col, (SELECT spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace) AS reltablespace, array_remove(array_remove(c.reloptions,'check_option=local'),'check_option=cascaded') AS reloptions, CASE WHEN 'check_option=local' = ANY (c.reloptions) THEN 'LOCAL'::text WHEN 'check_option=cascaded' = ANY (c.reloptions) THEN 'CASCADED'::text ELSE NULL END AS checkoption, tc.reloptions AS toast_reloptions, c.relkind = 'S' AND EXISTS 
(SELECT 1 FROM pg_depend WHERE classid = 'pg_class'::regclass AND objid = c.oid AND objsubid = 0 AND refclassid = 'pg_class'::regclass AND deptype = 'i') AS is_identity_sequence, EXISTS (SELECT 1 FROM pg_attribute at LEFT JOIN pg_init_privs pip ON (c.oid = pip.objoid AND pip.classoid = 'pg_class'::regclass AND pip.objsubid = at.attnum)WHERE at.attrelid = c.oid AND ((SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(at.attacl,pg_catalog.acldefault('c',c.relowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('c',c.relowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) IS NOT NULL OR (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('c',c.relowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(at.attacl,pg_catalog.acldefault('c',c.relowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) IS NOT NULL OR NULL IS NOT NULL OR NULL IS NOT NULL))AS changed_acl, pg_get_partkeydef(c.oid) AS partkeydef, c.relispartition AS ispartition, pg_get_expr(c.relpartbound, c.oid) AS partbound FROM pg_class c LEFT JOIN pg_depend d ON (c.relkind = 'S' AND d.classid = c.tableoid AND d.objid = c.oid AND d.objsubid = 0 AND d.refclassid = c.tableoid AND d.deptype IN ('a', 'i')) LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid AND c.relkind <> 'p') LEFT JOIN pg_am am ON (c.relam = am.oid) LEFT JOIN pg_init_privs pip ON (c.oid = pip.objoid AND pip.classoid = 'pg_class'::regclass AND pip.objsubid = 0) WHERE c.relkind in ('r', 'S', 'v', 'c', 'm', 'f', 'p') ORDER BY c.oid
I'm not looking into this right now. If somebody is bored in
quarantine, they might have a good time bisecting this.
--
Álvaro Herrera
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> While messing with EXPLAIN on a query emitted by pg_dump, I noticed that
> current Postgres 10 emits weird bucket/batch/memory values for certain
> hash nodes:
> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)
>      Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB
> -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)

Looks suspiciously like uninitialized memory ...

> The complete query is:

Reproduces here, though oddly only a couple of the several hash subplans
are doing that.

I'm not planning to dig into it right this second either.

regards, tom lane
On Tue, Mar 24, 2020 at 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> > While messing with EXPLAIN on a query emitted by pg_dump, I noticed that
> > current Postgres 10 emits weird bucket/batch/memory values for certain
> > hash nodes:
> > -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)
> >      Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB
> > -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)
>
> Looks suspiciously like uninitialized memory ...

I think "hashtable" might have been pfree'd before
ExecHashGetInstrumentation() ran, because those numbers look like
CLOBBER_FREED_MEMORY's pattern:

hex(2139062143)
'0x7f7f7f7f'
hex(8971876904722400 / 1024)
'0x7f7f7f7f7f7'

Maybe there is something wrong with the shutdown order of nested subplans.
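Spelling out that pattern check: CLOBBER_FREED_MEMORY fills freed palloc chunks with 0x7f bytes, so a 32-bit int read from freed memory comes back as 0x7f7f7f7f, and a 64-bit Size field, rounded up to whole kB the way EXPLAIN prints the memory figure, gives exactly the reported number. A standalone Python sanity check (not from the thread; the round-up-to-kB step is an assumption about EXPLAIN's arithmetic):

```python
# CLOBBER_FREED_MEMORY fills freed palloc chunks with 0x7f bytes.
nbuckets = int.from_bytes(b"\x7f" * 4, "little")     # 32-bit int field
assert nbuckets == 2139062143                        # the Buckets value
assert hex(nbuckets) == "0x7f7f7f7f"

space_peak = int.from_bytes(b"\x7f" * 8, "little")   # 64-bit Size field
# assuming EXPLAIN rounds bytes up to whole kB: (bytes + 1023) // 1024
assert (space_peak + 1023) // 1024 == 8971876904722400   # the kB figure
```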
On Tue, Mar 24, 2020 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> > Looks suspiciously like uninitialized memory ...
>
> I think "hashtable" might have been pfree'd before
> ExecHashGetInstrumentation() ran, because those numbers look like
> CLOBBER_FREED_MEMORY's pattern:
>
> hex(2139062143)
> '0x7f7f7f7f'
> hex(8971876904722400 / 1024)
> '0x7f7f7f7f7f7'
>
> Maybe there is something wrong with the shutdown order of nested subplans.
I think there might be a case like this:
* ExecRescanHashJoin() decides it can't reuse the hash table for a
rescan, so it calls ExecHashTableDestroy(), clears HashJoinState's
hj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE
* the HashState node still has a reference to the pfree'd HashJoinTable!
* HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation case so
it doesn't bother to build a new hash table
* EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct
You could fix the dangling pointer problem by clearing it, but then
you'd have no data for EXPLAIN to show in this case. Some other
solution is probably needed, but I didn't have time to dig further
today.
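The suspected sequence can be modeled outside Postgres with toy stand-ins for the executor structs (illustrative Python only; these classes are not Postgres APIs, just a sketch of the shared-pointer bookkeeping):

```python
class HashJoinTable:
    """Toy stand-in for the executor's hash table."""
    def __init__(self):
        self.freed = False       # models pfree + CLOBBER_FREED_MEMORY

class HashState:
    """Toy stand-in for the child Hash node."""
    def __init__(self):
        self.hashtable = None

class HashJoinState:
    """Toy stand-in for the parent Hash Join node."""
    def __init__(self):
        self.hj_HashTable = None

# Build: both nodes end up pointing at the same table.
hash_state, join_state = HashState(), HashJoinState()
table = HashJoinTable()
hash_state.hashtable = table
join_state.hj_HashTable = table

# Rescan decides it can't reuse the table: destroy it and clear only
# the join node's pointer (mirroring ExecReScanHashJoin before the fix).
join_state.hj_HashTable.freed = True
join_state.hj_HashTable = None

# The empty-outer optimisation means no new table is ever built, so
# EXPLAIN later reads instrumentation through the child's stale pointer.
assert hash_state.hashtable is table and table.freed
```

In C the last step is a read of pfree'd memory, which is where the 0x7f garbage comes from.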
On Mon, Mar 23, 2020 at 01:50:59PM -0300, Alvaro Herrera wrote:
> While messing with EXPLAIN on a query emitted by pg_dump, I noticed that
> current Postgres 10 emits weird bucket/batch/memory values for certain
> hash nodes:
> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)
>      Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB
> -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)
> It shows normal values in 9.6.
Your message wasn't totally clear, but this is a live bug on 13dev.
It's actually broken on 9.6, but the issue isn't exposed until commit
6f236e1eb: "psql: Add tab completion for logical replication",
..which adds a nondefault ACL.
I reproduced the problem with this recipe, which doesn't depend on
c.relispartition or pg_get_partkeydef, nor on everything else shifting underfoot:
|CREATE TABLE t (i int);
|REVOKE ALL ON t FROM pryzbyj;
|explain analyze SELECT (SELECT 1 FROM (SELECT * FROM unnest(c.relacl) AS acl WHERE NOT EXISTS (SELECT 1 FROM unnest(c.relacl) AS init(init_acl) WHERE acl = init_acl)) as foo) AS relacl, EXISTS (SELECT 1 FROM pg_depend WHERE objid = c.oid) FROM pg_class c ORDER BY c.oid;
| Index Scan using pg_class_oid_index on pg_class c (cost=0.27..4704.25 rows=333 width=9) (actual time=16.257..28.054 rows=334 loops=1)
| SubPlan 1
| -> Hash Anti Join (cost=2.25..3.63 rows=1 width=4) (actual time=0.024..0.024 rows=0 loops=334)
| Hash Cond: (acl.acl = init.init_acl)
| -> Function Scan on unnest acl (cost=0.00..1.00 rows=100 width=12) (actual time=0.007..0.007 rows=1 loops=334)
| -> Hash (cost=1.00..1.00 rows=100 width=12) (actual time=0.015..0.015 rows=2 loops=179)
| Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB
| -> Function Scan on unnest init (cost=0.00..1.00 rows=100 width=12) (actual time=0.009..0.010 rows=2 loops=179)
| SubPlan 2
| -> Seq Scan on pg_depend (cost=0.00..144.21 rows=14 width=0) (never executed)
| Filter: (objid = c.oid)
| SubPlan 3
| -> Seq Scan on pg_depend pg_depend_1 (cost=0.00..126.17 rows=7217 width=4) (actual time=0.035..6.270 rows=7220 loops=1)
When I finally gave up on thinking I knew what branch was broken, I got:
|3fc6e2d7f5b652b417fa6937c34de2438d60fa9f is the first bad commit
|commit 3fc6e2d7f5b652b417fa6937c34de2438d60fa9f
|Author: Tom Lane <tgl@sss.pgh.pa.us>
|Date: Mon Mar 7 15:58:22 2016 -0500
|
| Make the upper part of the planner work by generating and comparing Paths.
--
Justin
On Tue, Mar 24, 2020 at 11:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> I think there might be a case like this:
>
> * ExecRescanHashJoin() decides it can't reuse the hash table for a
>   rescan, so it calls ExecHashTableDestroy(), clears HashJoinState's
>   hj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE
> * the HashState node still has a reference to the pfree'd HashJoinTable!
> * HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation case so
>   it doesn't bother to build a new hash table
> * EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct
Yes, debugging with gdb shows this is exactly what happens.
Thanks
Richard
On Tue, Mar 24, 2020 at 3:36 PM Richard Guo <guofenglinux@gmail.com> wrote:
> On Tue, Mar 24, 2020 at 11:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> > I think there might be a case like this:
> > [...]
> > * EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct
>
> Yes, debugging with gdb shows this is exactly what happens.
According to the scenario above, here is a recipe that reproduces this
issue.
-- recipe start
create table a(i int, j int);
create table b(i int, j int);
create table c(i int, j int);
insert into a select 3,3;
insert into a select 2,2;
insert into a select 1,1;
insert into b select 3,3;
insert into c select 0,0;
analyze a;
analyze b;
analyze c;
set enable_nestloop to off;
set enable_mergejoin to off;
explain analyze
select exists(select * from b join c on a.i > c.i and a.i = b.i and b.j =
c.j) from a;
-- recipe end
I tried this recipe on different PostgreSQL versions, starting from
current master and going backwards. I was able to reproduce this issue
on all versions above 8.4. In 8.4 version, we do not output information
on hash buckets/batches. But manual inspection with gdb shows in 8.4 we
also have the dangling pointer for HashState->hashtable. I didn't check
versions below 8.4 though.
Thanks
Richard
On 25.03.2020 13:36, Richard Guo wrote:
> According to the scenario above, here is a recipe that reproduces this
> issue.
> [...]
> I tried this recipe on different PostgreSQL versions, starting from
> current master and going backwards. I was able to reproduce this issue
> on all versions above 8.4. In 8.4 version, we do not output information
> on hash buckets/batches. But manual inspection with gdb shows in 8.4 we
> also have the dangling pointer for HashState->hashtable. I didn't check
> versions below 8.4 though.
I can propose the following patch for the problem.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments:
hash_join_instrumentation.patch (text/x-patch, +11/-1)
Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:
> On 25.03.2020 13:36, Richard Guo wrote:
> > I tried this recipe on different PostgreSQL versions, starting from
> > current master and going backwards. I was able to reproduce this issue
> > on all versions above 8.4. In 8.4 version, we do not output information
> > on hash buckets/batches. But manual inspection with gdb shows in 8.4 we
> > also have the dangling pointer for HashState->hashtable. I didn't check
> > versions below 8.4 though.

> I can propose the following patch for the problem.
I looked at this patch a bit, and I don't think it goes far enough.
What this issue is really pointing out is that EXPLAIN is not considering
the possibility of a Hash node having had several hashtable instantiations
over its lifespan. I propose what we do about that is generalize the
policy that show_hash_info() is already implementing (in a rather half
baked way) for multiple workers, and report the maximum field values
across all instantiations. We can combine the code needed to do so
with the code for the parallelism case, as shown in the 0001 patch
below.
In principle we could probably get away with back-patching 0001,
at least into branches that already have the HashState.hinstrument
pointer. I'm not sure it's worth any risk though. A much simpler
fix is to make sure we clear the dangling hashtable pointer, as in
0002 below (a simplified form of Konstantin's patch). The net
effect of that is that in the case where a hash table is destroyed
and never rebuilt, EXPLAIN ANALYZE would report no hash stats,
rather than possibly-garbage stats like it does today. That's
probably good enough, because it should be an uncommon corner case.
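As a sketch of that accumulate-the-maximum policy (Python pseudocode, not the 0001 patch itself; the field names only approximate the executor's HashInstrumentation struct):

```python
from dataclasses import dataclass

@dataclass
class HashInstrumentation:
    nbuckets: int = 0        # max buckets seen in any instantiation
    nbatch: int = 0          # max batches seen in any instantiation
    space_peak: int = 0      # max memory used by any instantiation

def accum_instrumentation(instr, hashtable):
    # Fold one hash table instantiation into the saved stats by taking
    # field-wise maxima -- the same rule used to merge per-worker stats.
    instr.nbuckets = max(instr.nbuckets, hashtable.nbuckets)
    instr.nbatch = max(instr.nbatch, hashtable.nbatch)
    instr.space_peak = max(instr.space_peak, hashtable.space_peak)

# Rescan path: accumulate before destroying each table, so EXPLAIN can
# report from the saved struct instead of chasing a freed pointer.
saved = HashInstrumentation()
for table in (HashInstrumentation(nbuckets=1024, nbatch=2, space_peak=4096),
              HashInstrumentation(nbuckets=256, nbatch=8, space_peak=1024)):
    accum_instrumentation(saved, table)
assert saved == HashInstrumentation(nbuckets=1024, nbatch=8, space_peak=4096)
```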
Thoughts?
regards, tom lane
On Sat, Apr 11, 2020 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I looked at this patch a bit, and I don't think it goes far enough.
> What this issue is really pointing out is that EXPLAIN is not considering
> the possibility of a Hash node having had several hashtable instantiations
> over its lifespan. I propose what we do about that is generalize the
> policy that show_hash_info() is already implementing (in a rather half
> baked way) for multiple workers, and report the maximum field values
> across all instantiations. We can combine the code needed to do so
> with the code for the parallelism case, as shown in the 0001 patch
> below.
I looked through 0001 patch and it looks good to me.
At first I was wondering if we need to check whether HashState.hashtable
is not NULL in ExecShutdownHash() before we decide to allocate save
space for HashState.hinstrument. And then I convinced myself that that's
not necessary since HashState.hinstrument and HashState.hashtable cannot
be both NULL there.
> In principle we could probably get away with back-patching 0001,
> at least into branches that already have the HashState.hinstrument
> pointer. I'm not sure it's worth any risk though. A much simpler
> fix is to make sure we clear the dangling hashtable pointer, as in
> 0002 below (a simplified form of Konstantin's patch). The net
> effect of that is that in the case where a hash table is destroyed
> and never rebuilt, EXPLAIN ANALYZE would report no hash stats,
> rather than possibly-garbage stats like it does today. That's
> probably good enough, because it should be an uncommon corner case.
Yes it's an uncommon corner case. But I think it may still surprise
people that most of the time the hash stat shows well but sometimes it
does not.
Thanks
Richard
Richard Guo <guofenglinux@gmail.com> writes:
> At first I was wondering if we need to check whether HashState.hashtable
> is not NULL in ExecShutdownHash() before we decide to allocate save
> space for HashState.hinstrument. And then I convinced myself that that's
> not necessary since HashState.hinstrument and HashState.hashtable cannot
> be both NULL there.
Even if the hashtable is null at that point, creating an all-zeroes
hinstrument struct is harmless.
regards, tom lane
On Mon, Apr 13, 2020 at 9:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Richard Guo <guofenglinux@gmail.com> writes:
> > At first I was wondering if we need to check whether HashState.hashtable
> > is not NULL in ExecShutdownHash() before we decide to allocate save
> > space for HashState.hinstrument. And then I convinced myself that that's
> > not necessary since HashState.hinstrument and HashState.hashtable cannot
> > be both NULL there.
>
> Even if the hashtable is null at that point, creating an all-zeroes
> hinstrument struct is harmless.
Correct. The only benefit we may get from checking if the hashtable is
null is to avoid an unnecessary palloc0 for hinstrument. But that case
cannot happen though.
Thanks
Richard
On Fri, Apr 10, 2020 at 04:11:27PM -0400, Tom Lane wrote:
> I'm not sure it's worth any risk though. A much simpler
> fix is to make sure we clear the dangling hashtable pointer, as in
> 0002 below (a simplified form of Konstantin's patch). The net
> effect of that is that in the case where a hash table is destroyed
> and never rebuilt, EXPLAIN ANALYZE would report no hash stats,
> rather than possibly-garbage stats like it does today. That's
> probably good enough, because it should be an uncommon corner case.
>
> Thoughts?
Checking if you're planning to backpatch this ?
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index c901a80..9e28ddd 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -1336,6 +1336,12 @@ ExecReScanHashJoin(HashJoinState *node)
 	else
 	{
 		/* must destroy and rebuild hash table */
+		HashState  *hashNode = castNode(HashState, innerPlanState(node));
+
+		/* for safety, be sure to clear child plan node's pointer too */
+		Assert(hashNode->hashtable == node->hj_HashTable);
+		hashNode->hashtable = NULL;
+
 		ExecHashTableDestroy(node->hj_HashTable);
 		node->hj_HashTable = NULL;
 		node->hj_JoinState = HJ_BUILD_HASHTABLE;
--
Justin
Justin Pryzby <pryzby@telsasoft.com> writes:
> Checking if you're planning to backpatch this ?

Are you speaking of 5c27bce7f et al?
regards, tom lane
On Mon, Apr 27, 2020 at 12:26:03PM -0400, Tom Lane wrote:
> Justin Pryzby <pryzby@telsasoft.com> writes:
> > Checking if you're planning to backpatch this ?
>
> Are you speaking of 5c27bce7f et al?
Oops, yes, thanks.
I updated wiki/PostgreSQL_13_Open_Items just now.
--
Justin