Improve hash join's handling of tuples with null join keys
The attached patch is a response to the discussion at [1], where
it emerged that lots of rows with null join keys can send a hash
join into too-many-batches hell, if they are on the outer side
of the join so that they must be null-extended not just discarded.
This isn't really surprising given that such rows will certainly
end up in the same hash bucket, and no amount of splitting can
reduce the size of that bucket. (I'm a bit surprised that the
growEnabled heuristic didn't kick in, but it seems it didn't,
at least not up to several million batches.)
Thinking about that, it occurred to me to wonder why we are putting
null-keyed tuples into the hash table at all. They cannot match
anything, so all we really have to do with them is emit one
null-extended copy. Awhile later I had the attached, which shoves
such rows into a tuplestore that's separate from the hash table
proper, ensuring that they can't bollix our algorithms for when to
grow the hash table. (For tuples coming from the right input, we
need to use a tuplestore in case we're asked to rescan the existing
hashtable. For tuples coming from the left input, we could
theoretically emit 'em and forget 'em immediately, but that'd require
some major code restructuring so I decided to just use a tuplestore
there too.)
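To be clear about the semantics that have to be preserved, here's a toy
example (invented on the spot, not taken from the patch's regression
tests): each null-keyed row on a preserved side must come out exactly
once, null-extended, and a null key matches nothing, not even another
null.
CREATE TEMP TABLE t1 (k int);
CREATE TEMP TABLE t2 (k int);
INSERT INTO t1 VALUES (1), (NULL), (NULL);
INSERT INTO t2 VALUES (1), (NULL);
-- Expect the (1,1) match plus one null-extended row for each of the
-- three NULL keys, i.e. 4 rows in total.
SELECT * FROM t1 FULL JOIN t2 ON t1.k = t2.k ORDER BY 1, 2;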
This passes check-world, and I've extended a couple of existing test
cases to ensure that the new code paths are exercised. I've not done
any real performance testing, though.
[1]: /messages/by-id/18909-e5e1b702c9441b8a@postgresql.org
regards, tom lane
Attachments:
v1-0001-Improve-hash-join-s-handling-of-tuples-with-null-.patch (text/x-diff)
On 5/6/25 01:11, Tom Lane wrote:
The attached patch is a response to the discussion at [1], where
it emerged that lots of rows with null join keys can send a hash
join into too-many-batches hell, if they are on the outer side
of the join so that they must be null-extended not just discarded.
This isn't really surprising given that such rows will certainly
end up in the same hash bucket, and no amount of splitting can
reduce the size of that bucket. (I'm a bit surprised that the
growEnabled heuristic didn't kick in, but it seems it didn't,
at least not up to several million batches.)
I don't think that's too surprising - growEnabled depends on all tuples
getting into the same batch during a split. But even if there are many
duplicate values, real-world data sets often have a couple more tuples
that just happen to fall into that bucket too. And then during the split
some get into one batch and some get into another.
My personal experience is that the growEnabled heuristic is overly
strict, and probably does not trigger very often. It can also trip
too early and disable growth when it shouldn't, but that's (much)
harder to hit.
I have suggested making growEnabled less strict in [2], i.e. to
calculate the threshold as percentage of the batch, and not disable
growth permanently. But it was orthogonal to what that thread did.
But more importantly, wasn't the issue discussed in [1] about parallel
hash joins? I got quite confused while reading the thread ... I'm asking
because growEnabled is checked only in ExecHashIncreaseNumBatches, not
in ExecParallelHashIncreaseNumBatches. So AFAICS the parallel hash joins
don't use growEnabled at all, no?
[2]: /messages/by-id/7bed6c08-72a0-4ab9-a79c-e01fcdd0940f@vondra.me
Thinking about that, it occurred to me to wonder why we are putting
null-keyed tuples into the hash table at all. They cannot match
anything, so all we really have to do with them is emit one
null-extended copy. Awhile later I had the attached, which shoves
such rows into a tuplestore that's separate from the hash table
proper, ensuring that they can't bollix our algorithms for when to
grow the hash table. (For tuples coming from the right input, we
need to use a tuplestore in case we're asked to rescan the existing
hashtable. For tuples coming from the left input, we could
theoretically emit 'em and forget 'em immediately, but that'd require
some major code restructuring so I decided to just use a tuplestore
there too.)
This passes check-world, and I've extended a couple of existing test
cases to ensure that the new code paths are exercised. I've not done
any real performance testing, though.
Are you planning to? If not, I can try to collect some numbers, but I
can't promise that before pgconf.dev.
I'd be surprised if this was a regression; hash table lookups are
not exactly free. And even if it was a minor regression, it'd affect
only cases with many NULL keys, but it improves robustness.
BTW do you consider this to be a bugfix for PG18? Or would it have to
wait for PG19 at this point?
regards
--
Tomas Vondra
Tomas Vondra <tomas@vondra.me> writes:
My personal experience is that the growEnabled heuristics is overly
sensitive, and probably does not trigger very often.
Yeah, it would be good to make it not quite all-or-nothing.
But more importantly, wasn't the issue discussed in [1] about parallel
hash joins?
I'm not clear on that either; it seemed that the OP was able to
trigger it in some non-parallel cases too. But we don't have a
reproducer so I can't say for sure. Building a reproducer would
be a useful exercise for testing this. There might well be some
parallel-specific misbehavior that would be worth ameliorating
independently of this work, in case of a lot of non-null duplicate
keys.
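For concreteness, here's roughly the shape of reproducer I have in
mind (table names, sizes, and settings are all invented here, and
would likely need tuning before the problem actually manifests):
-- Lots of NULL join keys on the side that must be null-extended, plus
-- a sprinkling of real keys so the growEnabled check never fires.
CREATE TABLE nullish (a_id int);
INSERT INTO nullish SELECT NULL FROM generate_series(1, 2000000);
INSERT INTO nullish SELECT g FROM generate_series(1, 1000) g;
CREATE TABLE lookup (id int PRIMARY KEY);
INSERT INTO lookup SELECT g FROM generate_series(1, 4000000) g;
ANALYZE nullish, lookup;
SET work_mem = '1MB';
SET max_parallel_workers_per_gather = 0;   -- keep it non-parallel
-- With luck the planner hashes the smaller, NULL-heavy relation
-- (a Hash Right Join).  The NULL-keyed rows all land in one bucket
-- and must be kept for null-extension, so batch splits can't shrink
-- it and the reported batch count keeps climbing.
EXPLAIN (ANALYZE, COSTS OFF)
SELECT count(*) FROM lookup l RIGHT JOIN nullish n ON l.id = n.a_id;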
This passes check-world, and I've extended a couple of existing test
cases to ensure that the new code paths are exercised. I've not done
any real performance testing, though.
Are you planning to? If not, I can try to collect some numbers, but I
can't promise that before pgconf.dev.
If you have time after the conference, please feel free.
BTW do you consider this to be a bugfix for PG18? Or would it have to
wait for PG19 at this point?
This has been like this forever I suspect --- certainly for as long
as we've had PHJ, and probably longer. So I'm seeing it as new work
for v19, not something we'd attempt to back-patch.
regards, tom lane
On Tue, May 6, 2025 at 12:12 PM Tomas Vondra <tomas@vondra.me> wrote:
On 5/6/25 01:11, Tom Lane wrote:
The attached patch is a response to the discussion at [1], where
it emerged that lots of rows with null join keys can send a hash
join into too-many-batches hell, if they are on the outer side
of the join so that they must be null-extended not just discarded.
This isn't really surprising given that such rows will certainly
end up in the same hash bucket, and no amount of splitting can
reduce the size of that bucket. (I'm a bit surprised that the
growEnabled heuristic didn't kick in, but it seems it didn't,
at least not up to several million batches.)
Good idea. I haven't reviewed it properly, but one observation is
that trapping the null-keys tuples in per-worker tuple stores creates
unfairness. That could be fixed by using a SharedTuplestore instead,
but unfortunately SharedTuplestore always spills to disk at the
moment, so maybe I should think about how to give it some memory for
small sets like regular Tuplestore. Will look more closely after
Montreal.
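For anyone following along, the plan shape in question is a Parallel
Hash Right/Full Join. Assuming the hypothetical nullish/lookup tables
sketched upthread, something like this should coax one out of the
planner, though of course the exact plan isn't guaranteed:
-- settings just to encourage a parallel plan on small test data
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
EXPLAIN (COSTS OFF)
SELECT count(*) FROM lookup l FULL JOIN nullish n ON l.id = n.a_id;
-- Hoped-for plan shape (not guaranteed):
--   Finalize Aggregate
--     ->  Gather
--           ->  Partial Aggregate
--                 ->  Parallel Hash Full Join
--                       Hash Cond: (l.id = n.a_id)
--                       ->  Parallel Seq Scan on lookup l
--                       ->  Parallel Hash
--                             ->  Parallel Seq Scan on nullish n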
I don't think that's too surprising - growEnabled depends on all tuples
getting into the same batch during a split. But even if there are many
duplicate values, real-world data sets often have a couple more tuples
that just happen to fall into that bucket too. And then during the split
some get into one batch and some get into another.
Yeah.
My personal experience is that the growEnabled heuristic is overly
strict, and probably does not trigger very often. It can also trip
too early and disable growth when it shouldn't, but that's (much)
harder to hit.
I have suggested making growEnabled less strict in [2], i.e. to
calculate the threshold as percentage of the batch, and not disable
growth permanently. But it was orthogonal to what that thread did.
+1, I also wrote a thread with a draft patch like that at some point,
which I'll also try to dig up after Montreal.
But more importantly, wasn't the issue discussed in [1] about parallel
hash joins? I got quite confused while reading the thread ... I'm asking
because growEnabled is checked only in ExecHashIncreaseNumBatches, not
in ExecParallelHashIncreaseNumBatches. So AFAICS the parallel hash joins
don't use growEnabled at all, no?
There is an equivalent mechanism, but it's slightly more complicated
as it requires consensus (see code near PHJ_GROW_BATCHES_DECIDE).
It's possible that it's not working quite as well as it should. It's
definitely less deterministic in some edge cases since tuples are
packed into chunks differently so the memory used can vary slightly
run-to-run, but the tuple count should be stable. I've made a note to
review that logic again too.
Note also that v16 is the first release that could put NULLs into a
shared-memory hash table (11c2d6fdf5 enabled Parallel Hash Right|Full
Join). Non-parallel hash join has been able to do that for a long
time, but it couldn't be used inside a parallel query, so I guess it's
possible this is coming up now because, given the plans available in
older releases, the problematic plan shape just wasn't often chosen
for queries generating enough NULLs to exceed the limits. See also
the related bug fix in 98c7c7152, spotted soon after this plan type
escaped into the field.
While thinking about that, I wanted to note that we have more things
to improve in PHRJ:
(1) Parallelism of the unmatched scan: a short but not entirely
satisfying patch was already shared on the PHRJ thread but not
committed with the main feature. I already had some inklings of how
to do much better, which I recently described in a bit more detail on
the PBHS thread in vapourware form, where parallel fairness came up
again. "La perfection est le mortel ennemi du bien" or whatever it is
they say in the language of Montreal, but really the easy patch for
unmatched scan parallelism wasn't actually bon enough, because it was
non-deterministic how many processes could participate due to deadlock
avoidance arcana, creating run-to-run variation that I'd expect Tomáš
to find empirically and reject in one of his benchmarking expeditions
:-).
(2) Bogus asymmetries in estimation/planning: I wrote some analysis of
why we don't use PHRJ as much as we could/should near Richard Guo's
work on anti/semi joins, which went in around the same time. My idea
there is to try to debogify the parallel-degree logic more generally;
it's just that PHRJ brought key aspects of it into relief for me, i.e.
the bogosity of the rule-based "driving table" concept.
I'll try to write these projects up on the wiki, instead of in random
threads :-)
In other words, if you just use local Tuplestores as you showed, it
would actually be an improvement in fairness over the status quo,
because (1) is not solved yet... but it will be solved, hence
mentioning it in this context.
Thomas Munro <thomas.munro@gmail.com> writes:
On Tue, May 6, 2025 at 12:12 PM Tomas Vondra <tomas@vondra.me> wrote:
On 5/6/25 01:11, Tom Lane wrote:
The attached patch is a response to the discussion at [1], where
it emerged that lots of rows with null join keys can send a hash
join into too-many-batches hell, if they are on the outer side
of the join so that they must be null-extended not just discarded.
Good idea. I haven't reviewed it properly, but one observation is
that trapping the null-keys tuples in per-worker tuple stores creates
unfairness. That could be fixed by using a SharedTuplestore instead,
but unfortunately SharedTuplestore always spills to disk at the
moment, so maybe I should think about how to give it some memory for
small sets like regular Tuplestore. Will look more closely after
Montreal.
Hmm ... I'm unpersuaded that "fairness" is an argument for adding
overhead to the processing of these tuples. It's very hard to see
how shoving them into a shared tuplestore can beat not shoving them
into a shared tuplestore. But if you want to poke at that idea,
feel free.
In the meantime, I noticed that my patch was intermittently failing
in CI, and was able to reproduce that locally. It turns out I'd
missed the point that we might accumulate some null-keyed tuples
into local tuplestores during a parallel HJ_BUILD_HASHTABLE step.
Ordinarily that doesn't matter because we'll dump them anyway at
conclusion of the first batch. But with the right timing, we might
collect some tuples and yet, by the time we're ready to process a
batch, there are none left to do. Then the state machine fell out
without ever dumping those tuples. (For some reason this is way
easier to reproduce under FreeBSD than Linux --- scheduler quirk
I guess.)
v2 attached fixes that, and improves some comments.
regards, tom lane
Attachments:
v2-0001-Improve-hash-join-s-handling-of-tuples-with-null-.patch (text/x-diff)
I downloaded the patch and tested all join types: inner, left, right, full, semi and anti. Basically, all my tests passed. However, I didn't test any parallel-query cases.
I have two nit comments:
1. In hashjoin.h, lines 76-78, the added comment says "(In the unlikely but supported case of a non-strict join operator, we treat null keys as normal data.)". But I don't see where the non-strict case is handled. So how does this patch affect non-strict joins?
2. Two new join states are added: HJ_FILL_OUTER_NULL_TUPLES and HJ_FILL_INNER_NULL_TUPLES. There are existing join states HJ_FILL_OUTER_TUPLE and HJ_FILL_INNER_TUPLES, and they all use "FILL". But I think "FILL" in HJ_FILL_OUTER_NULL_TUPLES means something different from what it means in HJ_FILL_OUTER_TUPLE: HJ_FILL_OUTER_TUPLE means that when an outer tuple is returned, it is null-extended on the inner side, while HJ_FILL_OUTER_NULL_TUPLES means returning the outer tuples that have null join keys. I would suggest something like "APPEND" for the new states: HJ_APPEND_OUTER_NULL_TUPLES and HJ_APPEND_INNER_NULL_TUPLES.
The new status of this patch is: Waiting on Author
On Aug 13, 2025, at 17:16, Chao Li <li.evan.chao@gmail.com> wrote:
I downloaded the patch and tested all join types: inner, left, right, full, semi and anti. Basically, all my tests passed. However, I didn't test any parallel-query cases.
I have two nit comments:
1. In hashjoin.h, lines 76-78, the added comment says "(In the unlikely but supported case of a non-strict join operator, we treat null keys as normal data.)". But I don't see where the non-strict case is handled. So how does this patch affect non-strict joins?
I take back this comment, and I get a new comment related.
@@ -1015,11 +1144,19 @@ ExecHashJoinOuterGetTuple(PlanState *outerNode,
if (!isnull)
{
+ /* normal case with a non-null join key */
/* remember outer relation is not empty for possible rescan */
hjstate->hj_OuterNotEmpty = true;
return slot;
}
+ else if (hjstate->hj_KeepNullTuples)
+ {
+ /* null join key, but we must save tuple to be emitted later */
+ if (hjstate->hj_NullOuterTupleStore == NULL)
+ hjstate->hj_NullOuterTupleStore = ExecHashBuildNullTupleStore(hashtable);
+ tuplestore_puttupleslot(hjstate->hj_NullOuterTupleStore, slot);
+ }
Without this patch, when an outer tuple contains a null join key, the "isnull" flag comes back false, so the tuple is still returned, and for an outer join the tuple slot is passed up to the parent node immediately.
With this patch, "isnull" now becomes true because the operator is treated as strict, so the outer null-join-key tuple must be stored in a tuplestore. When the outer table contains a lot of null-join-key tuples, that tuplestore could grow very large, and in that case it would be hard to say the patch is really a benefit.
I am thinking: can we do only the first half of this patch, i.e. only put the inner table's null-join-key tuples into a tuplestore? Then the inner hash table's performance is improved while the outer table's logic stays the same, and overall the patch is a pure improvement without the potential memory burden from hj_NullOuterTupleStore.
—————————
I also got an idea for improving the hash logic.
/*
* If the outer relation is completely empty, and it's not
* right/right-anti/full join, we can quit without building
* the hash table. However, for an inner join it is only a
* win to check this when the outer relation's startup cost is
* less than the projected cost of building the hash table.
* Otherwise it's best to build the hash table first and see
* if the inner relation is empty. (When it's a left join, we
* should always make this check, since we aren't going to be
* able to skip the join on the strength of an empty inner
* relation anyway.)
*/
if (HJ_FILL_INNER(node))
{
/* no chance to not build the hash table */
node->hj_FirstOuterTupleSlot = NULL;
}
Based on this patch, if we are doing a left join and the outer table is empty, then all tuples from the inner table should be returned. In that case, we can skip building a hash table and instead put all of the inner table's tuples into hashtable.innerNullTupleStore. Building a tuplestore should be cheaper than building a hash table, so this would give a little more performance improvement.
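For illustration, here is a toy query of the shape I mean (table names are invented): the probe side is empty, so every row from the hashed side has to be returned null-extended and the hash table is never probed anyway.
CREATE TEMP TABLE empty_probe (id int);
CREATE TEMP TABLE hashed (id int);
INSERT INTO hashed SELECT g FROM generate_series(1, 5) g;
-- All 5 rows of "hashed" come back, null-extended on the empty_probe
-- side; with an empty probe relation the hash table gets no probes.
SELECT * FROM empty_probe e FULL JOIN hashed h ON e.id = h.id;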
Regards,
Chao Li (Evan)
--------------------
HighGo Software Co., Ltd.
https://www.highgo.com/
Chao Li <li.evan.chao@gmail.com> writes:
With this patch, "isnull" now becomes true because the operator is treated as strict, so the outer null-join-key tuple must be stored in a tuplestore. When the outer table contains a lot of null-join-key tuples, that tuplestore could grow very large, and in that case it would be hard to say the patch is really a benefit.
What's your point? If we don't divert those tuples into the
tuplestore, then they will end up in the main hash table instead,
and the consequences of bloat there are far worse.
Based on this patch, if we are doing a left join and the outer table is empty, then all tuples from the inner table should be returned. In that case, we can skip building a hash table and instead put all of the inner table's tuples into hashtable.innerNullTupleStore. Building a tuplestore should be cheaper than building a hash table, so this would give a little more performance improvement.
I think that would make the logic completely unintelligible. Also,
a totally-empty input relation is not a common situation. We try to
optimize such cases when it's simple to do so, but we shouldn't let
that drive the fundamental design.
regards, tom lane
On Aug 16, 2025, at 00:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Chao Li <li.evan.chao@gmail.com> writes:
With this patch, "isnull" now becomes true because the operator is treated as strict, so the outer null-join-key tuple must be stored in a tuplestore. When the outer table contains a lot of null-join-key tuples, that tuplestore could grow very large, and in that case it would be hard to say the patch is really a benefit.
What's your point? If we don't divert those tuples into the
tuplestore, then they will end up in the main hash table instead,
and the consequences of bloat there are far worse.
I might not have stated it clearly. In that comment, I meant the outer table. For example:
SELECT a.*, b.* from a RIGHT JOIN b on a.id = b.a_id;
Let's say table a is used to build the hash table, and table b is the outer (probe) table.
And say table b has 1000 tuples whose a_id is NULL.
Before this patch, when such a tuple (a_id is null) is fetched from table b, it is returned to the parent node immediately.
With this patch, all such tuples are put into hj_NullOuterTupleStore and only returned after all the non-null tuples have been processed.
My comment was trying to say that if there are a lot of null-join-key tuples in the outer table, then hj_NullOuterTupleStore might use a lot of memory or spill data to disk, which might become a performance burden. So I was thinking we could keep the original logic for the outer table and return null-join-key tuples immediately.
Based on this patch, if we are doing a left join and the outer table is empty, then all tuples from the inner table should be returned. In that case, we can skip building a hash table and instead put all of the inner table's tuples into hashtable.innerNullTupleStore. Building a tuplestore should be cheaper than building a hash table, so this would give a little more performance improvement.
I think that would make the logic completely unintelligible. Also,
a totally-empty input relation is not a common situation. We try to
optimize such cases when it's simple to do so, but we shouldn't let
that drive the fundamental design.
I absolutely agree we should not touch the fundamental design for the tiny optimization, that’s why I mentioned “based on this patch”.
With this patch, you have introduced a change in MultiExecPrivateHash():
else if (node->keep_null_tuples)
{
/* null join key, but we must save tuple to be emitted later */
if (node->null_tuple_store == NULL)
node->null_tuple_store = ExecHashBuildNullTupleStore(hashtable);
tuplestore_puttupleslot(node->null_tuple_store, slot);
}
We could simply add a new flag to HashTable, say skip_building_hash. For a right join (joining to the hash side) where the outer table is empty, set the flag to true; then in MultiExecPrivateHash(), if skip_building_hash is true, put all tuples directly into node->null_tuple_store without building a hash table.
Then in ExecHashJoinImpl(), after "(void) MultiExecProcNode()" is called, if hashtable->skip_building_hash is true, set node->hj_JoinState = HJ_FILL_INNER_NULL_TUPLES directly.
So the tiny optimization is entirely based on this patch: it depends on HashTable.null_tuple_store (if you take this suggestion, maybe rename that variable) and on the new state HJ_FILL_INNER_NULL_TUPLES.
Best regards,
==
Chao Li (Evan)
--------------------
HighGo Software Co., Ltd.
https://www.highgo.com/
Chao Li <li.evan.chao@gmail.com> writes:
My comment was trying to say that if there are a lot of null-join-key tuples in the outer table, then hj_NullOuterTupleStore might use a lot of memory or spill data to disk, which might become a performance burden. So I was thinking we could keep the original logic for the outer table and return null-join-key tuples immediately.
I don't think that works for the parallel-hash-join case, at least not
for the multi-batch code path. That path insists on putting every
potentially-outputtable tuple into some batch's shared tuplestore, cf
ExecParallelHashJoinPartitionOuter. We can make that function put
the tuple into a different tuplestore instead, but I think it's quite
unreasonable to think of returning the tuple immediately from there.
It certainly wouldn't be "keeping the original logic".
Yeah, we could make multi-batch PHJ do this differently from the other
cases, but I don't want to go there: too much complication and risk of
bugs for what is a purely hypothetical performance issue. Besides
which, if the join is large enough to be worth worrying over, it's
most likely taking that code path anyhow.
We could simply add a new flag to HashTable, say skip_building_hash. For a right join (joining to the hash side) where the outer table is empty, set the flag to true; then in MultiExecPrivateHash(), if skip_building_hash is true, put all tuples directly into node->null_tuple_store without building a hash table.
Then in ExecHashJoinImpl(), after "(void) MultiExecProcNode()" is called, if hashtable->skip_building_hash is true, set node->hj_JoinState = HJ_FILL_INNER_NULL_TUPLES directly.
I'm not excited about this idea either. It's completely abusing the
data structure, because the "null_tuple_store" is now being used for
tuples that (probably) don't have null join keys. The fact that you
could cram it in with not very many lines of code does not mean that
the result will be understandable or maintainable --- and certainly,
hash join is on the hairy edge of being too complicated already.
regards, tom lane
On Aug 19, 2025, at 05:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Yeah, we could make multi-batch PHJ do this differently from the other
cases, but I don't want to go there: too much complication and risk of
bugs for what is a purely hypothetical performance issue. Besides
which, if the join is large enough to be worth worrying over, it's
most likely taking that code path anyhow.
We could simply add a new flag to HashTable, say skip_building_hash. For a right join (joining to the hash side) where the outer table is empty, set the flag to true; then in MultiExecPrivateHash(), if skip_building_hash is true, put all tuples directly into node->null_tuple_store without building a hash table.
Then in ExecHashJoinImpl(), after "(void) MultiExecProcNode()" is called, if hashtable->skip_building_hash is true, set node->hj_JoinState = HJ_FILL_INNER_NULL_TUPLES directly.
I'm not excited about this idea either. It's completely abusing the
data structure, because the "null_tuple_store" is now being used for
tuples that (probably) don't have null join keys. The fact that you
could cram it in with not very many lines of code does not mean that
the result will be understandable or maintainable --- and certainly,
hash join is on the hairy edge of being too complicated already.
regards, tom lane
Thanks for the explanation. Then these two comments are resolved.
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/
Bug #19030 [1] seems to be a fresh report of the problem this patch
aims to solve. While answering that, I realized that the v2 patch
causes null-keyed inner rows to not be included in EXPLAIN ANALYZE's
report of the number of rows output by the Hash node. Now on the
one hand, what it's reporting is an accurate reflection of the
number of rows in the hash table, which perhaps is useful. On the
other hand, it's almost surely going to confuse users, and it's
different from the number we produced before. Should we try to
preserve the old behavior here? (I've not looked at what code
changes would be needed for that.)
[1]: /messages/by-id/19030-944dd78d7ef94c0f@postgresql.org
regards, tom lane
Tom Lane wrote:
Bug #19030 [1] seems to be a fresh report of the problem this patch
aims to solve.
I can confirm that the patch fixes the issue (Bug #19030). The memory usage remains within the expected range of work_mem.
This also applies to parallel hash joins.
The query also runs significantly faster.
I also tested cases with multiple left joins.
I have only observed this problem when there are many null values in the join column.
regards
Marc-Olaf Jaschke
Marc-Olaf Jaschke <moj@dshare.de> writes:
I can confirm that the patch fixes the issue (Bug #19030). The memory usage remains within the expected range of work_mem.
This also applies to parallel hash joins.
The query also runs significantly faster.
I also tested cases with multiple left joins.
I have only observed this problem when there are many null values in the join column.
Thanks for testing!
regards, tom lane
I wrote:
Bug #19030 [1] seems to be a fresh report of the problem this patch
aims to solve. While answering that, I realized that the v2 patch
causes null-keyed inner rows to not be included in EXPLAIN ANALYZE's
report of the number of rows output by the Hash node. Now on the
one hand, what it's reporting is an accurate reflection of the
number of rows in the hash table, which perhaps is useful. On the
other hand, it's almost surely going to confuse users, and it's
different from the number we produced before. Should we try to
preserve the old behavior here? (I've not looked at what code
changes would be needed for that.)
I got around to looking at that finally. It's not terribly difficult
to fix, but while figuring out which counters were used for what,
I noticed a pre-existing bug: when ExecHashRemoveNextSkewBucket moves
tuples into the main hash table from the skew hash table, it fails to
adjust hashtable->skewTuples, meaning that subsequent executions of
ExecHashTableInsert will have the wrong idea of how many tuples are in
the main table. The error is probably not very large because the
skew table is not supposed to be big relative to the main table,
but still, it's wrong. So I tried to clean that up here.
0001 attached is the same patch as before (brought up to HEAD, but
only line numbers change). 0002 is the new code to fix these
tuple-counting issues.
regards, tom lane
Attachments:
v3-0001-Improve-hash-join-s-handling-of-tuples-with-null-.patch (text/x-diff)
v3-0002-Fix-tuple-counting-issues-in-hash-joins.patch (text/x-diff)
On Tue, Mar 3, 2026, at 21:58, Tom Lane wrote:
I wrote:
Bug #19030 [1] seems to be a fresh report of the problem this patch
aims to solve. While answering that, I realized that the v2 patch
causes null-keyed inner rows to not be included in EXPLAIN ANALYZE's
report of the number of rows output by the Hash node. Now on the
one hand, what it's reporting is an accurate reflection of the
number of rows in the hash table, which perhaps is useful. On the
other hand, it's almost surely going to confuse users, and it's
different from the number we produced before. Should we try to
preserve the old behavior here? (I've not looked at what code
changes would be needed for that.)
I got around to looking at that finally. It's not terribly difficult
to fix, but while figuring out which counters were used for what,
I noticed a pre-existing bug: when ExecHashRemoveNextSkewBucket moves
tuples into the main hash table from the skew hash table, it fails to
adjust hashtable->skewTuples, meaning that subsequent executions of
ExecHashTableInsert will have the wrong idea of how many tuples are in
the main table. The error is probably not very large because the
skew table is not supposed to be big relative to the main table,
but still, it's wrong. So I tried to clean that up here.
0001 attached is the same patch as before (brought up to HEAD, but
only line numbers change). 0002 is the new code to fix these
tuple-counting issues.
regards, tom lane
I've tested v3-0001 and v3-0002 and can confirm the bug introduced
in v3-0001 is fixed in v3-0002:
% cat explain-analyze-problem.sql
CREATE TABLE ea_hash (id int);
INSERT INTO ea_hash SELECT g FROM generate_series(1, 10) g;
INSERT INTO ea_hash SELECT NULL FROM generate_series(1, 90);
ANALYZE ea_hash;
CREATE TABLE ea_probe (id int);
INSERT INTO ea_probe SELECT (g % 10) + 1 FROM generate_series(1, 10000) g;
ANALYZE ea_probe;
SET enable_nestloop = off;
SET enable_mergejoin = off;
EXPLAIN (COSTS OFF, ANALYZE, TIMING OFF, BUFFERS OFF, SUMMARY OFF)
SELECT count(*) FROM ea_probe FULL OUTER JOIN ea_hash ON ea_probe.id = ea_hash.id;
EXPLAIN (COSTS OFF, ANALYZE, TIMING OFF, BUFFERS OFF, SUMMARY OFF)
SELECT count(*) FROM ea_probe RIGHT OUTER JOIN ea_hash ON ea_probe.id = ea_hash.id;
% git diff --no-index master.out v3-0001.out
diff --git a/master.out b/v3-0001.out
index 1e05e7e39a6..54210c49757 100644
--- a/master.out
+++ b/v3-0001.out
@@ -17,8 +17,8 @@ SET
-> Hash Full Join (actual rows=10090.00 loops=1)
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
- -> Hash (actual rows=100.00 loops=1)
- Buckets: 1024 Batches: 1 Memory Usage: 12kB
+ -> Hash (actual rows=10.00 loops=1)
+ Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
@@ -29,8 +29,8 @@ SET
-> Hash Right Join (actual rows=10090.00 loops=1)
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
- -> Hash (actual rows=100.00 loops=1)
- Buckets: 1024 Batches: 1 Memory Usage: 12kB
+ -> Hash (actual rows=10.00 loops=1)
+ Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
% git diff --no-index v3-0001.out v3-0002.out
diff --git a/v3-0001.out b/v3-0002.out
index 54210c49757..17dfe335b0b 100644
--- a/v3-0001.out
+++ b/v3-0002.out
@@ -17,7 +17,7 @@ SET
-> Hash Full Join (actual rows=10090.00 loops=1)
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
- -> Hash (actual rows=10.00 loops=1)
+ -> Hash (actual rows=100.00 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
@@ -29,7 +29,7 @@ SET
-> Hash Right Join (actual rows=10090.00 loops=1)
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
- -> Hash (actual rows=10.00 loops=1)
+ -> Hash (actual rows=100.00 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
% git diff --no-index master.out v3-0002.out
diff --git a/master.out b/v3-0002.out
index 1e05e7e39a6..17dfe335b0b 100644
--- a/master.out
+++ b/v3-0002.out
@@ -18,7 +18,7 @@ SET
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
-> Hash (actual rows=100.00 loops=1)
- Buckets: 1024 Batches: 1 Memory Usage: 12kB
+ Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
@@ -30,7 +30,7 @@ SET
Hash Cond: (ea_probe.id = ea_hash.id)
-> Seq Scan on ea_probe (actual rows=10000.00 loops=1)
-> Hash (actual rows=100.00 loops=1)
- Buckets: 1024 Batches: 1 Memory Usage: 12kB
+ Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on ea_hash (actual rows=100.00 loops=1)
(7 rows)
/Joel