Bizarre coding in _bt_binsrch
I have been puzzling out the coding in _bt_binsrch() in
backend/access/nbtree/nbtsearch.c, with an eye to speeding it up for
the many-equal-keys case. I have finally worked out exactly what it's
doing, to wit:
* On a leaf page, we always return the first key >= scan key
* (which could be the last slot + 1).
*
* On a non-leaf page, there are special cases:
*
* For an insertion (srchtype != BT_DESCENT and natts == keysz)
* always return first key >= scan key (which could be off the end).
*
* For a standard search (srchtype == BT_DESCENT and natts == keysz)
* return the first equal key if one exists, else the last lesser key
* if one exists, else the first slot on the page.
*
* For a partial-match search (srchtype == BT_DESCENT and natts < keysz)
* return the last lesser key if one exists, else the first slot.
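Restated as pseudo-C, the decision logic amounts to roughly this (a
sketch only; "low", key_at(), first_slot, and last_slot are made-up
names, not the actual code):

    /* low = offset of first key >= scan key, from the binary search;
     * it may point one past the last slot on the page */
    if (P_ISLEAF(opaque))
        return low;                 /* leaf: first key >= scan key */
    if (srchtype != BT_DESCENT)
        return low;                 /* insertion: same rule */
    if (natts == keysz && low <= last_slot && key_at(low) == scankey)
        return low;                 /* standard search: first equal key */
    if (low > first_slot)
        return low - 1;             /* last lesser key */
    return first_slot;              /* else first slot on the page */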
This strikes me as a tad bizarre --- in particular, the discrepancy
between treatment of equal keys in the normal and partial search cases.
I think I understand why the partial-match code works that way: there
could be matching keys in the sub-page belonging to the last lesser key.
For example, if our scan target is (x = 2) and we have internal keys
(x = 1, y = 2)
(x = 2, y = 1)
then we need to look at the first key's subpages, where we might find
matching keys like (x = 2, y = 0).
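To draw that example out (a hypothetical two-level layout, not taken
from a real index):

    internal page:    (x=1,y=2)       (x=2,y=1)
                          |               |
    leaf pages:    [... (1,2) (2,0)]  [(2,1) ...]

(2,0) sorts after (1,2) but before (2,1), so it lives in the left
child, and a partial-match search on x = 2 must descend through the
(x=1,y=2) downlink to find it.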
The full-width case appears to assume that that can't happen: if we
have a given key in an upper page, there can be *no* equal keys in
subpages to its left. That's a rather strong assumption about how
page splitting is done; is it correct?
Even more to the point, *should* it be correct? If we always returned
the last lesser key, then the code would work for any correctly
sequenced b-tree, but this code looks like it works only if splits occur
only at the leftmost of a group of equal keys. If there are a lot of
equal keys, that could result in a badly unbalanced tree, no? Maybe
that's the real reason why performance seems to be so bad for many
equal keys... maybe the split algorithm needs to be relaxed?
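For contrast, the split-independent rule would be just (continuing the
sketch above, same made-up names):

    if (low > first_slot)
        return low - 1;             /* last lesser key, always */
    return first_slot;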
regards, tom lane
Tom Lane wrote:
The full-width case appears to assume that that can't happen: if we
have a given key in an upper page, there can be *no* equal keys in
subpages to its left. That's a rather strong assumption about how
page splitting is done; is it correct?
Even more to the point, *should* it be correct? If we always returned
the last lesser key, then the code would work for any correctly
sequenced b-tree, but this code looks like it works only if splits occur
only at the leftmost of a group of equal keys. If there are a lot of
equal keys, that could result in a badly unbalanced tree, no? Maybe
that's the real reason why performance seems to be so bad for many
equal keys... maybe the split algorithm needs to be relaxed?
Our btrees use the Lehman-Yao algorithm, which works on the assumption
that there are no duplicates at all. Just a reminder.
It was about two years ago that I changed duplicate handling
to fix some rare bugs (which is why you see the BTP_CHAIN stuff
there), and by now I don't remember many of the details, so I can't
comment. But after I changed the btrees I learned how Oracle
handles the duplicates problem: it just uses the heap tuple id
as the (last) part of the index key! So simple! Unfortunately,
I have not had time to re-implement our btrees this way.
But this would:
1. get rid of the duplicates problem;
2. simplify the code (the BTP_CHAIN stuff could be removed);
3. order index tuples so that during a scan the heap pages
   would be read sequentially (from the top of the file down);
4. speed up finding the index tuple that corresponds to a given
   heap tuple (good for index cleanup).
The main problem is just one of programming: you would have to add
the heap tid to the end of index tuples on internal index pages,
but on leaf pages the heap tid is at the beginning of the index tuple
(inside the btitem struct).
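Schematically (an illustrative layout only, not exact struct
definitions):

    leaf item:      [ bti_itup header (t_tid -> heap row) | key attrs ]
    internal item:  [ tuple header | key attrs | heap tid as last key ]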
So, if you're going to change the btree code, please consider the
ability to implement the above.
Vadim
The main problem is just one of programming: you would have to add
the heap tid to the end of index tuples on internal index pages,
but on leaf pages the heap tid is at the beginning of the index tuple
(inside the btitem struct).
While I absolutely like the idea of having the heap tid in the index,
I don't quite agree that leaf pages need the heap tid at the front of
the key. That would leave index access unordered (in terms of the key) :-(
Having it at the front would only give "on disk ordered" fetches while
reading tuples from one leaf page; when reading the next leaf page you
would start from the beginning again.
So I think leaf pages need the heap tid at the end of each key, the
same as on the upper pages.
For performance reasons, a totally standalone "sort to tuple on disk
order" node could be implemented; it could also be handled by the
optimizer, and would be of wider performance use.
Andreas
ZEUGSWETTER Andreas IZ5 wrote:
The main problem is just one of programming: you would have to add
the heap tid to the end of index tuples on internal index pages,
but on leaf pages the heap tid is at the beginning of the index tuple
(inside the btitem struct).
While I absolutely like the idea of having the heap tid in the index,
I don't quite agree that leaf pages need the heap tid at the front of
the key.
Oh no - that is not what I meant to say.
First, there is no heap tid in the index tuples on internal pages,
and so we'll have to add it to them. Actually, it doesn't matter
where it is added - just after btitem->bti_itup (i.e., the header of
the index tuple) or after the key fields - it will be the last key
used in comparisons.
But on leaf pages the index tuples already carry a heap tid - it is
btitem->bti_itup.t_tid - and so we shouldn't add one there.
I just wanted to say that we'll have to differentiate internal/leaf
index tuples in _bt_compare, _bt_binsrch, etc., to know from what
part of the index tuple the heap tid should be fetched.
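In code, the distinction might look something like this (a sketch;
bt_heap_tid and last_key_as_tid are invented names):

    /* fetch the heap tid tie-breaker from the right place, depending
     * on whether the item came from a leaf or an internal page */
    static ItemPointer
    bt_heap_tid(Page page, BTItem item)
    {
        BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);

        if (P_ISLEAF(opaque))
            return &(item->bti_itup.t_tid); /* leaf: tid in the header */
        else
            return last_key_as_tid(item);   /* internal: accessor for the
                                             * appended trailing key */
    }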
Sorry.
Vadim
ZEUGSWETTER Andreas IZ5 wrote:
For performance reasons, a totally standalone "sort to tuple on disk
order" node could be implemented; it could also be handled by the
optimizer, and would be of wider performance use.
We'll get the tuples from an index scan sorted on [key, disk order].
Vadim