hash index improving v3
There are minor changes against the previous one
( http://archives.postgresql.org/pgsql-hackers/2008-07/msg01183.php ):
* merged branch master (Aug 16) into the patch
* cleaned up the code and added some comments
Performance results are here:
http://wiki.postgresql.org/wiki/Gsoc08-hashindex
It seems the hash index is a little better at index creation and selection,
but it may be within the range of noise; I'm not sure.
I'd like to try it with a bigger dataset (e.g. a 10 GB table), but there is
not enough space on my computer.
Anyone interested can run a test on a bigger data set.
--
Best Regards,
Xiao Meng
DKERC, Harbin Institute of Technology, China
Gtalk: mx.cogito@gmail.com
MSN: cnEnder@live.com
http://xiaomeng.yo2.cn
Attachments:
hash-v3.patch (text/x-diff, +343 -189)
With the help of David Fetter, you can get a copy with
git clone http://git.postgresql.org/git/~davidfetter/hash/.git
It's in the branch gsoc-hash.
Thank you, David.
On Mon, 2008-08-18 at 09:46 +0800, Xiao Meng wrote:
There's minor change against the previous
one( http://archives.postgresql.org/pgsql-hackers/2008-07/msg01183.php ).
* merge branch master(Aug 16) into the patch
* clean code and make some comment
Performance result is here
http://wiki.postgresql.org/wiki/Gsoc08-hashindex
It seems hash index is a little better on index creation and
selection.
But maybe it's in the range of noise, I'm not sure.
I'd like to try it with a bigger dataset (e.g. table with 10GB) but
there is not enough space in my computer.
Anyone interested can make a test on a bigger data set.
You don't give the text of the query used to do these performance tests,
so I can't validate your test results.
Right now it seems strange that the index is larger than a btree, yet
the performance tests show that 3 times as much I/O was used accessing
the btree.
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
On Wed, Sep 3, 2008 at 10:06 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
It seems hash index is a little better on index creation and
selection.
But maybe it's in the range of noise, I'm not sure.
I'd like to try it with a bigger dataset (e.g. table with 10GB) but
there is not enough space in my computer.
Anyone interested can make a test on a bigger data set.
I tried it earlier on a 500M row table and found a few bugs. In
particular, it doesn't seem like recheck is happening and the
performance/sizing is a bit *interesting*. I'll post stats tomorrow
when I'm in the office.
--
Jonah H. Harris, Senior DBA
myYearbook.com
Simon Riggs <simon@2ndQuadrant.com> writes:
Right now it seems strange that the index is larger than a btree, yet
the performance tests show that 3 times as much I/O was used accessing
the btree.
Well, in an ideal world a hash index probe is O(1) while a btree probe
is O(log N), so that result is exactly what hash proponents would hope
for. Whether it's real or not is another question, but it could be.
regards, tom lane
I performed a code review; see my comments below.
pgsql/src/backend/access/hash/hashpage.c
<http://reviewdemo.postgresql.org/r/26/#comment31>
Use sizeof() or something better than the literal 4.
pgsql/src/backend/access/hash/hashpage.c
<http://reviewdemo.postgresql.org/r/26/#comment32>
Stray new empty line.
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment27>
It would be better to remove the #define from hash.h and set it up here
directly.
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment28>
Why not return uint32 directly?
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment22>
Cast to the correct return type.
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment29>
What about null values?
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment30>
I'm not sure whether modifying the values is safe. Please recheck.
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment23>
The return value is not very clear. I would prefer to return InvalidOffset
when no record is found. However, it seems you also use the result for
PageAddItem, to put the item at the correct ordered position. A better
explanation would help in understanding how it works.
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment26>
It could return FirstOffsetNumber when there is nothing interesting
on the page.
pgsql/src/include/access/hash.h
<http://reviewdemo.postgresql.org/r/26/#comment34>
Why not define a new datatype, for example HashKey, instead of using uint32?
pgsql/src/include/access/hash.h
<http://reviewdemo.postgresql.org/r/26/#comment33>
This is not a good place for it. See my other comment.
--------------
You also forgot to bump the hash index version in the meta page.
Zdenek
On Thu, Sep 4, 2008 at 10:06 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
You don't give the text of the query used to do these performance tests,
so I can't validate your test results.
The attachment is the code that generates the text. Just:
$ g++ my-permu-code.cpp
$ ./a.out > /tmp/words
Attachments:
my-permu-code.cpp (text/x-c++src)
On Thu, 2008-09-04 at 21:03 +0800, Xiao Meng wrote:
On Thu, Sep 4, 2008 at 10:06 AM, Simon Riggs <simon@2ndquadrant.com>
wrote:
You don't give the text of the query used to do these performance tests,
so I can't validate your test results.
The attachment is the code to generate the text.
Just
$ g++ my-permu-code.cpp
$ ./a.out > /tmp/words
That generates the data, fine. What about the test query?
You say you are running this command line
pgbench -U postgres -n -f /tmp/query.sql dict
Where is query.sql?
It doesn't seem a big ask to be shown the SQL statement that is being
executed, so we understand what is going on.
Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
I performed code review and see my comments.
Thanks for the comments. I've incorporated all of these into an updated
patch that I'm preparing, except for
Why not define new datatype for example HashKey instead of uint32?
This seems like a good idea, but I think we should do it as a separate,
cosmetic-cleanup patch. It'll touch a lot of parts of access/hash/ that
the current patch doesn't need to change, and thus complicate reviewing.
regards, tom lane
Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
pgsql/src/backend/access/hash/hashutil.c
<http://reviewdemo.postgresql.org/r/26/#comment27>
It would be better remove #define from hash.h and setup it there
directly.
Actually, I don't like this aspect of the patch one bit: it means that
the system catalogs are lying about what is stored in the index, which
seems likely to break something somewhere, either now or down the road.
I think the correct way to handle this is to make the pg_attribute entries
(and hence the index's relation descriptor) accurately match what is
stored in the index. For testing purposes I propose this crude hack
in catalog/index.c's ConstructTupleDescriptor():
*** src/backend/catalog/index.c.orig Mon Aug 25 18:42:32 2008
--- src/backend/catalog/index.c Thu Sep 4 16:20:12 2008
***************
*** 133,138 ****
--- 133,139 ----
Form_pg_attribute to = indexTupDesc->attrs[i];
HeapTuple tuple;
Form_pg_type typeTup;
+ Form_pg_opclass opclassTup;
Oid keyType;
if (atnum != 0)
***************
*** 240,246 ****
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for opclass %u",
classObjectId[i]);
! keyType = ((Form_pg_opclass) GETSTRUCT(tuple))->opckeytype;
ReleaseSysCache(tuple);
if (OidIsValid(keyType) && keyType != to->atttypid)
--- 241,252 ----
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for opclass %u",
classObjectId[i]);
! opclassTup = (Form_pg_opclass) GETSTRUCT(tuple);
! /* HACK: make hash always use int4 as storage (really it's uint32) */
! if (opclassTup->opcmethod == HASH_AM_OID)
! keyType = INT4OID;
! else
! keyType = opclassTup->opckeytype;
ReleaseSysCache(tuple);
if (OidIsValid(keyType) && keyType != to->atttypid)
Assuming the patch gets accepted, we should devise some cleaner way
of letting index AMs adjust their indexes' reldescs; maybe declare a
new entry point column in pg_am that lets the AM modify the tupledesc
constructed by this function before it gets used to create the index.
But that is irrelevant to performance testing, so I'm not going to do
it right now.
regards, tom lane
Here is an updated patch incorporating Zdenek's review, my own
observation that we should make the index tupledesc tell the truth,
and some other fixes/improvements such as making backwards scans
work as expected.
The main thing lacking before this could be committed, from a code
standpoint, is a cleaner solution to the problem of adjusting the
index tupledesc (see the ugly hack in catalog/index.c). However,
that complaint is irrelevant for functionality or performance testing,
so I'm throwing this back out there in hopes someone will do some...
I thought a little bit about how to extend this to store both hashcode
and original index key, and realized that the desire to have a truthful
index tupledesc makes that a *whole* lot harder. The planner, and
really even the pg_index catalog representation, assume that the visible
columns of an index are one-for-one with the index keys. We can slide
through with the attached patch because this is still true ---
effectively we're just using a "storage type" different from the indexed
column's type for hash indexes, as already works for GIST and GIN.
But having two visible columns would bollix up quite a lot of stuff.
So I think if we actually want to do that, we'd need to revert to the
concept of cheating on the tupledesc. Aside from the various uglinesses
that I was able to remove from the original patch by not having that,
I'm still quite concerned that we'd find something else wrong with
doing that, further down the road.
So my thinking right now is that we should just test this patch as-is.
If it doesn't show really horrid performance when there are lots of
hash key collisions, we should forget the store-both-things idea and
just go with this.
regards, tom lane
On Thu, Sep 4, 2008 at 5:11 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
So my thinking right now is that we should just test this patch as-is.
If it doesn't show really horrid performance when there are lots of
hash key collisions, we should forget the store-both-things idea and
just go with this.
OK, let me know if this is too naive an approach or isn't hitting the
right cases you want tested.
create table hash_a (same text, uniq text);
insert into hash_a (same, uniq) select 'same', n from
generate_series(0, 5000) as n;
create table hash_b (uniq text);
insert into hash_b (uniq) select n from generate_series(5000, 10000) as n;
pgbench -c 1 -t 100 -n -f with each of the following:
hash_same.sql:
set enable_seqscan to off;
set enable_mergejoin to off;
select 1 from hash_a as a inner join hash_a as aa on aa.same = a.same;
hash_uniq.sql:
set enable_seqscan to off;
set enable_mergejoin to off;
select 1 from hash_a as a inner join hash_b as b on b.uniq = a.uniq;
"Alex Hunsaker" <badalex@gmail.com> writes:
Ok let me know if this is to naive of an approach or not hitting the
right cases you want tested.
You have the unique-versus-not dimension, but I'm also wondering about
narrow vs wide index keys (say about 8 bytes vs 50-100 or so). In the
former case we're not saving any index space by storing only the hash
code, so these could be expected to have different performance
behaviors.
As for specifics of the suggested scripts:
* might be better to do select count(*) not select 1, so that client
communication is minimized
* check that the queries actually use the indexes (not sure that the
proposed switch settings ensure this, not to mention you didn't create
the indexes)
* make sure the pgbench transaction counts are large enough to ensure
significant runtime
* the specific table sizes suggested are surely not large enough
regards, tom lane
I wrote:
You have the unique-versus-not dimension,
On second thought, actually not. What we want to look at is the penalty
for false matches due to *distinct* key values that happen to have the
same hash codes. Your test case for all-the-same is using all the same
key values, which means it'll hit the heap a lot, but none of those will
be wasted trips.
So what we need for testing is a few different key values that hash to
the same code. Not sure about an easy way to find such.
regards, tom lane
On Thu, Sep 4, 2008 at 7:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Alex Hunsaker" <badalex@gmail.com> writes:
Ok let me know if this is to naive of an approach or not hitting the
right cases you want tested.
You have the unique-versus-not dimension, but I'm also wondering about
narrow vs wide index keys (say about 8 bytes vs 50-100 or so). In the
former case we're not saving any index space by storing only the hash
code, so these could be expected to have different performance
behaviors.
Argh, yes... I just read the last part of your mail in this thread. I
think it was the one on -hackers that talked about narrow vs wide...
so I figured I would just try to do what the thread where you posted
the patch talked about, namely the below:
So my thinking right now is that we should just test this patch as-is.
If it doesn't show really horrid performance when there are lots of
hash key collisions, we should forget the store-both-things idea and
just go with this.
So I thought, let's try to generate lots of hash collisions... obviously,
though, using the same key won't do that... Not sure what I was thinking.
As for specifics of the suggested scripts:
* might be better to do select count(*) not select 1, so that client
communication is minimized
Yar.
* check that the queries actually use the indexes (not sure that the
proposed switch settings ensure this, not to mention you didn't create
the indexes)
Well, I was assuming I could just test the speed of a hash join...
* make sure the pgbench transaction counts are large enough to ensure
significant runtime
* the specific table sizes suggested are surely not large enough
Ok
On Thu, Sep 4, 2008 at 7:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
So what we need for testing is a few different key values that hash to
the same code. Not sure about an easy way to find such.
Hrm, well I have not really looked at the hash algorithm but I assume
we could just reduce the number of buckets?
"Alex Hunsaker" <badalex@gmail.com> writes:
On Thu, Sep 4, 2008 at 7:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
So what we need for testing is a few different key values that hash to
the same code. Not sure about an easy way to find such.
Hrm, well I have not really looked at the hash algorithm but I assume
we could just reduce the number of buckets?
No, we need fully equal hash keys, else the code won't visit the heap.
I guess one thing we could do for testing purposes is lobotomize one of
the datatype-specific hash functions. For instance, make int8_hash
return the input mod 2^32, ignoring the upper bytes. Then it'd be easy
to compute different int8s that hash to the same thing.
regards, tom lane
"Alex Hunsaker" <badalex@gmail.com> writes:
On Thu, Sep 4, 2008 at 7:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
* check that the queries actually use the indexes (not sure that the
proposed switch settings ensure this, not to mention you didn't create
the indexes)
Well I was assuming I could just test the speed of a hash join...
Uh, no, hash joins have nearly zip to do with hash indexes. They rely
on the same per-datatype support functions but that's the end of the
commonality.
regards, tom lane
On Thu, Sep 4, 2008 at 8:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I guess one thing we could do for testing purposes is lobotomize one of
the datatype-specific hash functions. For instance, make int8_hash
return the input mod 2^32, ignoring the upper bytes. Then it'd be easy
to compute different int8s that hash to the same thing.
Heh, OK, I'm slowly getting there... So we lobotomize hashint8 and then
time how long it takes to build an index on a table, something like:
create table test_hash(num int8);
(obviously on a 64-bit machine)

#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned long y = 0;
	unsigned cnt = 0;

	printf("insert into test_hash (num) values ");
	/* while (cnt != LONG_MAX / UINT_MAX) */
	while (cnt < 10000000)
	{
		y += UINT_MAX;
		printf("(%lu), ", y);
		cnt++;
	}
	printf("(0);\n");
	return 0;
}
./a.out | psql
pgbench -c 1 -t1000 -n -f test.sql
test.sql:
create index test_hash_num_idx on test_hash using hash (num);
drop index test_hash_num_idx;
For both pre- and post-patch, just to make sure post-patch is not worse
than pre-patch?
If I'm still way off and it's not too much trouble, want to give me a test
case to run =)?
Or maybe, because hash collisions should be fairly rare, it's not
something to really worry about?
On Thu, Sep 4, 2008 at 9:48 PM, Alex Hunsaker <badalex@gmail.com> wrote:
Ok here are the results:
(data generated from the c program before)
select count(1) from test_hash;
count
-----------
100000011
create index test_hash_num_idx on test_hash using hash (num);
CVS: Time: 698065.180 ms
patch: Time: 565982.099 ms
./pgbench -c 1 -t 100000 -n -f bench.sql
bench.sql
select count(1) from test_hash where num = 110034304728896610;
CVS: tps = 7232.375875 (excluding connections establishing)
patch: tps = 7913.700150 (excluding connections establishing)
EXPLAIN ANALYZE select count(1) from test_hash where num = 110034304728896610;
                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=29.24..29.25 rows=1 width=0) (actual time=0.066..0.067 rows=1 loops=1)
   ->  Index Scan using test_hash_num_idx on test_hash  (cost=0.00..29.24 rows=1 width=0) (actual time=0.051..0.054 rows=1 loops=1)
         Index Cond: (num = 110034304728896610::bigint)
 Total runtime: 0.153 ms
Oddly, the index sizes were the same (4096 MB); is that to be expected?
Here is the change I made to hashint8
--- a/src/backend/access/hash/hashfunc.c
+++ b/src/backend/access/hash/hashfunc.c
@@ -61,12 +61,14 @@ hashint8(PG_FUNCTION_ARGS)
*/
#ifndef INT64_IS_BUSTED
int64 val = PG_GETARG_INT64(0);
- uint32 lohalf = (uint32) val;
+/* uint32 lohalf = (uint32) val;
uint32 hihalf = (uint32) (val >> 32);
lohalf ^= (val >= 0) ? hihalf : ~hihalf;
return hash_uint32(lohalf);
+*/
+ return val % 4294967296;
#else
/* here if we can't count on "x >> 32" to work sanely */
return hash_uint32((int32) PG_GETARG_INT64(0));