texteq/byteaeq: avoid detoast
texteq, textne, byteaeq and byteane detoast their arguments, then check for
equality of length. Unequal lengths imply the answer trivially; given equal
lengths, the functions proceed to compare the actual bytes. We can skip
detoasting entirely when the lengths are unequal. The attached patch implements
this. As submitted, it applies atop my recent strncmp->memcmp patch, but
they are logically independent. To benchmark some optimal and pessimal cases, I
used the attached "bench-skip-texteq.sql". It uses a few datum sizes and varies
whether the length check succeeds:
bench-skip-texteq.sql, 10 MiB nomatch: 58.4s previous, 0.00664s patched
bench-skip-texteq.sql, 144 B match: 73.0s previous, 71.9s patched
bench-skip-texteq.sql, 3 B match: 68.8s previous, 67.3s patched
bench-skip-texteq.sql, 3 B nomatch: 45.0s previous, 46.0s patched
The timing differences in the smaller-length test cases are probably not
statistically significant.
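The guard described above can be sketched in self-contained C. Everything here (the struct, the `detoast_calls` counter, the function names) is an illustrative stand-in, not the patch's actual code, which operates on PostgreSQL Datums via toast_raw_datum_size():

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for a toasted varlena: the raw length is available up
 * front, while the payload is only fetched on demand. */
typedef struct
{
    size_t      rawsize;   /* what toast_raw_datum_size() would report */
    const char *payload;   /* stand-in for the out-of-line bytes */
} toasted_datum;

static int detoast_calls = 0;  /* counts the expensive fetches */

static const char *
detoast(const toasted_datum *d)
{
    detoast_calls++;           /* the step the patch skips when it can */
    return d->payload;
}

/* Equality with the shortcut: unequal lengths answer the question
 * without touching the payload at all. */
static bool
texteq_sketch(const toasted_datum *a, const toasted_datum *b)
{
    if (a->rawsize != b->rawsize)
        return false;          /* no detoast needed */
    return memcmp(detoast(a), detoast(b), a->rawsize) == 0;
}
```

Comparing a 10 MiB value against a 3-byte value returns false with `detoast_calls` still zero, which corresponds to the 58.4s-to-0.00664s "10 MiB nomatch" case in the benchmark above.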
Thanks,
nm
On Mon, Dec 20, 2010 at 1:19 PM, Noah Misch <noah@leadboat.com> wrote:
> texteq, textne, byteaeq and byteane detoast their arguments, then check for
> equality of length. Unequal lengths imply the answer trivially; given equal
> lengths, the functions proceed to compare the actual bytes. We can skip
> detoasting entirely when the lengths are unequal. [...]
Can you add this to the currently-open CommitFest, so we don't lose track of it?
https://commitfest.postgresql.org/action/commitfest_view/open
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Jan 03, 2011 at 10:23:03PM -0500, Robert Haas wrote:
> Can you add this to the currently-open CommitFest, so we don't lose track of it?
> https://commitfest.postgresql.org/action/commitfest_view/open
Done.
Hello
I looked at the patch.
Does toast_raw_datum_size work correctly on a packed varlena?
regards
Pavel Stehule
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Hi Pavel,
On Tue, Jan 04, 2011 at 03:13:11PM +0100, Pavel Stehule wrote:
> I looked at the patch.
Thanks.
> Does toast_raw_datum_size work correctly on a packed varlena?
Yes, as best I can tell.
This is a review of:
https://commitfest.postgresql.org/action/patch_view?id=468
Purpose:
========
Equal and not-equal _may_ be quickly determined if their lengths are different. This _may_ be a huge speed-up if we don't have to detoast.
The Patch:
==========
I was able to read and understand the patch; it's a simple change and looked correct to me (a non-PG-hacker).
It applies clean to git head, compiles and runs fine with debug enabled.
make check passes
Usability:
==========
I used _may_ above. The benchmark included with the patch, showing huge speedups, is really contrived. It uses a WHERE clause with a thousand-character constant: (where c = 'long...long...long...long...ConstantText...etc'). In my opinion this is very uncommon (the author does note this is a "best case"). If you have a field large enough to be toasted, you are not going to be searching on it; you are going to have an ID field that is indexed (select c where id = 7).
This also only touches = and <>; the operators <, > and LIKE won't be touched. So I think the scope of this is limited.
That being said, the patch is simple, and if you do happen to hit the code, it will speed things up. As a user of PG I'd like to have this included. It's a corner case, but a big corner, and it's a small, simple change, and it won't slow anything else down.
Performance:
============
I created myself a more real-world test, with a table with indexes and IDs and a large toasted field.
create table junk(id serial primary key, xgroup integer, c text);
create index junk_group on junk(xgroup);
I filled it full of junk:
do $$
declare i integer;
declare j integer;
begin
for i in 1..100 loop
for j in 1..500 loop
insert into junk(xgroup, c) values (j, 'c'||i);
insert into junk (xgroup, c) select j, repeat('abc', 2000)|| n from generate_series(1, 5) n;
end loop;
end loop;
end$$;
This will make about 600 records within the same xgroup, as well as a simple 'c15'-type value in c that we can search for. My thinking is you may not know the exact unique id, but you do know what group it's in, so that'll cut out 90% of the records, and then you'll have to add " and c = 'c15'" to get the exact one you want.
I still saw a nice performance boost.
Old PG:
$ psql < bench3.sql
Timing is on.
DO
Time: 2010.412 ms
Patched:
$ psql < bench3.sql
Timing is on.
DO
Time: 184.602 ms
bench3.sql:
do $$
declare i integer;
begin
for i in 1..400 loop
perform count(*) from junk where xgroup = i and c like 'c' || i;
end loop;
end$$;
Summary:
========
Performance speed-up: Oh yeah! If you just happen to hit it, and if you do hit it, you might want to re-think your layout a little bit.
Do I want it? Yes please.
Hello
I looked at this patch too.
It's a good idea.
I think we could have a function or macro that compares varlena
sizes. Something like:
Datum texteq(..)
{
    if (!datumsHasSameLength(PG_GETARG_DATUM(0), PG_GETARG_DATUM(1)))
        PG_RETURN_FALSE();
    ... actual code ...
}
Regards
Pavel Stehule
On Sun, Jan 16, 2011 at 01:05:11PM -0600, Andy Colson wrote:
> This is a review of:
> https://commitfest.postgresql.org/action/patch_view?id=468
Thanks!
> I created myself a more real world test, with a table with indexes and id's and a large toasted field.
> This will make about 600 records within the same xgroup. [...]
Good to have a benchmark like that, rather than just looking at the extrema.
On Sun, Jan 16, 2011 at 10:07:13PM +0100, Pavel Stehule wrote:
> I think we could have a function or macro that compares varlena
> sizes. Something like:
>
> Datum texteq(..)
> {
>     if (!datumsHasSameLength(PG_GETARG_DATUM(0), PG_GETARG_DATUM(1)))
>         PG_RETURN_FALSE();
>     ... actual code ...
> }
Good point. Is this something that would be useful in many places? One thing that
bugged me slightly writing this patch is that texteq, textne, byteaeq and
byteane all follow the same pattern rather tightly. (Indeed, I think one could
easily implement texteq and byteaeq with the exact same C function.) I like how
we handle this for tsvector (see TSVECTORCMPFUNC in tsvector_op.c) to avoid the
redundancy. If datumsHasSameLength would mainly apply to these four functions or
ones very similar to them, maybe we should abstract out the entire function body
like we do for tsvector?
A topic for a different patch in any case, I think.
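The tsvector-style factoring Noah mentions can be sketched in self-contained form. `VARLENA_EQFUNC` and the plain (pointer, length) arguments are hypothetical stand-ins for illustration; the real TSVECTORCMPFUNC machinery and texteq/byteaeq operate on PostgreSQL Datums:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* One shared body for an eq/ne pair, in the spirit of TSVECTORCMPFUNC
 * in tsvector_op.c. Plain buffers keep the sketch compilable alone. */
#define VARLENA_EQFUNC(fname, want_equal)                             \
    static bool                                                       \
    fname(const char *a, size_t alen, const char *b, size_t blen)     \
    {                                                                 \
        /* the length shortcut lives in exactly one place */          \
        bool eq = (alen == blen) && memcmp(a, b, alen) == 0;          \
        return (want_equal) ? eq : !eq;                               \
    }

VARLENA_EQFUNC(bytes_eq, true)   /* plays the role of texteq/byteaeq */
VARLENA_EQFUNC(bytes_ne, false)  /* plays the role of textne/byteane */
```

Note that collapsing text and bytea onto one body is exactly what Pavel objects to in the next message, on the grounds that collation support may eventually force the two types apart.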
2011/1/16 Noah Misch <noah@leadboat.com>:
> On Sun, Jan 16, 2011 at 10:07:13PM +0100, Pavel Stehule wrote:
>> I think we could have a function or macro that compares varlena sizes.
> Good point. Is this something that would be useful in many places? One thing that
> bugged me slightly writing this patch is that texteq, textne, byteaeq and
> byteane all follow the same pattern rather tightly. (Indeed, I think one could
> easily implement texteq and byteaeq with the exact same C function.)
I don't think that's a good idea. Theoretically there can be differences between
text and bytea in the future; collations can become important. These types are
distinct now, and some basic methods should be distinct too. The situation is
different at the varlena level.
Regards
Pavel Stehule
On Mon, Jan 17, 2011 at 04:05, Andy Colson <andy@squeakycode.net> wrote:
> This is a review of:
> https://commitfest.postgresql.org/action/patch_view?id=468
>
> Equal and not-equal _may_ be quickly determined if their lengths are
> different. This _may_ be a huge speed up if we don't have to detoast.
We can skip detoasting to compare the lengths of two text/bytea values
with the patch, but we still need to detoast to compare the contents
of the values.
If we always generated the same toasted byte sequences from the same raw
values, we would not need to detoast at all to compare the contents.
Is that possible or not?
--
Itagaki Takahiro
(2011/01/17 14:51), Itagaki Takahiro wrote:
> On Mon, Jan 17, 2011 at 04:05, Andy Colson <andy@squeakycode.net> wrote:
>> Equal and not-equal _may_ be quickly determined if their lengths are
>> different. This _may_ be a huge speed up if we don't have to detoast.
> We can skip detoast to compare lengths of two text/bytea values
> with the patch, but we still need detoast to compare the contents
> of the values.
> If we always generate same toasted byte sequences from the same raw
> values, we don't need to detoast at all to compare the contents.
> Is it possible or not?
Are you talking about the idea of applying the toast ID as an alternate key?
I used a similar idea to represent security labels on user tables for row-level
security in the v8.4/9.0-based implementation. Because a small number of
security labels are shared by a massive number of tuples, it wastes space to
toast all of the text data individually, in addition to the performance loss
in toast/detoast.
In that case, I represented the security label (text) using a security ID (oid),
a primary key pointing to a particular text value in a catalog table.
It greatly reduced storage consumption and achieved good performance in
comparison operations.
One challenge was reclaiming orphaned texts. In that case, we needed to
lock out the user tables referencing the toast values, then delete all
the orphaned entries.
Thanks,
--
KaiGai Kohei <kaigai@ak.jp.nec.com>
On Mon, Jan 17, 2011 at 06:51, Itagaki Takahiro
<itagaki.takahiro@gmail.com> wrote:
> On Mon, Jan 17, 2011 at 04:05, Andy Colson <andy@squeakycode.net> wrote:
>> Equal and not-equal _may_ be quickly determined if their lengths are
>> different. This _may_ be a huge speed up if we don't have to detoast.
> We can skip detoast to compare lengths of two text/bytea values
> with the patch, but we still need detoast to compare the contents
> of the values.
> If we always generate same toasted byte sequences from the same raw
> values, we don't need to detoast at all to compare the contents.
> Is it possible or not?
For bytea, it seems it would be possible.
For text, I think locales may make that impossible. Aren't there
locale rules where two different characters can "behave the same" when
comparing them? I know in Swedish at least w and v behave the same
when sorting (but not when comparing) in some variants of the locale.
In fact, aren't there cases where the *length test* also fails? I
don't know this for sure, but unless we know for certain that two
strings of different lengths can never be equal *independent of
locale*, this whole patch has a big problem...
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
2011/1/17 Magnus Hagander <magnus@hagander.net>:
> For text, I think locales may make that impossible. Aren't there
> locale rules where two different characters can "behave the same" when
> comparing them? [...]
> In fact, aren't there cases where the *length test* also fails?
Some string comparison operations are binary now too. But it is a
question what will change with collation support.
Regards
Pavel Stehule
On Mon, Jan 17, 2011 at 16:13, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>> If we always generate same toasted byte sequences from the same raw
>>> values, we don't need to detoast at all to compare the contents.
>>> Is it possible or not?
>> For bytea, it seems it would be possible.
>> For text, I think locales may make that impossible. Aren't there
>> locale rules where two different characters can "behave the same" when
>> comparing them? I know in Swedish at least w and v behave the same
>> when sorting (but not when comparing) in some variants of the locale.
> Some string's comparation operations are binary now too. But it is
> question what will be new with collate support.
Right. We are using memcmp() in texteq and textne now. We consider
collations only in <, <=, >=, > and the compare support functions.
So, I think there is no regression here as long as raw values and
toasted byte sequences have a one-to-one correspondence.
--
Itagaki Takahiro
On mån, 2011-01-17 at 07:35 +0100, Magnus Hagander wrote:
> For text, I think locales may make that impossible. Aren't there
> locale rules where two different characters can "behave the same" when
> comparing them? I know in Swedish at least w and v behave the same
> when sorting (but not when comparing) in some variants of the locale.
> In fact, aren't there cases where the *length test* also fails? I
> don't know this for sure, but unless we know for certain that two
> different length strings can never be the same *independent of
> locale*, this whole patch has a big problem...
Currently, two text values are only equal if strcoll() considers them
equal and the bits are the same. So this patch is safe in that regard.
There is, however, some desire to loosen this. Possible applications
are case-insensitive comparison and Unicode normalization. It's not
going to happen soon, but it may be worth considering not putting in an
optimization that we'll end up having to rip out again in a year,
perhaps.
2011/1/17 Itagaki Takahiro <itagaki.takahiro@gmail.com>:
> Right. We are using memcmp() in texteq and textne now. We consider
> collations only in <, <=, >=, > and compare support functions.
> So, I think there is no regression here as long as raw values and
> toasted byte sequences have one-to-one correspondence.
I am sure this isn't a problem in the Czech locale, but I am not sure
about German or Turkish.
Wasn't there an issue (if I remember well, with the German "ss")?
Pavel
2011/1/17 KaiGai Kohei <kaigai@ak.jp.nec.com>:
> Are you talking about an idea to apply toast id as an alternative key?
Probably not. I'm just talking about whether "diff -q A.txt B.txt" and
"diff -q A.gz B.gz" always return the same result or not.
... I found it depends on the version of gzip. So, if we used such logic,
we could never improve the toast compression logic, because the data is
migrated as-is by pg_upgrade.
> I did similar idea to represent security label on user tables for row
> level security in the v8.4/9.0 based implementation. Because a small
> number of security labels are shared by massive tuples, it is waste of
> space if we have all the text data being toasted individually, not only
> performance loss in toast/detoast.
That looks like the same issue as large objects, rather than the one under
discussion here. We have vacuumlo in contrib to solve that issue, so we could
have vacuumlo-like sweeping logic for the security label table.
--
Itagaki Takahiro
On Mon, Jan 17, 2011 at 09:13, Itagaki Takahiro
<itagaki.takahiro@gmail.com> wrote:
> 2011/1/17 KaiGai Kohei <kaigai@ak.jp.nec.com>:
>> Are you talking about an idea to apply toast id as an alternative key?
> No, probably. I'm just talking about whether "diff -q A.txt B.txt" and
> "diff -q A.gz B.gz" always returns the same result or not.
> ... I found it depends on version of gzip. So, if we use such logic,
> we cannot improve toast compression logic because the data is migrated
> by pg_upgrade.
Yeah, that might be a bad tradeoff.
I wonder if we could trust the *equality* test, but not the inequality:
if compressed(A) == compressed(B) we know the values are the same, but
if compressed(A) != compressed(B) we don't know that they differ; they
still might be equal.
I guess with two different versions, or even completely different
algorithms, we could end up with exactly the same compressed value for
different plaintexts (it's not a cryptographic hash, after all), so
that's probably not an acceptable comparison either.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 2:56 AM, Peter Eisentraut <peter_e@gmx.net> wrote:
> Currently, two text values are only equal if strcoll() considers them
> equal and the bits are the same. So this patch is safe in that regard.
> There is, however, some desire to loosen this. Possible applications
> are case-insensitive comparison and Unicode normalization. It's not
> going to happen soon, but it may be worth considering not putting in an
> optimization that we'll end up having to rip out again in a year
> perhaps.
Hmm. I hate to give up on this - it's a nice optimization for the
cases to which it applies. Would it be possible to jigger things so
that we can still do it byte-for-byte when case-insensitive comparison
or Unicode normalization AREN'T in use?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company