Small improvement to compactify_tuples
Good day, everyone.
I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed a significant amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of the time
was spent in the qsort of itemidbase items.
The itemidbase array is bounded by the number of tuples in a page, and the
itemIdSortData structure is simple, so a specialized version could be a
better choice.
The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be
reused.
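For readers unfamiliar with the macro approach, a reusable macro-based insertion sort can be sketched roughly like this (an illustrative guess at the shape, not the patch's actual code; the names here are made up):

```c
#include <assert.h>

/* A type-generic insertion sort as a reusable macro, in the spirit of
 * what the patch describes.  The element type and comparator are macro
 * parameters, so the compiler can inline and specialize everything. */
#define pg_insertion_sort_sketch(elem_t, arr, n, cmp) \
	do { \
		int _i, _j; \
		elem_t _tmp; \
		for (_i = 1; _i < (n); _i++) \
		{ \
			_tmp = (arr)[_i]; \
			/* shift larger elements right, then drop _tmp in place */ \
			for (_j = _i; _j > 0 && cmp((arr)[_j - 1], _tmp) > 0; _j--) \
				(arr)[_j] = (arr)[_j - 1]; \
			(arr)[_j] = _tmp; \
		} \
	} while (0)

/* example comparator for plain ints */
#define int_cmp(a, b) ((a) > (b) ? 1 : ((a) < (b) ? -1 : 0))
```

Instantiated with the item type and an offset comparator, such a macro avoids the indirect function calls that make the generic qsort comparatively expensive on tiny arrays.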
I've tested with the following table:
create unlogged table test3 (
id integer PRIMARY KEY with (fillfactor=85),
val text
) WITH (fillfactor=85);
insert into test3 select i, '!'||i from generate_series(1, 10000000)
as i;
With pgbench script:
\set id1 RANDOM(1, :scale)
\set id2 RANDOM(1, :scale)
select * from test3 where id = :id1;
update test3 set val = '!'|| :id2 where id = :id1;
And the command:
pgbench -M prepared -c 3 -s 10000000 -T 1000 -P 3 -n -f test3.sql testdb
Using 1GB shared_buffers and synchronous_commit=off.
On my notebook the improvement is:
before patch:
progress: 63.0 s, 15880.1 tps, lat 0.189 ms stddev 0.127
progress: 66.0 s, 15975.8 tps, lat 0.188 ms stddev 0.122
progress: 69.0 s, 15904.1 tps, lat 0.189 ms stddev 0.152
progress: 72.0 s, 15000.9 tps, lat 0.200 ms stddev 0.213
progress: 75.0 s, 15101.7 tps, lat 0.199 ms stddev 0.192
progress: 78.0 s, 15854.2 tps, lat 0.189 ms stddev 0.158
progress: 81.0 s, 15803.3 tps, lat 0.190 ms stddev 0.158
progress: 84.0 s, 15242.9 tps, lat 0.197 ms stddev 0.203
progress: 87.0 s, 15184.1 tps, lat 0.198 ms stddev 0.215
after patch:
progress: 63.0 s, 17108.5 tps, lat 0.175 ms stddev 0.140
progress: 66.0 s, 17271.9 tps, lat 0.174 ms stddev 0.155
progress: 69.0 s, 17243.5 tps, lat 0.174 ms stddev 0.143
progress: 72.0 s, 16675.3 tps, lat 0.180 ms stddev 0.206
progress: 75.0 s, 17187.4 tps, lat 0.175 ms stddev 0.157
progress: 78.0 s, 17293.0 tps, lat 0.173 ms stddev 0.159
progress: 81.0 s, 16289.8 tps, lat 0.184 ms stddev 0.180
progress: 84.0 s, 16131.2 tps, lat 0.186 ms stddev 0.170
progress: 87.0 s, 16741.1 tps, lat 0.179 ms stddev 0.165
I understand that this is quite a degenerate test case,
but this improvement probably still makes sense.
With regards,
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-storage-page-bufpage.c-improve-compactify_tuples.patch (text/x-diff, +98 -4)
On 05/14/2017 09:47 PM, Sokolov Yura wrote:
> I've noticed amount of cpu spent in a compactify_tuples (called by
> PageRepairFragmentation). Most of time were spent in qsort of
> itemidbase items.
Ah, I played with this too a couple of years ago, see
/messages/by-id/546B89DE.7030906@vmware.com, but
got distracted by other things and never got around to committing it.
> itemidbase array is bounded by number of tuples in a page, and
> itemIdSortData structure is simple, so specialized version could be a
> better choice. Attached patch adds combination of one pass of prefix
> sort with insertion sort for larger array and shell sort for smaller
> array. Insertion sort and shell sort are implemented as macros and
> could be reused.
Cool! Could you compare that against the bucket sort I posted in the
above thread, please?
At a quick glance, your "prefix sort" seems to be the same algorithm
as the bucket sort that I implemented. You chose 256 buckets, where I
picked 32. And you're adding a shell sort implementation for small
arrays, while I used a straight insertion sort. Not sure what these
differences mean in practice.
- Heikki
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Heikki Linnakangas wrote on 2017-05-15 12:06:
> Cool! Could you compare that against the bucket sort I posted in the
> above thread, please?
> At a quick glance, your "prefix sort" seems to be the same algorithm
> as the bucket sort that I implemented. You chose 256 buckets, where I
> picked 32. And you're adding a shell sort implementation for small
> arrays, while I used a straight insertion sort. Not sure what these
> differences mean in practice.
Thank you for your attention.
My first version of the big-page sort was almost exactly the same as
yours. I had a bug in my insertion sort, so I had to refactor it
(the bug is fixed now).
I found that items in itemidbase are almost sorted, so it is important
to try to keep their order in the prefix sort. That is why I changed
--count[i] to count[i+1]++.
And it looks like it is better to have more buckets: with 256 buckets,
the size of a single bucket is almost always less than 2, so the array
is almost always sorted after the prefix sort pass.
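The difference between the two counting styles can be sketched like this (illustrative code, not the patch's; keys are bucketed by their low byte here). Counting into count[bucket + 1] and taking prefix sums lets elements be placed front-to-back, so elements in the same bucket keep their relative order, whereas scattering back-to-front via --count[bucket] would reverse it:

```c
#include <assert.h>

#define N_BUCKETS 256

/* Stable one-pass bucket scatter ("prefix sort").  A rough sketch of
 * the idea; the real patch works on ItemIdSortData keyed by itemoff. */
static void
stable_bucket_scatter(const int *src, int *dst, int n)
{
	int		count[N_BUCKETS + 1] = {0};
	int		i;

	for (i = 0; i < n; i++)
		count[(src[i] & 0xff) + 1]++;	/* count[i+1]++, not --count[i] */
	for (i = 1; i <= N_BUCKETS; i++)
		count[i] += count[i - 1];		/* count[b] = start of bucket b */
	for (i = 0; i < n; i++)
		dst[count[src[i] & 0xff]++] = src[i];	/* forward, hence stable */
}
```

Because the input is almost sorted, a stable scatter leaves each bucket almost sorted too, so little or no follow-up sorting is needed.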
But it looks like it is better to sort each bucket separately, as you
did, and as it was in my early version.
I also used your names for functions and some comments.
I've attached a new version of the patch.
I left the memcpy intact since it doesn't seem to take noticeable
CPU time.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian PostgreSQL Company
Attachments:
0001-storage-page-bufpage.c-improve-compactify_tuples-v02.patch (text/x-diff, +134 -6)
Sokolov Yura wrote on 2017-05-15 15:08:
> I attached new version of the patch.
As a sequel, I propose to simplify PageRepairFragmentation in the
attached patch.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0002-bufpage.c-simplify-PageRepairFragmentation.patch (text/x-diff, +18 -29)
Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera wrote on 2017-05-15 18:04:
> Please add these two patches to the upcoming commitfest,
> https://commitfest.postgresql.org/
Thank you for the suggestion.
I've created https://commitfest.postgresql.org/14/1138/
As far as I understand, I should attach both patches to a single email
for them to be shown correctly in the commitfest topic, so I'm doing
that with this email.
Please correct me if I'm doing something wrong.
With regards.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0002-bufpage.c-simplify-PageRepairFragmentation.patch (text/x-diff, +18 -29)
0001-storage-page-bufpage.c-improve-compactify_tuples-v02.patch (text/x-diff, +134 -6)
Sokolov Yura wrote on 2017-05-15 18:23:
> As I could understand, I should attach both patches to a single email
> to be shown correctly in the commitfest topic.
Looks like it should be a single file.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
improve-repairing-fragmentation.patch (text/x-diff, +152 -35)
Sokolov Yura wrote:
> As I could understand, I should attach both patches to a single email
> to be shown correctly in the commitfest topic.
> Looks like it should be a single file.
As I understand it, these patches are logically separate, so putting
them together in a single file isn't such a great idea. If you don't
edit the patches further, then you're all set, because we already have
the previously archived patches. The next commitfest doesn't start for
a few months yet, and if you feel the need to submit corrected versions
in the meantime, please do submit them as separate files. (Some would
even argue that each should be in its own thread, but I don't think
that's necessary.)
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera wrote on 2017-05-15 20:13:
> If you feel the need to submit corrected versions in the meantime,
> please do submit in separate files.
Thank you for the explanation.
I'm adding a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
(i.e. 0 or 1 elements). In this case there is no need to sort the
buckets at all.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-Improve-compactify_tuples.patch (text/x-diff, +142 -6)
On 2017-05-17 17:46, Sokolov Yura wrote:
> I'm adding new version of first patch with minor improvement:
> - I added detection of a case when all buckets are trivial
> (ie 0 or 1 element). In this case no need to sort buckets at all.
I'm attaching a rebased version of the second patch.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0002-Simplify-PageRepairFragmentation.patch (text/x-diff, +19 -29)
On 2017-07-21 13:49, Sokolov Yura wrote:
> I'm putting rebased version of second patch.
Here is again a rebased version of both patches.
Now the second patch applies cleanly, independent of the first patch.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> Again rebased version of both patches.
> Now second patch applies cleanly independent of first patch.
Patch 1 applies cleanly, builds, and make check runs fine.
The code looks similar in style to surrounding code too, so I'm not
going to complain about the abundance of underscores in the macros :-p
I can reproduce the results in the OP's benchmark, with slightly
different numbers, but an overall improvement of ~6%, which matches
the OP's relative improvement.
Algorithmically, everything looks sound.
A few minor comments about patch 1:
+ if (max == 1)
+ goto end;
That goto is unnecessary, you could just as simply say
if (max > 1)
{
...
}
+#define pg_shell_sort_pass(elem_t, cmp, off) \
+ do { \
+ int _i, _j; \
+ elem_t _temp; \
+ for (_i = off; _i < _n; _i += off) \
+ { \
_n right there isn't declared in the macro, and it isn't an argument
either. It should be an argument, having stuff inherited from the
enclosing context like that is confusing.
Same with _arr, btw.
Patch 2 LGTM.
Hello, Claudio.
Thank you for the review and for confirming the improvement.
On 2017-09-23 01:12, Claudio Freire wrote:
> + if (max == 1)
> + goto end;
>
> That goto is unnecessary, you could just as simply say
> if (max > 1)
> {
> ...
> }
Done.
(I don't like the extra indentation, though :-( )
> _n right there isn't declared in the macro, and it isn't an argument
> either. It should be an argument, having stuff inherited from the
> enclosing context like that is confusing. Same with _arr, btw.
pg_shell_sort_pass is not intended to be used outside pg_shell_sort
and pg_insertion_sort, so I think stealing from their context is OK.
Nonetheless, done.
> Patch 2 LGTM.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-Improve-compactify_tuples.patch (text/x-diff, +147 -6)
On Sat, Sep 23, 2017 at 5:56 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> pg_shell_sort_pass is not intended to be used outside pg_shell_sort
> and pg_insertion_sort, so I think stealing from their context is OK.
> Nonetheless, done.
Looks good.
Marking this patch as ready for committer.
Sokolov Yura <funny.falcon@postgrespro.ru> writes:
[ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
I started to review this patch. I spent a fair amount of time on
beautifying the code, because I found it rather ugly and drastically
undercommented. Once I had it to the point where it seemed readable,
I went to check the shellsort algorithm against Wikipedia's entry,
and found that this appears to be an incorrect implementation of
shellsort: where pg_shell_sort_pass has
for (_i = off; _i < _n; _i += off) \
it seems to me that we need to have
for (_i = off; _i < _n; _i += 1) \
or maybe just _i++. As-is, this isn't h-sorting the whole file,
but just the subset of entries that have multiple-of-h indexes
(ie, the first of the h distinct subfiles that should get sorted).
The bug is masked by the final pass of plain insertion sort, but
we are not getting the benefit we should get from the earlier passes.
However, I'm a bit dubious that it's worth fixing that; instead
my inclination would be to rip out the shellsort implementation
entirely. The code is only using it for the nitems <= 48 case
(which makes the first three offset steps certainly no-ops) and
I am really unconvinced that it's worth expending the code space
for a shellsort rather than plain insertion sort in that case,
especially when we have good reason to think that the input data
is nearly sorted.
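To illustrate the difference, a correct h-pass can be sketched as stand-alone functions on ints (not the patch's macros; the gap sequence is Ciura's, per the Wikipedia entry):

```c
#include <assert.h>

/* One h-pass of shellsort over arr[0..n-1].  The crucial point is
 * that the outer index advances by 1, not by off: advancing by off
 * would insertion-sort only the subfile at indexes 0, off, 2*off, ...
 * and leave the other off-1 interleaved subfiles untouched. */
static void
shell_sort_pass(int *arr, int n, int off)
{
	int		i, j;

	for (i = off; i < n; i++)	/* i += 1, not i += off */
	{
		int		tmp = arr[i];

		for (j = i; j >= off && arr[j - off] > tmp; j -= off)
			arr[j] = arr[j - off];
		arr[j] = tmp;
	}
}

static void
shell_sort(int *arr, int n)
{
	/* Ciura's gap sequence; the final gap of 1 is a plain insertion
	 * sort, which is why the bug above was masked. */
	static const int gaps[] = {301, 132, 57, 23, 10, 4, 1};
	int		g;

	for (g = 0; g < (int) (sizeof(gaps) / sizeof(gaps[0])); g++)
		if (gaps[g] < n)
			shell_sort_pass(arr, n, gaps[g]);
}
```

With nitems <= 48, only the gaps 23, 10, 4, and 1 ever run, which is part of why the earlier passes buy so little here.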
BTW, the originally given test case shows no measurable improvement
on my box. I was eventually able to convince myself by profiling
that the patch makes us spend less time in compactify_tuples, but
this test case isn't a very convincing one.
So, quite aside from the bug, I'm not excited about committing the
attached as-is. I think we should remove pg_shell_sort and just
use pg_insertion_sort. If somebody can show a test case that
provides a measurable speed improvement from the extra code,
I could be persuaded to reconsider.
I also wonder if the nitems <= 48 cutoff needs to be reconsidered
in light of this. But since I can hardly measure any benefit from
the patch at all, I'm not in the best position to test different
values for that cutoff.
Have not looked at the 0002 patch yet.
regards, tom lane
Attachments:
improve-compactify_tuples-6.patch (text/x-diff, +201 -11)
On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Sokolov Yura <funny.falcon@postgrespro.ru> writes:
>> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort [...]
> However, I'm a bit dubious that it's worth fixing that; instead
> my inclination would be to rip out the shellsort implementation
> entirely. The code is only using it for the nitems <= 48 case
> (which makes the first three offset steps certainly no-ops) and
> I am really unconvinced that it's worth expending the code space
> for a shellsort rather than plain insertion sort in that case,
> especially when we have good reason to think that the input data
> is nearly sorted.
I actually noticed that and benchmarked some variants. Neither
made any noticeable difference in performance, so I decided not
to complain about them.
I guess the same case can be made for removing the shell sort.
So I'm inclined to agree.
> BTW, the originally given test case shows no measurable improvement
> on my box.
I did manage to reproduce the original test and got a consistent improvement.
> I was eventually able to convince myself by profiling
> that the patch makes us spend less time in compactify_tuples, but
> this test case isn't a very convincing one.
> So, quite aside from the bug, I'm not excited about committing the
> attached as-is. I think we should remove pg_shell_sort and just
> use pg_insertion_sort. If somebody can show a test case that
> provides a measurable speed improvement from the extra code,
> I could be persuaded to reconsider.
My tests modifying the shell sort didn't produce any measurable
difference, but I didn't test removing it altogether.
Claudio Freire <klaussfreire@gmail.com> writes:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.
> I did manage to reproduce the original test and got a consistent improvement.
It occurred to me that I could force the issue by hacking bufpage.c to
execute compactify_tuples multiple times each time it was called, as in
the first patch attached below. This has nothing directly to do with real
performance of course, but it's making use of the PG system to provide
realistic test data for microbenchmarking compactify_tuples. I was a bit
surprised to find that I had to set the repeat count to 1000 to make
compactify_tuples really dominate the runtime (while using the originally
posted test case ... maybe there's a better one?). But once I did get it
to dominate the runtime, perf gave me this for the CPU hotspots:
+ 27.97% 27.88% 229040 postmaster libc-2.12.so [.] memmove
+ 14.61% 14.57% 119704 postmaster postgres [.] compactify_tuples
+ 12.40% 12.37% 101566 postmaster libc-2.12.so [.] _wordcopy_bwd_aligned
+ 11.68% 11.65% 95685 postmaster libc-2.12.so [.] _wordcopy_fwd_aligned
+ 7.67% 7.64% 62801 postmaster postgres [.] itemoffcompare
+ 7.00% 6.98% 57303 postmaster postgres [.] compactify_tuples_loop
+ 4.53% 4.52% 37111 postmaster postgres [.] pg_qsort
+ 1.71% 1.70% 13992 postmaster libc-2.12.so [.] memcpy
which says that micro-optimizing the sort step is a complete, utter waste
of time, and what we need to be worried about is the data copying part.
The memcpy part of the above is presumably from the scaffolding memcpy's
in compactify_tuples_loop, which is interesting because that's moving as
much data as the memmove's are. So at least with RHEL6's version of
glibc, memmove is apparently a lot slower than memcpy.
This gave me the idea to memcpy the page into some workspace and then use
memcpy, not memmove, to put the tuples back into the caller's copy of the
page. That gave me about a 50% improvement in observed TPS, and a perf
profile like this:
+ 38.50% 38.40% 299520 postmaster postgres [.] compactify_tuples
+ 31.11% 31.02% 241975 postmaster libc-2.12.so [.] memcpy
+ 8.74% 8.72% 68022 postmaster postgres [.] itemoffcompare
+ 6.51% 6.49% 50625 postmaster postgres [.] compactify_tuples_loop
+ 4.21% 4.19% 32719 postmaster postgres [.] pg_qsort
+ 1.70% 1.69% 13213 postmaster postgres [.] memcpy@plt
There still doesn't seem to be any point in replacing the qsort,
but it does seem like something like the second attached patch
might be worth doing.
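The workspace idea can be sketched as follows (a hypothetical simplification with made-up types; the real code has to deal with the page header, line pointers, and MAXALIGNed lengths):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 8192

typedef struct
{
	int		off;	/* current tuple offset within the page */
	int		len;	/* tuple length */
} item_sketch;

/* Compact tuples toward the end of the page.  Instead of memmove()ing
 * inside the page, where source and destination can overlap, snapshot
 * the whole page into a local workspace and memcpy() each tuple from
 * the snapshot back into the page at its new offset.  memcpy is free
 * to assume non-overlapping buffers, which this arranges by
 * construction, and that assumption is what makes it cheaper than
 * memmove on some libc implementations. */
static void
compact_with_workspace(char *page, item_sketch *items, int nitems)
{
	char	workspace[PAGE_SIZE];
	int		upper = PAGE_SIZE;
	int		i;

	memcpy(workspace, page, PAGE_SIZE);
	/* items are assumed sorted by offset, descending */
	for (i = 0; i < nitems; i++)
	{
		upper -= items[i].len;
		memcpy(page + upper, workspace + items[i].off, items[i].len);
		items[i].off = upper;
	}
}
```

The trade-off is one extra full-page copy into the workspace, which pays off only if memmove is sufficiently slower than memcpy on the platform.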
So I'm now wondering why my results seem to be so much different
from those of other people who have tried this, both as to whether
compactify_tuples is worth working on at all and as to what needs
to be done to it if so. Thoughts?
regards, tom lane
I wrote:
> Have not looked at the 0002 patch yet.
I looked at that one, and it seems to be a potential win with no
downside, so pushed. (I tweaked it slightly to avoid an unnecessary
conflict with the test patch I posted earlier.)
regards, tom lane
On Fri, Nov 3, 2017 at 4:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> This gave me the idea to memcpy the page into some workspace and then
> use memcpy, not memmove, to put the tuples back into the caller's copy
> of the page. That gave me about a 50% improvement in observed TPS.
> So I'm now wondering why my results seem to be so much different
> from those of other people who have tried this, both as to whether
> compactify_tuples is worth working on at all and as to what needs
> to be done to it if so. Thoughts?
I'm going to venture a guess that the version of gcc and libc, and the
build options used both in libc (i.e. in the distro) and postgres may
play a part here.
I'm running glibc 2.22, for instance, and building with gcc 4.8.5.
I will try to benchmark memcpy vs memmove and see what the performance
difference is with my versions, too. This may depend heavily on
compiler optimizations that vary between versions, since
memcpy/memmove tend to be inlined a lot.
2017-11-03 5:46 GMT+03:00 Tom Lane <tgl@sss.pgh.pa.us>:
> Sokolov Yura <funny.falcon@postgrespro.ru> writes:
>> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort: where pg_shell_sort_pass has
>     for (_i = off; _i < _n; _i += off) \
> it seems to me that we need to have
>     for (_i = off; _i < _n; _i += 1) \
> or maybe just _i++.
Shame on me :-(
I've written shell sort several times, yet I forgot to recheck myself
once again.
And it looks like the best gap sequence from Wikipedia really is the
best ({301, 132, 57, 23, 10, 4} in my notation).
2017-11-03 17:37 GMT+03:00 Claudio Freire <klaussfreire@gmail.com>:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.
> I did manage to reproduce the original test and got a consistent
> improvement.
I've rechecked myself using my benchmark.
Without memmove, compactify_tuples consumes:
- with qsort: 11.66% CPU (pg_qsort + med3 + swapfunc + itemoffcompare +
compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
- with just insertion sort: 6.65% CPU (the sort is inlined,
itemoffcompare is also inlined, so the whole cost is in
compactify_tuples)
- with just shell sort: 5.98% CPU (the sort is inlined again)
- with bucket sort: 1.76% CPU (sort_itemIds + compactify_tuples =
1.30 + 0.46)
(memmove consumes 1.29% CPU)
tps also reflects the changes:
~17k tps with qsort
~19k tps with bucket sort
Vacuum of the benchmark's table is also improved:
~3s with qsort,
~2.4s with bucket sort
Of course, this benchmark is quite synthetic: the table is unlogged,
the tuples are small, and synchronous commit is off. Still, such a
table is useful in some situations (think of not-too-important but
useful counters, like a photo view count).
And the patch affects not only this synthetic benchmark. It affects
restore performance, as Heikki mentioned, and the CPU consumption of
vacuum (though vacuum is more IO-bound).
> I think we should remove pg_shell_sort and just use pg_insertion_sort.
Using shell sort is just a bit safer. It is doubtful that a worst-case
pattern for insertion sort will appear, but what if it does? Shell sort
is a bit better on the whole array (5.98% vs 6.65%), though on small
arrays the difference will be much smaller.
With regards,
Sokolov Yura aka funny_falcon