[WIP] [B-Tree] Retail IndexTuple deletion

Started by Andrei Lepikhov · almost 8 years ago · 47 messages · pgsql-hackers
#1 Andrei Lepikhov
lepihov@gmail.com

Hi,
I have written code for quick index tuple deletion from a relation by
heap tuple TID. The code relates to the "Retail IndexTuple deletion"
enhancement of the btree index on the PostgreSQL wiki [1].
Briefly, it includes three steps:
1. Key generation for index tuple searching.
2. Index relation search for the tuple with the heap tuple TID.
3. Deletion of the tuple from the index relation.

Currently, index relation cleanup is performed by VACUUM, which scans
the whole index relation for dead entries sequentially, tuple-by-tuple.
When the number of dead entries is not large, this simplistic and safe
method can be significantly outperformed by retail deletions, which use
index scans to find the dead entries. Retail deletion can also be used
by distributed systems to reduce the cost of a global index vacuum.

Patch '0001-retail-indextuple-deletion' introduces a new function,
amtargetdelete(), in the access method interface. Patch
'0002-quick-vacuum-strategy' implements this function for an alternative
strategy of lazy index vacuum, called 'Quick Vacuum'.

The code requires holding the DEAD tuple's storage until the scan key
has been created. In this version I add a 'target_index_deletion_factor'
option. If it is greater than 0, heap_page_prune() uses
ItemIdMarkDead() instead of ItemIdSetDead() to set the DEAD flag while
keeping the tuple storage. The next step is to develop a background
worker that will collect (tid, scankey) pairs of DEAD tuples from the
heap_page_prune() function.

Here are the test description and some execution time measurements
showing the benefit of these patches:

Test:
-----
create table test_range(id serial primary key, value integer);
insert into test_range (value) select random()*1e7/10^N from
generate_series(1, 1e7);
DELETE FROM test_range WHERE value=1;
VACUUM test_range;

Results:
--------

| n | t1, s   | t2, s  | speedup |
|---|---------|--------|---------|
| 0 | 0.00003 | 0.4476 | 1748.4  |
| 1 | 0.00006 | 0.5367 | 855.99  |
| 2 | 0.0004  | 0.9804 | 233.99  |
| 3 | 0.0048  | 1.6493 | 34.576  |
| 4 | 0.5600  | 2.4854 | 4.4382  |
| 5 | 3.3300  | 3.9555 | 1.2012  |
| 6 | 17.700  | 5.6000 | 0.3164  |
In the table, t1 is the measured execution time of lazy_vacuum_index()
with the Quick Vacuum strategy; t2 is the measured execution time of
lazy_vacuum_index() with the Lazy Vacuum strategy; n is the exponent N
used in the INSERT above.

Note: a guaranteed bound on the time of the index scans used for quick
deletion will be achieved by storing equal-key index tuples in physical
TID order [2] with patch [3].

[1]: https://wiki.postgresql.org/wiki/Key_normalization#Retail_IndexTuple_deletion
[2]: https://wiki.postgresql.org/wiki/Key_normalization#Making_all_items_in_the_index_unique_by_treating_heap_TID_as_an_implicit_last_attribute
[3]: /messages/by-id/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

Attachments:

0001-retail-indextuple-deletion.patch (text/x-patch, +235/-0)
0002-quick-vacuum-strategy.patch (text/x-patch, +118/-8)
#2 Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#1)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

I have written code for quick index tuple deletion from a relation by heap
tuple TID. The code relates to the "Retail IndexTuple deletion" enhancement of
the btree index on the PostgreSQL wiki [1].

I knew that somebody would eventually read that Wiki page. :-)

Currently, index relation cleanup is performed by VACUUM, which scans the
whole index relation for dead entries sequentially, tuple-by-tuple. When the
number of dead entries is not large, this simplistic and safe method can be
significantly outperformed by retail deletions, which use index scans to find
the dead entries.

I assume that the lazy vacuum bulk delete thing is much faster for the
situation where you have many dead tuples in the index. However,
allowing a B-Tree index to accumulate so much bloat in the first place
often has consequences that cannot be reversed with anything less than
a REINDEX. It's true that "prevention is better than cure" for all
types of bloat. However, this could perhaps be as much as 100x more
important for B-Tree bloat, since we cannot place new tuples in any
place that is convenient. It's very different to bloat in the heap,
even at a high level.

The code requires holding the DEAD tuple's storage until the scan key has
been created. In this version I add a 'target_index_deletion_factor' option.
If it is greater than 0, heap_page_prune() uses ItemIdMarkDead() instead of
ItemIdSetDead() to set the DEAD flag while keeping the tuple storage.

Makes sense.

The next step is developing a background worker that will collect (tid,
scankey) pairs of DEAD tuples from the heap_page_prune() function.

Makes sense.

| n | t1, s   | t2, s  | speedup |
|---|---------|--------|---------|
| 0 | 0.00003 | 0.4476 | 1748.4  |
| 1 | 0.00006 | 0.5367 | 855.99  |
| 2 | 0.0004  | 0.9804 | 233.99  |
| 3 | 0.0048  | 1.6493 | 34.576  |
| 4 | 0.5600  | 2.4854 | 4.4382  |
| 5 | 3.3300  | 3.9555 | 1.2012  |
| 6 | 17.700  | 5.6000 | 0.3164  |
In the table, t1 is the measured execution time of lazy_vacuum_index() with
the Quick Vacuum strategy; t2 is the measured execution time of
lazy_vacuum_index() with the Lazy Vacuum strategy.

The speedup looks promising. However, the real benefit should be in
query performance, especially when we have heavy contention. Very
eager retail index tuple deletion could help a lot there. It already
makes sense to make autovacuum extremely aggressive in this case, to
the point when it's running almost constantly. A more targeted cleanup
process that can run much faster could do the same job, but be much
more eager, and so be much more effective at *preventing* bloating of
the key space [1][2].

Note: a guaranteed bound on the time of the index scans used for quick
deletion will be achieved by storing equal-key index tuples in physical TID
order [2] with patch [3].

I now have greater motivation to develop that patch into a real project.

I bet that my heap-tid-sort patch will allow you to refine your
interface when there are many logical duplicates: You can create one
explicit scan key, but have a list of heap TIDs that need to be killed
within the range of matching index tuples. That could be a lot more
efficient in the event of many non-HOT updates, where most index tuple
values won't actually change. You can sort the list of heap TIDs that
you want to kill once, and then "merge" it with the tuples at the leaf
level as they are matched/killed. It should be safe to avoid
rechecking anything other than the heap TID values.

[1]: http://pgeoghegan.blogspot.com/2017/07/postgresql-index-bloat-microscope.html
[2]: /messages/by-id/CAH2-Wzmf6intNY1ggiNzOziiO5Eq=DsXfeptODGxO=2j-i1NGQ@mail.gmail.com
--
Peter Geoghegan

#3 Claudio Freire
klaussfreire@gmail.com
In reply to: Peter Geoghegan (#2)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Mon, Jun 18, 2018 at 4:59 PM Peter Geoghegan <pg@bowt.ie> wrote:

Note: a guaranteed bound on the time of the index scans used for quick
deletion will be achieved by storing equal-key index tuples in physical TID
order [2] with patch [3].

I now have greater motivation to develop that patch into a real project.

I bet that my heap-tid-sort patch will allow you to refine your
interface when there are many logical duplicates: You can create one
explicit scan key, but have a list of heap TIDs that need to be killed
within the range of matching index tuples. That could be a lot more
efficient in the event of many non-HOT updates, where most index tuple
values won't actually change. You can sort the list of heap TIDs that
you want to kill once, and then "merge" it with the tuples at the leaf
level as they are matched/killed. It should be safe to avoid
rechecking anything other than the heap TID values.

[1] http://pgeoghegan.blogspot.com/2017/07/postgresql-index-bloat-microscope.html
[2] /messages/by-id/CAH2-Wzmf6intNY1ggiNzOziiO5Eq=DsXfeptODGxO=2j-i1NGQ@mail.gmail.com

Actually, once btree tids are sorted, you can continue tree descent
all the way to the exact leaf page that contains the tuple to be
deleted.

Thus, the single-tuple interface ends up being quite OK. Sure, you can
optimize things a bit by scanning a range, but only if vacuum is able
to group keys in order to produce the optimized calls, and I don't see
that terribly likely.

So, IMHO the current interface may be quite enough.

#4 Peter Geoghegan
pg@bowt.ie
In reply to: Claudio Freire (#3)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Mon, Jun 18, 2018 at 1:42 PM, Claudio Freire <klaussfreire@gmail.com> wrote:

Actually, once btree tids are sorted, you can continue tree descent
all the way to the exact leaf page that contains the tuple to be
deleted.

Thus, the single-tuple interface ends up being quite OK. Sure, you can
optimize things a bit by scanning a range, but only if vacuum is able
to group keys in order to produce the optimized calls, and I don't see
that terribly likely.

Andrey talked about a background worker that did processing + index
tuple deletion when handed off work by a user backend after it
performs HOT pruning of a heap page. I consider that idea to be a good
place to go with the patch, because in practice the big problem is
workloads that suffer from so-called "write amplification", where most
index tuples are created despite being "logically unnecessary" (e.g.
one index among several prevents an UPDATE being HOT-safe, making
inserts into most of the indexes "logically unnecessary").

I think that it's likely that only descending the tree once in order
to kill multiple duplicate index tuples in one shot will in fact be
*very* important (unless perhaps you assume that that problem is
solved by something else, such as zheap). The mechanism that Andrey
describes is rather unlike VACUUM as we know it today, but that's the
whole point.

--
Peter Geoghegan

#5 Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#1)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Patch '0001-retail-indextuple-deletion' introduces a new function,
amtargetdelete(), in the access method interface. Patch
'0002-quick-vacuum-strategy' implements this function for an alternative
strategy of lazy index vacuum, called 'Quick Vacuum'.

My compiler shows the following warnings:

/code/postgresql/root/build/../source/src/backend/access/nbtree/nbtree.c:
In function ‘bttargetdelete’:
/code/postgresql/root/build/../source/src/backend/access/nbtree/nbtree.c:1053:3:
warning: this ‘if’ clause does not guard... [-Wmisleading-indentation]
if (needLock)
^~
/code/postgresql/root/build/../source/src/backend/access/nbtree/nbtree.c:1055:4:
note: ...this statement, but the latter is misleadingly indented as if
it were guarded by the ‘if’
npages = RelationGetNumberOfBlocks(irel);
^~~~~~
/code/postgresql/root/build/../source/src/backend/access/nbtree/nbtree.c:1074:3:
warning: ‘blkno’ may be used uninitialized in this function
[-Wmaybe-uninitialized]
cleanup_block(info, stats, blkno);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I think that they're both harmless, though.

--
Peter Geoghegan

#6 Peter Geoghegan
pg@bowt.ie
In reply to: Peter Geoghegan (#5)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Mon, Jun 18, 2018 at 2:54 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Patch '0001-retail-indextuple-deletion' introduces a new function,
amtargetdelete(), in the access method interface. Patch
'0002-quick-vacuum-strategy' implements this function for an alternative
strategy of lazy index vacuum, called 'Quick Vacuum'.

My compiler shows the following warnings:

Some real feedback:

What we probably want to end up with here is new lazyvacuum.c code
that does processing for one heap page (and associated indexes) that
is really just a "partial" lazy vacuum. Though it won't do things like
advance relfrozenxid, it will do pruning for the heap page, index
tuple killing, and finally heap tuple killing. It will do all of these
things reliably, just like traditional lazy vacuum. This will be what
your background worker eventually uses.

I doubt that you can use routines like index_beginscan() within
bttargetdelete() at all. I think that you need something closer to
_bt_doinsert() or _bt_pagedel(), that manages its own scan (your code
should probably live in nbtpage.c). It does not make sense to teach
external, generic routines like index_beginscan() about heap TID being
an implicit final index attribute, which is one reason for this (I'm
assuming that this patch relies on my own patch). Another reason is
that you need to hold an exclusive buffer lock at the point that you
identify the tuple to be killed, until the point that you actually
kill it. You don't do that now.

IOW, the approach you've taken in bttargetdelete() isn't quite correct
because you imagine that it's okay to occasionally "lose" the index
tuple that you originally found, and just move on. That needs to be
100% reliable, or else we'll end up with index tuples that point to
the wrong heap tuples in rare cases with concurrent insertions. As I
said, we want a "partial" lazy vacuum here, which must mean that it's
reliable. Note that _bt_pagedel() actually calls _bt_search() when it
deletes a page. Your patch will not be the first patch that makes
nbtree vacuuming do an index scan. You should be managing your own
insertion scan key, much like _bt_pagedel() does. If you use my patch,
_bt_search() can be taught to look for a specific heap TID.

Finally, doing things this way would let you delete multiple
duplicates in one shot, as I described in an earlier e-mail. Only a
single descent of the tree is needed to delete quite a few index
tuples, provided that they all happen to be logical duplicates. Again,
your background worker will take advantage of this.

This code does not follow the Postgres style:

-       else
+       }
+       else {
+           if (rootoffnum != latestdead)
+               heap_prune_record_unused(prstate, latestdead);
heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
+       }
}

Please be more careful about that. I find it very distracting.

--
Peter Geoghegan

#7 Peter Geoghegan
pg@bowt.ie
In reply to: Peter Geoghegan (#6)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Mon, Jun 18, 2018 at 4:05 PM, Peter Geoghegan <pg@bowt.ie> wrote:

Finally, doing things this way would let you delete multiple
duplicates in one shot, as I described in an earlier e-mail. Only a
single descent of the tree is needed to delete quite a few index
tuples, provided that they all happen to be logical duplicates. Again,
your background worker will take advantage of this.

BTW, when you do this you should make sure that there is only one call
to _bt_delitems_vacuum(), so that there aren't too many WAL records.
Actually, that's not quite correct -- there should be one
_bt_delitems_vacuum() call *per leaf page* per bttargetdelete() call,
which is slightly different. There should rarely be more than one or
two calls to _bt_delitems_vacuum() in total, because your background
worker is only going to delete one heap page's duplicates per
bttargetdelete() call, and because there will be locality/correlation
with my TID order patch.

--
Peter Geoghegan

#8 Peter Geoghegan
pg@bowt.ie
In reply to: Peter Geoghegan (#6)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Mon, Jun 18, 2018 at 4:05 PM, Peter Geoghegan <pg@bowt.ie> wrote:

IOW, the approach you've taken in bttargetdelete() isn't quite correct
because you imagine that it's okay to occasionally "lose" the index
tuple that you originally found, and just move on. That needs to be
100% reliable, or else we'll end up with index tuples that point to
the wrong heap tuples in rare cases with concurrent insertions.

Attached patch adds a new amcheck check within
bt_index_parent_check(). With the patch, heap TIDs are accumulated in
a tuplesort and sorted at the tail end of verification (before
optional heapallindexed verification runs). This will reliably detect
the kind of corruption I noticed was possible with your patch.

Note that the amcheck enhancement that went along with my
heap-tid-btree-sort patch may not have detected this issue, which is
why I wrote this patch -- the heap-tid-btree-sort amcheck stuff could
detect duplicates, but only when all other attributes happened to be
identical when comparing sibling index tuples (i.e. only when we must
actually compare TIDs across sibling index tuples). If you add this
check, I'm pretty sure that you can detect any possible problem. You
should think about using this to debug your patch.

I may get around to submitting this to a CF, but that isn't a priority
right now.

--
Peter Geoghegan

Attachments:

0001-Detect-duplicate-heap-TIDs-using-a-tuplesort.patch (application/octet-stream, +88/-1)
#9 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Peter Geoghegan (#6)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Tue, Jun 19, 2018 at 8:05 AM, Peter Geoghegan <pg@bowt.ie> wrote:

On Mon, Jun 18, 2018 at 2:54 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Patch '0001-retail-indextuple-deletion' introduces a new function,
amtargetdelete(), in the access method interface. Patch
'0002-quick-vacuum-strategy' implements this function for an alternative
strategy of lazy index vacuum, called 'Quick Vacuum'.

Great!

My compiler shows the following warnings:

Some real feedback:

What we probably want to end up with here is new lazyvacuum.c code
that does processing for one heap page (and associated indexes) that
is really just a "partial" lazy vacuum. Though it won't do things like
advance relfrozenxid, it will do pruning for the heap page, index
tuple killing, and finally heap tuple killing. It will do all of these
things reliably, just like traditional lazy vacuum. This will be what
your background worker eventually uses.

I think we already do a partial lazy vacuum using the visibility map:
it does heap pruning and index tuple killing but doesn't advance
relfrozenxid. Since this patch adds the ability to delete a small
number of index tuples quickly, what I'd like to do with it is invoke
autovacuum more frequently, and choose between targeted index deletion
and index bulk-deletion depending on the amount of garbage, the index
size, etc. That is, it might be better if lazy vacuum scans the heap in
the ordinary way and then plans and decides on a method of index
deletion based on costs, similar to what query planning does.

Regards,

--
Masahiko Sawada

#10 Peter Geoghegan
pg@bowt.ie
In reply to: Masahiko Sawada (#9)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Tue, Jun 19, 2018 at 2:33 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I think we already do a partial lazy vacuum using the visibility map:
it does heap pruning and index tuple killing but doesn't advance
relfrozenxid.

Right, that's what I was thinking. Opportunistic HOT pruning isn't
like vacuuming because it doesn't touch indexes. This patch adds an
alternative strategy for conventional lazy vacuum that is also able to
run a page at a time if needed. Perhaps page-at-a-time operation could
later be used for doing something that is opportunistic in the same
way that pruning is opportunistic, but it's too early to worry about
that.

Since this patch adds the ability to delete a small number of index
tuples quickly, what I'd like to do with it is invoke autovacuum more
frequently, and choose between targeted index deletion and index
bulk-deletion depending on the amount of garbage, the index size, etc.
That is, it might be better if lazy vacuum scans the heap in the
ordinary way and then plans and decides on a method of index deletion
based on costs, similar to what query planning does.

That seems to be what Andrey wants to do, though right now the
prototype patch actually just always uses its alternative strategy
while doing any kind of lazy vacuuming (some simple costing code is
commented out right now). It shouldn't be too hard to add some costing
to it. Once we do that, and once we polish the patch some more, we can
do performance testing. Maybe that alone will be enough to make the
patch worth committing; "opportunistic microvacuuming" can come later,
if at all.

--
Peter Geoghegan

#11 Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#6)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On 19.06.2018 04:05, Peter Geoghegan wrote:

On Mon, Jun 18, 2018 at 2:54 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Patch '0001-retail-indextuple-deletion' introduces a new function,
amtargetdelete(), in the access method interface. Patch
'0002-quick-vacuum-strategy' implements this function for an alternative
strategy of lazy index vacuum, called 'Quick Vacuum'.

My compiler shows the following warnings:

Some real feedback:

What we probably want to end up with here is new lazyvacuum.c code
that does processing for one heap page (and associated indexes) that
is really just a "partial" lazy vacuum. Though it won't do things like
advance relfrozenxid, it will do pruning for the heap page, index
tuple killing, and finally heap tuple killing. It will do all of these
things reliably, just like traditional lazy vacuum. This will be what
your background worker eventually uses.

This is the final goal of the patch.

I doubt that you can use routines like index_beginscan() within
bttargetdelete() at all. I think that you need something closer to
_bt_doinsert() or _bt_pagedel(), that manages its own scan (your code
should probably live in nbtpage.c). It does not make sense to teach
external, generic routines like index_beginscan() about heap TID being
an implicit final index attribute, which is one reason for this (I'm
assuming that this patch relies on my own patch). Another reason is
that you need to hold an exclusive buffer lock at the point that you
identify the tuple to be killed, until the point that you actually
kill it. You don't do that now.

IOW, the approach you've taken in bttargetdelete() isn't quite correct
because you imagine that it's okay to occasionally "lose" the index
tuple that you originally found, and just move on. That needs to be
100% reliable, or else we'll end up with index tuples that point to
the wrong heap tuples in rare cases with concurrent insertions. As I
said, we want a "partial" lazy vacuum here, which must mean that it's
reliable. Note that _bt_pagedel() actually calls _bt_search() when it
deletes a page. Your patch will not be the first patch that makes
nbtree vacuuming do an index scan. You should be managing your own
insertion scan key, much like _bt_pagedel() does. If you use my patch,
_bt_search() can be taught to look for a specific heap TID.

I agree with these notes. Corrections will be made in the next version
of the patch.

Finally, doing things this way would let you delete multiple
duplicates in one shot, as I described in an earlier e-mail. Only a
single descent of the tree is needed to delete quite a few index
tuples, provided that they all happen to be logical duplicates. Again,
your background worker will take advantage of this.

This is a very interesting idea. Accordingly, I plan to change the
bttargetdelete() interface as follows:
IndexTargetDeleteStats*
amtargetdelete(IndexTargetDeleteInfo *info,
               IndexTargetDeleteStats *stats,
               Datum *values, bool *isnull);
where the structure IndexTargetDeleteInfo contains a TID list of dead
heap tuples. All index entries corresponding to this list (or only some
of them) may be deleted by one call of the amtargetdelete() function
with a single descent of the tree.

This code does not follow the Postgres style:

-       else
+       }
+       else {
+           if (rootoffnum != latestdead)
+               heap_prune_record_unused(prstate, latestdead);
heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
+       }
}

Please be more careful about that. I find it very distracting.

Done

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

#12 Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#10)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

Hi,
Based on your feedback, I have developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are
not used. Tree descent is done by _bt_search(); _bt_binsrch() is used
for positioning within the page.
2. A TID list is introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value (the logical-duplicates deletion problem).
3. Only one WAL record is written for index tuple deletion per leaf
page per amtargetdelete() call.
4. VACUUM can pre-sort the TID list for a quicker search of duplicates.

The background worker will come later.

On 19.06.2018 22:38, Peter Geoghegan wrote:

On Tue, Jun 19, 2018 at 2:33 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I think we already do a partial lazy vacuum using the visibility map:
it does heap pruning and index tuple killing but doesn't advance
relfrozenxid.

Right, that's what I was thinking. Opportunistic HOT pruning isn't
like vacuuming because it doesn't touch indexes. This patch adds an
alternative strategy for conventional lazy vacuum that is also able to
run a page at a time if needed. Perhaps page-at-a-time operation could
later be used for doing something that is opportunistic in the same
way that pruning is opportunistic, but it's too early to worry about
that.

Since this patch adds the ability to delete a small number of index
tuples quickly, what I'd like to do with it is invoke autovacuum more
frequently, and choose between targeted index deletion and index
bulk-deletion depending on the amount of garbage, the index size, etc.
That is, it might be better if lazy vacuum scans the heap in the
ordinary way and then plans and decides on a method of index deletion
based on costs, similar to what query planning does.

That seems to be what Andrey wants to do, though right now the
prototype patch actually just always uses its alternative strategy
while doing any kind of lazy vacuuming (some simple costing code is
commented out right now). It shouldn't be too hard to add some costing
to it. Once we do that, and once we polish the patch some more, we can
do performance testing. Maybe that alone will be enough to make the
patch worth committing; "opportunistic microvacuuming" can come later,
if at all.

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

Attachments:

0001-retail-indextuple-deletion.patch (text/x-patch, +220/-0)
0002-quick-vacuum-strategy.patch (text/x-patch, +211/-9)
#13 Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#12)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Fri, Jun 22, 2018 at 4:24 AM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Based on your feedback, I have developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are
not used. Tree descent is done by _bt_search(); _bt_binsrch() is used
for positioning within the page.
2. A TID list is introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value (the logical-duplicates deletion problem).
3. Only one WAL record is written for index tuple deletion per leaf
page per amtargetdelete() call.

Cool.

What is this "race" code about?

+   buffer = ReadBufferExtended(rel, MAIN_FORKNUM, ItemPointerGetBlockNumber(tid), RBM_NORMAL, NULL);
+   LockBuffer(buffer, BUFFER_LOCK_SHARE);
+
+   page = (Page) BufferGetPage(buffer);
+   offnum = ItemPointerGetOffsetNumber(tid);
+   lp = PageGetItemId(page, offnum);
+
+   /*
+    * VACUUM Races: someone already remove the tuple from HEAP. Ignore it.
+    */
+   if (!ItemIdIsUsed(lp))
+       return NULL;

It looks wrong -- why should another process have set the item as
unused? And if we assume that that's possible at all, what's to stop a
third process from actually reusing the item pointer before we arrive
(at get_tuple_by_tid()), leaving us to find a tuple that is totally
unrelated to the original tuple to be deleted?

(Also, you're not releasing the buffer lock before you return.)

4. VACUUM can pre-sort the TID list for a quicker search of duplicates.

This version of the patch prevents my own "force unique keys" patch
from working, since you're not using my proposed new
_bt_search()/_bt_binsrch()/_bt_compare() interface (you're not passing
them a heap TID). It is essential that your patch be able to quickly
reach any tuple that it needs to kill. Otherwise, the worst case
performance is totally unacceptable; it will never be okay to go
through 10%+ of the index to kill a single tuple hundreds or even
thousands of times per VACUUM. It seems to me that doing this
tid_list_search() binary search is pointless -- you should instead be
relying on logical duplicates using their heap TID as a tie-breaker.
Rather than doing a binary search within tid_list_search(), you should
instead match the presorted heap TIDs at the leaf level against the
sorted in-memory TID list. You know, a bit like a merge join.

I suggest that you go even further than this: I think that you should
just start distributing my patch as part of your patch series. You can
change my code if you need to. I also suggest using "git format patch"
with simple, short commit messages to produce patches. This makes it a
lot easier to track the version of the patch, changes over time, etc.

I understand why you'd hesitate to take ownership of my code (it has
big problems!), but the reality is that all the problems that my patch
has are also problems for your patch. One patch cannot get committed
without the other, so they are already the same project. As a bonus,
my patch will probably improve the best case performance for your
patch, since multi-deletions will now have much better locality of
access.

--
Peter Geoghegan

#14 Peter Geoghegan
pg@bowt.ie
In reply to: Peter Geoghegan (#13)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Fri, Jun 22, 2018 at 12:43 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Fri, Jun 22, 2018 at 4:24 AM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Based on your feedback, I have developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are
not used. Tree descent is done by _bt_search(); _bt_binsrch() is used
for positioning within the page.
2. A TID list is introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value (the logical-duplicates deletion problem).
3. Only one WAL record is written for index tuple deletion per leaf
page per amtargetdelete() call.

Cool.

What is this "race" code about?

I noticed another bug in your patch, when running a
"wal_consistency_checking=all" smoke test. I do this simple, generic
test for anything that touches WAL-logging, actually -- it's a good
practice to adopt.

I enable "wal_consistency_checking=all" on the installation, create a
streaming replica with pg_basebackup (which also has
"wal_consistency_checking=all"), and then run "make installcheck"
against the primary. Here is what I see on the standby when I do this
with v2 of your patch applied:

9524/2018-06-22 13:03:12 PDT LOG: entering standby mode
9524/2018-06-22 13:03:12 PDT LOG: consistent recovery state reached
at 0/30000D0
9524/2018-06-22 13:03:12 PDT LOG: invalid record length at 0/30000D0:
wanted 24, got 0
9523/2018-06-22 13:03:12 PDT LOG: database system is ready to accept
read only connections
9528/2018-06-22 13:03:12 PDT LOG: started streaming WAL from primary
at 0/3000000 on timeline 1
9524/2018-06-22 13:03:12 PDT LOG: redo starts at 0/30000D0
9524/2018-06-22 13:03:32 PDT FATAL: inconsistent page found, rel
1663/16384/1259, forknum 0, blkno 0
9524/2018-06-22 13:03:32 PDT CONTEXT: WAL redo at 0/3360B00 for
Heap2/CLEAN: remxid 599
9523/2018-06-22 13:03:32 PDT LOG: startup process (PID 9524) exited
with exit code 1
9523/2018-06-22 13:03:32 PDT LOG: terminating any other active server processes
9523/2018-06-22 13:03:32 PDT LOG: database system is shut down

I haven't investigated this at all, but I assume that the problem is a
simple oversight. The new ItemIdSetDeadRedirect() concept that you've
introduced probably necessitates changes in both the WAL logging
routines and the redo/recovery routines. You need to go make those
changes. (By the way, I don't think you should be using the constant
"3" with the ItemIdIsDeadRedirection() macro definition.)

Let me know if you get stuck on this, or need more direction.

--
Peter Geoghegan

#15Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#14)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On 23.06.2018 01:14, Peter Geoghegan wrote:

On Fri, Jun 22, 2018 at 12:43 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Fri, Jun 22, 2018 at 4:24 AM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

According to your feedback, I developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are no
longer used. The tree descent is made by _bt_search(); _bt_binsrch() is
used for positioning within the page.
2. A TID list was introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value - the logical-duplicates deletion problem.
3. Only one WAL record is written for index tuple deletion per leaf page
per amtargetdelete() call.

Cool.

What is this "race" code about?

I introduced this check with other vacuum workers in mind, which can
clean the relation concurrently. Maybe it is redundant.

I noticed another bug in your patch, when running a
"wal_consistency_checking=all" smoke test. I do this simple, generic
test for anything that touches WAL-logging, actually -- it's a good
practice to adopt.

I enable "wal_consistency_checking=all" on the installation, create a
streaming replica with pg_basebackup (which also has
"wal_consistency_checking=all"), and then run "make installcheck"
against the primary. Here is what I see on the standby when I do this
with v2 of your patch applied:

9524/2018-06-22 13:03:12 PDT LOG: entering standby mode
9524/2018-06-22 13:03:12 PDT LOG: consistent recovery state reached
at 0/30000D0
9524/2018-06-22 13:03:12 PDT LOG: invalid record length at 0/30000D0:
wanted 24, got 0
9523/2018-06-22 13:03:12 PDT LOG: database system is ready to accept
read only connections
9528/2018-06-22 13:03:12 PDT LOG: started streaming WAL from primary
at 0/3000000 on timeline 1
9524/2018-06-22 13:03:12 PDT LOG: redo starts at 0/30000D0
9524/2018-06-22 13:03:32 PDT FATAL: inconsistent page found, rel
1663/16384/1259, forknum 0, blkno 0
9524/2018-06-22 13:03:32 PDT CONTEXT: WAL redo at 0/3360B00 for
Heap2/CLEAN: remxid 599
9523/2018-06-22 13:03:32 PDT LOG: startup process (PID 9524) exited
with exit code 1
9523/2018-06-22 13:03:32 PDT LOG: terminating any other active server processes
9523/2018-06-22 13:03:32 PDT LOG: database system is shut down

I haven't investigated this at all, but I assume that the problem is a
simple oversight. The new ItemIdSetDeadRedirect() concept that you've
introduced probably necessitates changes in both the WAL logging
routines and the redo/recovery routines. You need to go make those
changes. (By the way, I don't think you should be using the constant
"3" with the ItemIdIsDeadRedirection() macro definition.)

Let me know if you get stuck on this, or need more direction.

I investigated the bug found by the simple smoke test. You're right:
manipulating line pointers in heap_page_prune() without reflecting it
in the WAL record is not a good idea.
But this consistency problem arises even on a clean PostgreSQL
installation (without my patch) once ItemIdSetDead() is replaced with
ItemIdMarkDead().
A byte-by-byte comparison of the master and replayed pages shows only a
2-byte difference in the tuple storage part of the page.
I'm not stuck yet, but good ideas are welcome.
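The masking idea later adopted for this problem can be illustrated with a standalone sketch. The helper below is hypothetical (PostgreSQL's real consistency check compares full page images after applying per-AM mask functions): it compares two page images byte by byte while ignoring positions expected to differ, such as the line-pointer bytes of a tuple marked DEAD only on the primary.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical helper: compare two page images, ignoring byte
 * positions flagged in 'masked' (e.g. line-pointer bytes of tuples
 * that are DEAD on the primary but not yet on the standby).
 */
static bool
pages_equal_masked(const unsigned char *a, const unsigned char *b,
                   const bool *masked, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        if (!masked[i] && a[i] != b[i])
            return false;       /* real divergence */
    }
    return true;
}
```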

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

#16Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Andrei Lepikhov (#12)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Fri, Jun 22, 2018 at 8:24 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Hi,
According to your feedback, I developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are no
longer used. The tree descent is made by _bt_search(); _bt_binsrch() is
used for positioning within the page.
2. A TID list was introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value - the logical-duplicates deletion problem.
3. Only one WAL record is written for index tuple deletion per leaf page
per amtargetdelete() call.
4. VACUUM can sort the TID list beforehand for a quicker search for
duplicates.

A background worker will come later.

Thank you for updating the patches! Here are some comments on the latest patch.

+static void
+quick_vacuum_index(Relation irel, Relation hrel,
+                                  IndexBulkDeleteResult **overall_stats,
+                                  LVRelStats *vacrelstats)
+{
(snip)
+       /*
+        * Collect statistical info
+        */
+       lazy_cleanup_index(irel, *overall_stats, vacrelstats);
+}

I think that we should not call lazy_cleanup_index at the end of
quick_vacuum_index because we call it multiple times during a lazy
vacuum and index statistics can be changed during vacuum. We already
call lazy_cleanup_index at the end of lazy_scan_heap.

bttargetdelete doesn't delete btree pages even if they become empty.
I think we should do that; otherwise empty pages are never recycled.
But please note that if we delete btree pages during bttargetdelete,
recyclable pages might not be recycled. That is, if we choose the
target deletion method every time, the deleted-but-not-recycled pages
could never be touched unless vacuum_cleanup_index_scale_factor is
reached. So I think we need to either run the bulk-deletion method or
do index cleanup before btpo.xact wraparound.

+               ivinfo.indexRelation = irel;
+               ivinfo.heapRelation = hrel;
+       qsort((void *) vacrelstats->dead_tuples,
+             vacrelstats->num_dead_tuples,
+             sizeof(ItemPointerData), tid_comparator);

I think sorting vacrelstats->dead_tuples is unnecessary because
garbage TIDs are already stored in sorted order.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

#17Andrei Lepikhov
lepihov@gmail.com
In reply to: Masahiko Sawada (#16)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On 26.06.2018 15:31, Masahiko Sawada wrote:

On Fri, Jun 22, 2018 at 8:24 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

Hi,
According to your feedback, I developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are no
longer used. The tree descent is made by _bt_search(); _bt_binsrch() is
used for positioning within the page.
2. A TID list was introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value - the logical-duplicates deletion problem.
3. Only one WAL record is written for index tuple deletion per leaf page
per amtargetdelete() call.
4. VACUUM can sort the TID list beforehand for a quicker search for
duplicates.

A background worker will come later.

Thank you for updating the patches! Here are some comments on the latest patch.

+static void
+quick_vacuum_index(Relation irel, Relation hrel,
+                                  IndexBulkDeleteResult **overall_stats,
+                                  LVRelStats *vacrelstats)
+{
(snip)
+       /*
+        * Collect statistical info
+        */
+       lazy_cleanup_index(irel, *overall_stats, vacrelstats);
+}

I think that we should not call lazy_cleanup_index at the end of
quick_vacuum_index because we call it multiple times during a lazy
vacuum and index statistics can be changed during vacuum. We already
call lazy_cleanup_index at the end of lazy_scan_heap.

Ok

bttargetdelete doesn't delete btree pages even if they become empty.
I think we should do that; otherwise empty pages are never recycled.
But please note that if we delete btree pages during bttargetdelete,
recyclable pages might not be recycled. That is, if we choose the
target deletion method every time, the deleted-but-not-recycled pages
could never be touched unless vacuum_cleanup_index_scale_factor is
reached. So I think we need to either run the bulk-deletion method or
do index cleanup before btpo.xact wraparound.

+               ivinfo.indexRelation = irel;
+               ivinfo.heapRelation = hrel;
+               qsort((void *) vacrelstats->dead_tuples,
+                     vacrelstats->num_dead_tuples,
+                     sizeof(ItemPointerData), tid_comparator);

OK. I think the caller of bttargetdelete() must decide when to do index
cleanup.

I think sorting vacrelstats->dead_tuples is unnecessary because
garbage TIDs are already stored in sorted order.

Sorting was introduced with the background worker and more flexible
cleaning strategies in mind, not only a full tuple-by-tuple relation
and block scan.
The caller of bttargetdelete() can set info->isSorted to prevent the
sorting operation.
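As a side note on the comparator itself, a minimal sketch of a qsort() callback establishing ascending (block, offset) TID order is below. The TidSketch type is a simplified stand-in; inside PostgreSQL one would compare real ItemPointerData values, e.g. via ItemPointerCompare().

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for ItemPointerData: block number, then offset. */
typedef struct
{
    uint32_t    ip_blkid;
    uint16_t    ip_posid;
} TidSketch;

/*
 * qsort() callback ordering TIDs by (block, offset) ascending --
 * the physical heap order in which VACUUM collects dead tuples.
 */
static int
tid_comparator(const void *pa, const void *pb)
{
    const TidSketch *a = pa;
    const TidSketch *b = pb;

    if (a->ip_blkid != b->ip_blkid)
        return a->ip_blkid < b->ip_blkid ? -1 : 1;
    if (a->ip_posid != b->ip_posid)
        return a->ip_posid < b->ip_posid ? -1 : 1;
    return 0;
}
```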

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

#18Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#13)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On 23.06.2018 00:43, Peter Geoghegan wrote:

On Fri, Jun 22, 2018 at 4:24 AM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

According to your feedback, I developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are no
longer used. The tree descent is made by _bt_search(); _bt_binsrch() is
used for positioning within the page.
2. A TID list was introduced in the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list with an
equal scan key value - the logical-duplicates deletion problem.
3. Only one WAL record is written for index tuple deletion per leaf page
per amtargetdelete() call.

Cool.

What is this "race" code about?

+   buffer = ReadBufferExtended(rel, MAIN_FORKNUM, ItemPointerGetBlockNumber(tid), RBM_NORMAL, NULL);
+   LockBuffer(buffer, BUFFER_LOCK_SHARE);
+
+   page = (Page) BufferGetPage(buffer);
+   offnum = ItemPointerGetOffsetNumber(tid);
+   lp = PageGetItemId(page, offnum);
+
+   /*
+    * VACUUM Races: someone already remove the tuple from HEAP. Ignore it.
+    */
+   if (!ItemIdIsUsed(lp))
+       return NULL;

It looks wrong -- why should another process have set the item as
unused? And if we assume that that's possible at all, what's to stop a
third process from actually reusing the item pointer before we arrive
(at get_tuple_by_tid()), leaving us to find a tuple that is totally
unrelated to the original tuple to be deleted?

(Also, you're not releasing the buffer lock before you return.)

4. VACUUM can sort the TID list beforehand for a quicker search for duplicates.

This version of the patch prevents my own "force unique keys" patch
from working, since you're not using my proposed new
_bt_search()/_bt_binsrch()/_bt_compare() interface (you're not passing
them a heap TID). It is essential that your patch be able to quickly
reach any tuple that it needs to kill. Otherwise, the worst case
performance is totally unacceptable; it will never be okay to go
through 10%+ of the index to kill a single tuple hundreds or even
thousands of times per VACUUM. It seems to me that doing this
tid_list_search() binary search is pointless -- you should instead be
relying on logical duplicates using their heap TID as a tie-breaker.
Rather than doing a binary search within tid_list_search(), you should
instead match the presorted heap TIDs at the leaf level against the
sorted in-memory TID list. You know, a bit like a merge join.

I agree with you. The binary search was implemented while awaiting your patch.

I suggest that you go even further than this: I think that you should
just start distributing my patch as part of your patch series. You can
change my code if you need to.

Good. I am ready to start distributing your patch. At the beginning of
this work I planned to make a patch for physical TID ordering in the
btree index; your patch will make that much easier.

I also suggest using "git format-patch"
with simple, short commit messages to produce patches. This makes it a
lot easier to track the version of the patch, changes over time, etc.

Ok

I understand why you'd hesitate to take ownership of my code (it has
big problems!), but the reality is that all the problems that my patch
has are also problems for your patch. One patch cannot get committed
without the other, so they are already the same project. As a bonus,
my patch will probably improve the best case performance for your
patch, since multi-deletions will now have much better locality of
access.

I still believe that the patch for physical TID ordering in btree:
1) has its own value, not only for target deletion,
2) will require only a few local changes in my code,
and that these patches can be developed independently.

I have prepared a third version of the patches. Summary:
1. DEAD tuples on a page are masked during consistency checking (see the
comments for the mask_dead_tuples() function).
2. Physical TID ordering is still not used.
3. The index cleanup after each quick_vacuum_index() call was removed.

--
Andrey Lepikhov
Postgres Professional:
https://postgrespro.com
The Russian Postgres Company

Attachments:

0002-Quick-Vacuum-Strategy-patch-v.3.patch (text/x-patch)
0001-Retail-IndexTuple-Deletion-patch-v.3.patch (text/x-patch)
#19Peter Geoghegan
pg@bowt.ie
In reply to: Masahiko Sawada (#16)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Tue, Jun 26, 2018 at 3:31 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

bttargetdelete doesn't delete btree pages even if they become empty.
I think we should do that; otherwise empty pages are never recycled.
But please note that if we delete btree pages during bttargetdelete,
recyclable pages might not be recycled. That is, if we choose the
target deletion method every time, the deleted-but-not-recycled pages
could never be touched unless vacuum_cleanup_index_scale_factor is
reached. So I think we need to either run the bulk-deletion method or
do index cleanup before btpo.xact wraparound.

As you pointed out, we can certainly never fully delete or recycle
half-dead pages using bttargetdelete. We already need to make some
kind of compromise around page deletion, and it may not be necessary
to insist that bttargetdelete does any kind of page deletion. I'm
unsure of that, though.

--
Peter Geoghegan

#20Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#18)
Re: [WIP] [B-Tree] Retail IndexTuple deletion

On Tue, Jun 26, 2018 at 11:40 PM, Andrey V. Lepikhov
<a.lepikhov@postgrespro.ru> wrote:

I still believe that the patch for physical TID ordering in btree:
1) has its own value, not only for target deletion,
2) will require only a few local changes in my code,
and that these patches can be developed independently.

I want to be clear on something now: I just don't think that this
patch has any chance of getting committed without something like my
own patch to go with it. The worst case for your patch without that
component is completely terrible. It's not really important for you to
actually formally make it part of your patch, so I'm not going to
insist on that or anything, but the reality is that my patch does not
have independent value -- and neither does yours.

I'm sorry if that sounds harsh, but this is a difficult, complicated
project. It's better to be clear about this stuff earlier on.

I have prepared a third version of the patches. Summary:
1. DEAD tuples on a page are masked during consistency checking (see the
comments for the mask_dead_tuples() function).
2. Physical TID ordering is still not used.
3. The index cleanup after each quick_vacuum_index() call was removed.

How does this patch affect opportunistic pruning in particular? Not
being able to immediately reclaim tuple space in the event of a dead
hot chain that is marked LP_DEAD could hurt quite a lot, including
with very common workloads, such as pgbench (pgbench accounts tuples
are quite a lot wider than a raw item pointer, and opportunistic
pruning is much more important than vacuuming). Is that going to be
acceptable, do you think? Have you measured the effects? Can we do
something about it, like make pruning behave differently when it's
opportunistic?

Are you aware of the difference between _bt_delitems_delete() and
_bt_delitems_vacuum(), and the considerations for hot standby? I think
that that's another TODO list item for this patch.

--
Peter Geoghegan

#21Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#20)
#22Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Andrei Lepikhov (#18)
#23Andrei Lepikhov
lepihov@gmail.com
In reply to: Kuntal Ghosh (#22)
#24Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Andrei Lepikhov (#23)
#25Dilip Kumar
dilipbalaut@gmail.com
In reply to: Andrei Lepikhov (#18)
#26Юрий Соколов
funny.falcon@gmail.com
In reply to: Andrei Lepikhov (#21)
#27Andrei Lepikhov
lepihov@gmail.com
In reply to: Юрий Соколов (#26)
#28Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#27)
#29Peter Geoghegan
pg@bowt.ie
In reply to: Peter Geoghegan (#28)
#30Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#29)
#31Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#30)
#32Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Peter Geoghegan (#31)
#33Peter Geoghegan
pg@bowt.ie
In reply to: Masahiko Sawada (#32)
#34Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#31)
#35Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#31)
#36Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Peter Geoghegan (#33)
#37Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#33)
#38Peter Geoghegan
pg@bowt.ie
In reply to: Andrei Lepikhov (#37)
#39Andrei Lepikhov
lepihov@gmail.com
In reply to: Peter Geoghegan (#38)
#40Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Andrei Lepikhov (#37)
#41Andrei Lepikhov
lepihov@gmail.com
In reply to: Masahiko Sawada (#40)
#42Andrei Lepikhov
lepihov@gmail.com
In reply to: Masahiko Sawada (#40)
#43Andrei Lepikhov
lepihov@gmail.com
In reply to: Andrei Lepikhov (#42)
#44Andrei Lepikhov
lepihov@gmail.com
In reply to: Masahiko Sawada (#40)
#45Andrei Lepikhov
lepihov@gmail.com
In reply to: Masahiko Sawada (#40)
#46Dmitry Dolgov
9erthalion6@gmail.com
In reply to: Andrei Lepikhov (#45)
#47Andres Freund
andres@anarazel.de
In reply to: Dmitry Dolgov (#46)