Questions/Observations related to Gist vacuum

Started by Amit Kapila · over 6 years ago · 34 messages · hackers
#1 Amit Kapila
amit.kapila16@gmail.com

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.
2. Right now, in gistbulkdelete we make a note of empty leaf pages and
internal pages, and then in the second pass, during gistvacuumcleanup,
we delete all the empty leaf pages. I was wondering why, unlike nbtree,
we have delayed the deletion of empty pages till gistvacuumcleanup. I
don't see any problem if we do this during gistbulkdelete itself,
similar to nbtree; also, I think there is some advantage in marking the
pages as deleted as early as possible. Basically, if the vacuum
operation is canceled or errors out between gistbulkdelete and
gistvacuumcleanup, then the deleted pages could be marked as
recyclable very early in the next vacuum operation. The other advantage
of doing this during gistbulkdelete is that we can avoid sharing
information between gistbulkdelete and gistvacuumcleanup, which is
quite helpful for a parallel vacuum as the information is not trivial
(it is internally stored as an in-memory B-tree). OTOH, there might be
some advantage to delaying the deletion of pages, especially in the
case of multiple scans during a single VACUUM command: we could
probably delete all empty leaf pages in one go, which could in some
cases lead to fewer internal page reads. However, I am not sure if it
is really advantageous to postpone the deletion, as there seem to be
some downsides to it as well. I don't see it documented why, unlike
nbtree, we delay deletion of empty pages till
gistvacuumcleanup, but I might be missing something.

Thoughts?

[1]: /messages/by-id/CAA4eK1JEQ2y3uNucNopDjK8pse6xSe5=_oknoWfRQvAF=VqsBA@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#2 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Amit Kapila (#1)
Re: Questions/Observations related to Gist vacuum

On 15/10/2019 09:37, Amit Kapila wrote:

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit -
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.

Oops. internal_page_set and empty_leaf_set were supposed to be allocated
in that memory context. As things stand, we leak them until end of
vacuum, in a multi-pass vacuum.
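The leak pattern described here can be illustrated with a small toy model (Python purely for illustration; the real code is PostgreSQL C, and the class below is a stand-in, not the actual memory context API): per-pass data allocated in a long-lived context accumulates across passes, while allocating it in a dedicated context that is reset each pass keeps usage flat.

```python
class MemoryContext:
    """Trivial stand-in for a PostgreSQL memory context (illustration only)."""
    def __init__(self, name):
        self.name = name
        self.allocations = []

    def alloc(self, obj):
        self.allocations.append(obj)
        return obj

    def reset(self):
        self.allocations.clear()

def vacuum_leaky(n_passes):
    top = MemoryContext("TopTransactionContext")
    for _ in range(n_passes):
        # Bug analog: the page sets live in the long-lived context,
        # so they pile up until the end of the vacuum.
        top.alloc(set())  # internal_page_set
        top.alloc(set())  # empty_leaf_set
    return len(top.allocations)

def vacuum_fixed(n_passes):
    top = MemoryContext("TopTransactionContext")
    page_set_context = MemoryContext("GiST VACUUM page set context")
    for _ in range(n_passes):
        page_set_context.alloc(set())  # internal_page_set
        page_set_context.alloc(set())  # empty_leaf_set
        page_set_context.reset()       # freed after each pass
    return len(top.allocations) + len(page_set_context.allocations)

print(vacuum_leaky(5), vacuum_fixed(5))  # 10 0
```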

2. Right now, in gistbulkdelete we make a note of empty leaf pages and
internals pages and then in the second pass during gistvacuumcleanup,
we delete all the empty leaf pages. I was thinking why unlike nbtree,
we have delayed the deletion of empty pages till gistvacuumcleanup. I
don't see any problem if we do this during gistbulkdelete itself
similar to nbtree, also I think there is some advantage in marking the
pages as deleted as early as possible. Basically, if the vacuum
operation is canceled or errored out between gistbulkdelete and
gistvacuumcleanup, then I think the deleted pages could be marked as
recyclable very early in next vacuum operation. The other advantage
of doing this during gistbulkdelete is we can avoid sharing
information between gistbulkdelete and gistvacuumcleanup which is
quite helpful for a parallel vacuum as the information is not trivial
(it is internally stored as in-memory Btree). OTOH, there might be
some advantage for delaying the deletion of pages especially in the
case of multiple scans during a single VACUUM command. We can
probably delete all empty leaf pages in one go which could in some
cases lead to fewer internal page reads. However, I am not sure if it
is really advantageous to postpone the deletion as there seem to be
some downsides to it as well. I don't see it documented why unlike
nbtree we consider delaying deletion of empty pages till
gistvacuumcleanup, but I might be missing something.

Hmm. The thinking is/was that removing the empty pages is somewhat
expensive, because it has to scan all the internal nodes to find the
downlinks to the to-be-deleted pages. Furthermore, it needs to scan all
the internal pages (or at least until it has found all the downlinks),
regardless of how many empty pages are being deleted. So it makes sense
to do it only once, for all the empty pages. You're right though, that
there would be advantages, too, in doing it after each pass. All things
considered, I'm not sure which is better.
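As a rough way to quantify this tradeoff, assuming (as stated above) that each round of empty-page deletion costs roughly one scan of the internal pages regardless of how many pages are deleted, a toy cost model might look like this (the constants are made up; for the common single-pass vacuum the two strategies cost the same):

```python
# Toy cost model: internal-page reads spent finding downlinks of empty
# leaf pages, under the assumption stated in the text above.

def internal_reads_per_pass(n_passes, internal_pages):
    # Delete empty leaves in gistbulkdelete: one internal scan per pass.
    return n_passes * internal_pages

def internal_reads_at_cleanup(n_passes, internal_pages):
    # Delay deletion to gistvacuumcleanup: a single internal scan,
    # no matter how many bulkdelete passes ran.
    return internal_pages

passes, internals = 3, 1000
print(internal_reads_per_pass(passes, internals))   # 3000
print(internal_reads_at_cleanup(passes, internals)) # 1000
```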

- Heikki

#3 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Heikki Linnakangas (#2)
Re: Questions/Observations related to Gist vacuum

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 15/10/2019 09:37, Amit Kapila wrote:

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit -
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.

Oops. internal_page_set and empty_leaf_set were supposed to be allocated
in that memory context. As things stand, we leak them until end of
vacuum, in a multi-pass vacuum.

Here is a patch to fix this issue.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments:

user_correct_memorycontext_gist_stat.patch (application/octet-stream, +10/-0)
#4 Amit Kapila
amit.kapila16@gmail.com
In reply to: Heikki Linnakangas (#2)
Re: Questions/Observations related to Gist vacuum

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 15/10/2019 09:37, Amit Kapila wrote:

2. Right now, in gistbulkdelete we make a note of empty leaf pages and
internals pages and then in the second pass during gistvacuumcleanup,
we delete all the empty leaf pages. I was thinking why unlike nbtree,
we have delayed the deletion of empty pages till gistvacuumcleanup. I
don't see any problem if we do this during gistbulkdelete itself
similar to nbtree, also I think there is some advantage in marking the
pages as deleted as early as possible. Basically, if the vacuum
operation is canceled or errored out between gistbulkdelete and
gistvacuumcleanup, then I think the deleted pages could be marked as
recyclable very early in next vacuum operation. The other advantage
of doing this during gistbulkdelete is we can avoid sharing
information between gistbulkdelete and gistvacuumcleanup which is
quite helpful for a parallel vacuum as the information is not trivial
(it is internally stored as in-memory Btree). OTOH, there might be
some advantage for delaying the deletion of pages especially in the
case of multiple scans during a single VACUUM command. We can
probably delete all empty leaf pages in one go which could in some
cases lead to fewer internal page reads. However, I am not sure if it
is really advantageous to postpone the deletion as there seem to be
some downsides to it as well. I don't see it documented why unlike
nbtree we consider delaying deletion of empty pages till
gistvacuumcleanup, but I might be missing something.

Hmm. The thinking is/was that removing the empty pages is somewhat
expensive, because it has to scan all the internal nodes to find the
downlinks to the to-be-deleted pages. Furthermore, it needs to scan all
the internal pages (or at least until it has found all the downlinks),
regardless of how many empty pages are being deleted. So it makes sense
to do it only once, for all the empty pages. You're right though, that
there would be advantages, too, in doing it after each pass.

I was thinking more about this, and it seems that there could be more
cases where delaying the delete-marking of pages further delays the
recycling of pages. It is quite possible that immediately after bulk
delete the value of nextFullXid (used as deleteXid) is X, while during
vacuum cleanup it can be X + N, where the chances of N being large are
higher when there are multiple passes of the vacuum scan. Now, if we
had set the value of deleteXid to X, then there are more chances for
the next vacuum to recycle the pages. I am not sure, but it might be that
in the future we could come up with something (say, if we can recompute
RecentGlobalXmin again) whereby we can recycle pages from the first index
scan in the next scan of the index during a single vacuum operation.
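The recycling condition being discussed can be sketched as follows (a deliberate simplification that ignores XID wraparound and the details of the actual GiST recyclability check; the numbers are made up):

```python
def page_recyclable(delete_xid, oldest_running_xid):
    # A deleted page can be recycled once no transaction that might still
    # see it can be running, i.e. its deleteXid is older than the oldest
    # running transaction (wraparound handling omitted for clarity).
    return delete_xid < oldest_running_xid

# Stamp pages at bulkdelete time (X) vs at cleanup time (X + N):
X, N = 1000, 500
oldest_running = 1200  # oldest xid still running when the next vacuum looks
print(page_recyclable(X, oldest_running))      # True: recyclable already
print(page_recyclable(X + N, oldest_running))  # False: must wait longer
```

The later the stamp, the longer the page must wait before the next vacuum can recycle it, which is the point being made above.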

This is just to emphasize the point that doing the delete-marking of
pages in the same pass has advantages; otherwise, I understand that
there are advantages in delaying it as well.

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

I think we have the below options w.r.t. GiST indexes for parallel vacuum:
a. don't allow GiST indexes to participate in parallel vacuum
b. allow deletion of leaf pages in bulkdelete for parallel workers
c. always allow deletion of leaf pages in bulkdelete
d. invent some mechanism to share all the GiST stats via shared memory

(a) is not a very good option, but it is a safe one, as we can
extend it in the future, and we might decide to go with it, especially
if we can't decide among the other options. (b) would serve the need
but would add some additional checks in gistbulkdelete and would look
more like a hack. (c) would be best, but I think it will be difficult
to be sure that it is a good decision for all types of cases. (d) could
require a lot of effort, and I am not sure it is worth adding that
complexity to the proposed patch.

Do you have any thoughts on this?

Just to give you an idea of the current parallel vacuum patch: the
master backend scans the heap and forms the dead tuple array in DSM,
and then we launch one worker for each index based on the availability
of workers and share the dead tuple array with each worker. Each
worker performs bulkdelete for its index. In the end, we perform
cleanup of all the indexes, either via a worker or the master backend,
based on some conditions.
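The flow described above can be sketched as a toy model (threads stand in for the parallel worker processes, a plain Python list stands in for the dead tuple array in DSM, and all names and TIDs are illustrative, not the patch's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def heap_scan():
    # Master backend collects dead tuple TIDs from the heap.
    return [(0, 1), (0, 7), (3, 2)]  # (block, offset) pairs

def index_bulkdelete(index_name, dead_tids):
    # Each worker removes index entries pointing at the dead TIDs;
    # here we just report how many TIDs it was handed.
    return (index_name, len(dead_tids))

dead_tids = heap_scan()  # shared via DSM in the real patch
indexes = ["idx_gist", "idx_btree", "idx_gin"]

# One worker per index, each reading the same shared dead-tuple array.
with ThreadPoolExecutor(max_workers=len(indexes)) as pool:
    results = list(pool.map(lambda ix: index_bulkdelete(ix, dead_tids), indexes))

print(results)  # [('idx_gist', 3), ('idx_btree', 3), ('idx_gin', 3)]
```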

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#5 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Amit Kapila (#4)
Re: Questions/Observations related to Gist vacuum

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

Hopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. So I don't think we need to jump through many hoops to optimize the multi-pass case.

- Heikki

#6 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#3)
Re: Questions/Observations related to Gist vacuum

On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 15/10/2019 09:37, Amit Kapila wrote:

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit -
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.

Oops. internal_page_set and empty_leaf_set were supposed to be allocated
in that memory context. As things stand, we leak them until end of
vacuum, in a multi-pass vacuum.

Here is a patch to fix this issue.

The patch looks good to me. I have slightly modified the comments and
removed an unnecessary initialization.

Heikki, are you fine with me committing and backpatching this to 12? Let
me know if you have a different idea for the fix.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

0001-Fix-memory-leak-introduced-in-commit-7df159a620.patch (application/octet-stream, +10/-1)
#7 Amit Kapila
amit.kapila16@gmail.com
In reply to: Heikki Linnakangas (#5)
Re: Questions/Observations related to Gist vacuum

On Wed, Oct 16, 2019 at 7:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

Makes sense. I think we can write a patch for it and prepare the
parallel vacuum patch on top of it. Once the parallel vacuum is in
committable shape, we can commit the GiST-index-related patch first,
followed by the parallel vacuum patch.

Hopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. So I don't think we need to jump through many hoops to optimize the multi-pass case.

Yeah, that will be a good improvement.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#8 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#7)
Re: Questions/Observations related to Gist vacuum

On Thu, Oct 17, 2019 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 16, 2019 at 7:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

makes sense. I think we can write a patch for it and prepare the
parallel vacuum patch on top of it. Once the parallel vacuum is in a
committable shape, we can commit the gist-index related patch first
followed by parallel vacuum patch.

+1
I can write a patch for the same.

Hopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. So I don't think we need to jump through many hoops to optimize the multi-pass case.

Yeah, that will be a good improvement.

+1

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#9 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Amit Kapila (#6)
Re: Questions/Observations related to Gist vacuum

On 17/10/2019 05:31, Amit Kapila wrote:

On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 15/10/2019 09:37, Amit Kapila wrote:

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit -
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.

Oops. internal_page_set and empty_leaf_set were supposed to be allocated
in that memory context. As things stand, we leak them until end of
vacuum, in a multi-pass vacuum.

Here is a patch to fix this issue.

The patch looks good to me. I have slightly modified the comments and
removed unnecessary initialization.

Heikki, are you fine me committing and backpatching this to 12? Let
me know if you have a different idea to fix.

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

- Heikki

#10 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Heikki Linnakangas (#9)
Re: Questions/Observations related to Gist vacuum

On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 17/10/2019 05:31, Amit Kapila wrote:

On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 15/10/2019 09:37, Amit Kapila wrote:

While reviewing a parallel vacuum patch [1], we noticed a few things
about $SUBJECT implemented in commit -
7df159a620b760e289f1795b13542ed1b3e13b87.

1. A new memory context GistBulkDeleteResult->page_set_context has
been introduced, but it doesn't seem to be used.

Oops. internal_page_set and empty_leaf_set were supposed to be allocated
in that memory context. As things stand, we leak them until end of
vacuum, in a multi-pass vacuum.

Here is a patch to fix this issue.

The patch looks good to me. I have slightly modified the comments and
removed unnecessary initialization.

Heikki, are you fine me committing and backpatching this to 12? Let
me know if you have a different idea to fix.

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

From my side, I have tested it with a multi-pass vacuum using a
gist index, and after the fix it's using the expected memory context.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#11 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#10)
Re: Questions/Observations related to Gist vacuum

On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 17/10/2019 05:31, Amit Kapila wrote:

The patch looks good to me. I have slightly modified the comments and
removed unnecessary initialization.

Heikki, are you fine me committing and backpatching this to 12? Let
me know if you have a different idea to fix.

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

From my side, I have tested it with the multi-pass vacuum using the
gist index and after the fix, it's using expected memory context.

I have also verified that, but I think what we can additionally test
here is that without the patch memory leaks into
TopTransactionContext (CurrentMemoryContext), whereas after the patch it
shouldn't leak during a multi-pass vacuum.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#12 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#11)
Re: Questions/Observations related to Gist vacuum

On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:


On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 17/10/2019 05:31, Amit Kapila wrote:

The patch looks good to me. I have slightly modified the comments and
removed unnecessary initialization.

Heikki, are you fine me committing and backpatching this to 12? Let
me know if you have a different idea to fix.

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

From my side, I have tested it with the multi-pass vacuum using the
gist index and after the fix, it's using expected memory context.

I have also verified that, but I think what additionally we can test
here is that without the patch it will leak the memory in
TopTransactionContext (CurrentMemoryContext), but after patch it
shouldn't leak it during multi-pass vacuum.

Makes sense to me; I will test that by tomorrow.

#13 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#12)
Re: Questions/Observations related to Gist vacuum

On Thu, Oct 17, 2019 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:

On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 17/10/2019 05:31, Amit Kapila wrote:

The patch looks good to me. I have slightly modified the comments and
removed unnecessary initialization.

Heikki, are you fine me committing and backpatching this to 12? Let
me know if you have a different idea to fix.

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

From my side, I have tested it with the multi-pass vacuum using the
gist index and after the fix, it's using expected memory context.

I have also verified that, but I think what additionally we can test
here is that without the patch it will leak the memory in
TopTransactionContext (CurrentMemoryContext), but after patch it
shouldn't leak it during multi-pass vacuum.

Make sense to me, I will test that by tomorrow.

I have performed the test to observe the memory usage in
TopTransactionContext using the MemoryContextStats function from gdb.

For testing this, I collected a sample in gistvacuumscan every time
after it resets page_set_context. I collected 3 samples each for
head and for the patch. We can clearly see that on head the memory
accumulates in TopTransactionContext, whereas with the patch no
memory accumulates.

head:
TopTransactionContext: 1056832 total in 2 blocks; 3296 free (5
chunks); 1053536 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1056944 bytes in 2 blocks; 3296 free (5 chunks); 1053648 used

TopTransactionContext: 1089600 total in 4 blocks; 19552 free (14
chunks); 1070048 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1089712 bytes in 4 blocks; 19552 free (14 chunks); 1070160 used

TopTransactionContext: 1122368 total in 5 blocks; 35848 free (20
chunks); 1086520 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1122480 bytes in 5 blocks; 35848 free (20 chunks); 1086632 used

With Patch:
TopTransactionContext: 1056832 total in 2 blocks; 3296 free (1
chunks); 1053536 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used

TopTransactionContext: 1056832 total in 2 blocks; 3296 free (1
chunks); 1053536 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used

TopTransactionContext: 1056832 total in 2 blocks; 3296 free (1
chunks); 1053536 used
GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0
free (0 chunks); 112 used
Grand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#14 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Heikki Linnakangas (#5)
Re: Questions/Observations related to Gist vacuum

On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

Are we planning to do this only if it is called from parallel vacuum
workers or in general?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#15 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#13)
Re: Questions/Observations related to Gist vacuum

On Fri, Oct 18, 2019 at 9:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 17, 2019 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:

On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

Thanks! Looks good to me. Did either of you test it, though, with a
multi-pass vacuum?

From my side, I have tested it with the multi-pass vacuum using the
gist index and after the fix, it's using expected memory context.

I have also verified that, but I think what additionally we can test
here is that without the patch it will leak the memory in
TopTransactionContext (CurrentMemoryContext), but after patch it
shouldn't leak it during multi-pass vacuum.

Make sense to me, I will test that by tomorrow.

I have performed the test to observe the memory usage in the
TopTransactionContext using the MemoryContextStats function from gdb.

For testing this, in gistvacuumscan every time, after it resets the
page_set_context, I have collected the sample. I have collected 3
samples for both the head and the patch. We can clearly see that on
the head the memory is getting accumulated in the
TopTransactionContext whereas with the patch there is no memory
getting accumulated.

Thanks for the test. It shows that prior to the patch the memory was
leaked in TopTransactionContext during a multi-pass vacuum, and
after the patch there is no leak. I will commit the patch early next
week unless Heikki or someone wants more tests.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#16 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#14)
Re: Questions/Observations related to Gist vacuum

On Fri, Oct 18, 2019 at 9:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

Are we planning to do this only if it is called from parallel vacuum
workers or in general?

I think we can do it in general, as adding a check for parallel
vacuum there would look a bit hackish. It is not clear that we get
enough benefit from keeping it for the cleanup phase of the index, as
discussed in the emails above. Heikki and others, let us know if you
don't agree here.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#17 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#16)
Re: Questions/Observations related to Gist vacuum

On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 18, 2019 at 9:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>
wrote:

All things
considered, I'm not sure which is better.

Yeah, this is a tough call to make, but if we can allow it to delete
the pages in bulkdelete conditionally for parallel vacuum workers,
then it would be better.

Yeah, if it's needed for parallel vacuum, maybe that tips the scale.

Are we planning to do this only if it is called from parallel vacuum
workers or in general?

I think we can do it in general as adding some check for parallel
vacuum there will look bit hackish.

I agree with that point.
It is not clear if we get enough

benefit by keeping it for cleanup phase of the index as discussed in
emails above. Heikki, others, let us know if you don't agree here.

I have prepared a first version of the patch. Currently, I am
performing empty page deletion in all cases.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments:

delete_emptypages_in_gistbulkdelete_v1.patch (application/octet-stream, +52/-45)
#18 Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#15)
Re: Questions/Observations related to Gist vacuum

On Fri, Oct 18, 2019 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Thanks for the test. It shows that prior to patch the memory was
getting leaked in TopTransactionContext during multi-pass vacuum and
after patch, there is no leak. I will commit the patch early next
week unless Heikki or someone wants some more tests.

Pushed.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#19 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#18)
Re: Questions/Observations related to Gist vacuum

On Mon, Oct 21, 2019 at 11:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 18, 2019 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Thanks for the test. It shows that prior to patch the memory was
getting leaked in TopTransactionContext during multi-pass vacuum and
after patch, there is no leak. I will commit the patch early next
week unless Heikki or someone wants some more tests.

Pushed.

Thanks.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#20 Andrey Borodin
amborodin@acm.org
In reply to: Dilip Kumar (#17)
Re: Questions/Observations related to Gist vacuum

Hi!

On 18 Oct 2019, at 13:21, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think we can do it in general as adding some check for parallel
vacuum there will look bit hackish.

I agree with that point.
It is not clear if we get enough

benefit by keeping it for cleanup phase of the index as discussed in
emails above. Heikki, others, let us know if you don't agree here.

I have prepared a first version of the patch. Currently, I am
performing an empty page deletion for all the cases.

I've taken a look at the patch, and cannot understand one simple thing...
We should not call gistvacuum_delete_empty_pages() for the same gist_stats twice.
Put another way, once the function is called we should somehow update or zero empty_leaf_set.
Does this invariant hold in your patch?
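The invariant in question can be sketched like this (a toy model: the dict and function below are illustrative stand-ins for the patch's gist_stats and gistvacuum_delete_empty_pages(), not the actual code):

```python
def delete_empty_pages(stats):
    # Consume the recorded empty leaf pages...
    deleted = sorted(stats["empty_leaf_set"])
    # ...and zero the set, so that calling this again on the same
    # stats cannot try to delete the same pages twice.
    stats["empty_leaf_set"].clear()
    return deleted

stats = {"empty_leaf_set": {3, 7, 9}}
print(delete_empty_pages(stats))  # [3, 7, 9]
print(delete_empty_pages(stats))  # []: second call is a no-op
```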

Best regards, Andrey Borodin.

#21 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Andrey Borodin (#20)
#22 Andrey Borodin
amborodin@acm.org
In reply to: Dilip Kumar (#21)
#23 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Andrey Borodin (#22)
#24 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#17)
#25 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#24)
#26 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#25)
#27 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#26)
#28 Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#27)
#29 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#28)
#30 Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#29)
#31 Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#30)
#32 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#31)
#33 Mahendra Singh Thalor
mahi6run@gmail.com
In reply to: Amit Kapila (#31)
#34 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Mahendra Singh Thalor (#33)