Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Started by Michail Nikolaev · about 2 years ago · 112 messages
#1Michail Nikolaev
michail.nikolaev@gmail.com

Hello, hackers!

I am thinking about revisiting [1] ({CREATE INDEX, REINDEX} CONCURRENTLY
improvements) in some lighter way.

Yes, a serious bug [2] was caused by this optimization, and it has now
been reverted.

But what about a safer idea in that direction:
1) add a new horizon which ignores PROC_IN_SAFE_IC backends and standby
queries
2) use this horizon for setting LP_DEAD bits in indexes (excluding
indexes being built, of course)

Index LP_DEAD hints are not used by standbys in any way (they are
simply ignored), and the heap scan done by index building does not use
them either.
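To make the idea concrete, here is a toy Python sketch, not PostgreSQL
code - Backend, data_horizon, and hint_bit_horizon are invented names -
of a second, more aggressive horizon that skips backends flagged with
PROC_IN_SAFE_IC when deciding whether an index hint bit may be set:

```python
# Toy model, NOT PostgreSQL source: all names here are illustrative.
from dataclasses import dataclass

PROC_IN_SAFE_IC = 0x01  # stand-in for the real proc-array status flag

@dataclass
class Backend:
    xmin: int          # oldest XID this backend may still need to see
    status_flags: int  # bitmask of PROC_* flags

def data_horizon(backends, next_xid):
    # Horizon for actual tuple removal: must respect every backend.
    return min((b.xmin for b in backends), default=next_xid)

def hint_bit_horizon(backends, next_xid):
    # Proposed extra horizon, used only for setting index LP_DEAD hints:
    # backends running safe concurrent index builds are ignored, since
    # index builds and standbys never consult LP_DEAD hints.
    eligible = [b.xmin for b in backends
                if not (b.status_flags & PROC_IN_SAFE_IC)]
    return min(eligible, default=next_xid)

backends = [Backend(xmin=100, status_flags=PROC_IN_SAFE_IC),  # long CIC
            Backend(xmin=250, status_flags=0)]                # a query
assert data_horizon(backends, 300) == 100      # removal still held back
assert hint_bit_horizon(backends, 250) == 250  # hint bits can advance
```

The point of the split is that the long-running CIC backend keeps
holding back real tuple removal, while index scans may still mark dead
entries past it.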

But, at the same time:
1) index scans will be much faster during index creation or while
standbys report queries
2) indexes can keep themselves fit using different optimizations
3) less WAL due to the huge number of full-page writes (caused by
tons of LP_DEAD bits in indexes)

The patch seems more or less easy to implement.
Is it worth implementing? Or is it too scary?

[1]: /messages/by-id/20210115133858.GA18931@alvherre.pgsql
[2]: /messages/by-id/17485-396609c6925b982d@postgresql.org

#2Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#1)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Fri, 15 Dec 2023, 20:07 Michail Nikolaev, <michail.nikolaev@gmail.com>
wrote:

Hello, hackers!

I am thinking about revisiting [1] ({CREATE INDEX, REINDEX} CONCURRENTLY
improvements) in some lighter way.

Yes, a serious bug [2] was caused by this optimization, and it has now
been reverted.

But what about a safer idea in that direction:
1) add a new horizon which ignores PROC_IN_SAFE_IC backends and standby
queries
2) use this horizon for setting LP_DEAD bits in indexes (excluding
indexes being built, of course)

Index LP_DEAD hints are not used by standbys in any way (they are
simply ignored), and the heap scan done by index building does not use
them either.

But, at the same time:
1) index scans will be much faster during index creation or while
standbys report queries
2) indexes can keep themselves fit using different optimizations
3) less WAL due to the huge number of full-page writes (caused by
tons of LP_DEAD bits in indexes)

The patch seems more or less easy to implement.
Is it worth implementing? Or is it too scary?

I highly doubt this is worth the additional cognitive overhead of another
liveness state, and I think there might be other issues with marking index
tuples dead in indexes before the table tuple is dead that I can't think of
right now.

I've thought about alternative solutions, too: how about getting a new
snapshot every so often?
We don't really care about the liveness of the already-scanned data; the
snapshots used for RIC are used only during the scan. C/RIC's relation's
lock level means vacuum can't run to clean up dead line items, so as long
as we only swap the backend's reported snapshot (thus xmin) while the scan
is between pages we should be able to reduce the time C/RIC is the one
backend holding back cleanup of old tuples.
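A minimal sketch of that loop, with Snapshot and take_snapshot as
invented stand-ins for the real snapshot machinery (visibility reduced
to a single xmin comparison):

```python
# Toy sketch, not PostgreSQL code: swap the registered snapshot only at
# page boundaries, so the backend's advertised xmin is released after at
# most pages_per_snapshot pages.

class Snapshot:
    def __init__(self, xmax):
        self.xmax = xmax            # first XID this snapshot cannot see
    def visible(self, tuple_xmin):
        return tuple_xmin < self.xmax

def scan_heap(pages, take_snapshot, pages_per_snapshot=32):
    """Scan pages, refreshing the snapshot every pages_per_snapshot
    pages, and only between pages (never mid-page)."""
    snapshot = take_snapshot()
    seen = []
    for i, page in enumerate(pages):
        if i > 0 and i % pages_per_snapshot == 0:
            snapshot = take_snapshot()   # the old xmin is released here
        seen.extend(x for x in page if snapshot.visible(x))
    return seen

# With a fresh snapshot per page, tuples committed mid-scan become
# visible to later pages:
clock = iter([100, 130, 200])
result = scan_heap([[50], [120], [150]],
                   lambda: Snapshot(next(clock)), pages_per_snapshot=1)
```

The safety argument above (ShareUpdateExclusiveLock keeps vacuum from
reclaiming dead line pointers, and no page is revisited) is what makes
the mid-scan swap tolerable at all.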

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)


[1]: /messages/by-id/20210115133858.GA18931@alvherre.pgsql
[2]: /messages/by-id/17485-396609c6925b982d@postgresql.org

#3Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#2)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

I've thought about alternative solutions, too: how about getting a new snapshot every so often?
We don't really care about the liveness of the already-scanned data; the snapshots used for RIC
are used only during the scan. C/RIC's relation's lock level means vacuum can't run to clean up
dead line items, so as long as we only swap the backend's reported snapshot (thus xmin) while
the scan is between pages we should be able to reduce the time C/RIC is the one backend
holding back cleanup of old tuples.

Hm, it looks like an interesting idea! It may be more dangerous, but
at least it feels much more elegant than the LP_DEAD-related approach.
Also, it feels like we may apply this to both phases (the first and second scans).
The original patch [1] helped only with the second one (after the call
to set_indexsafe_procflags).

But for the first scan we were allowed to do so only for non-unique indexes
because of:

* The reason for doing that is to avoid
* bogus unique-index failures due to concurrent UPDATEs (we might see
* different versions of the same row as being valid when we pass over them,
* if we used HeapTupleSatisfiesVacuum). This leaves us with an index that
* does not contain any tuples added to the table while we built the index.

Also, [1] was limited to indexes without expressions and predicates
[2], because such expressions may execute queries against other tables (sic!).
One possible solution is to add some checks to make sure no
user-defined functions are used.
But as far as I understand, this affects only CIC for now and does not
affect the ability to use the proposed technique (updating the snapshot
from time to time).

However, I think we need a more-or-less formal proof that it is safe - it
is really challenging to keep all the possible cases in one's head. I'll
try to do something here.
Another possible issue may be caused by the new locking pattern - we
will be required to wait for all transactions started before the end
of the phase to exit.

[1]: /messages/by-id/20210115133858.GA18931@alvherre.pgsql
[2]: /messages/by-id/CAAaqYe_tq_Mtd9tdeGDsgQh+wMvouithAmcOXvCbLaH2PPGHvA@mail.gmail.com

#4Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#3)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Sun, 17 Dec 2023, 21:14 Michail Nikolaev, <michail.nikolaev@gmail.com> wrote:

Hello!

I've thought about alternative solutions, too: how about getting a new snapshot every so often?
We don't really care about the liveness of the already-scanned data; the snapshots used for RIC
are used only during the scan. C/RIC's relation's lock level means vacuum can't run to clean up
dead line items, so as long as we only swap the backend's reported snapshot (thus xmin) while
the scan is between pages we should be able to reduce the time C/RIC is the one backend
holding back cleanup of old tuples.

Hm, it looks like an interesting idea! It may be more dangerous, but
at least it feels much more elegant than the LP_DEAD-related approach.
Also, it feels like we may apply this to both phases (the first and second scans).
The original patch [1] helped only with the second one (after the call
to set_indexsafe_procflags).

But for the first scan we were allowed to do so only for non-unique indexes
because of:

* The reason for doing that is to avoid
* bogus unique-index failures due to concurrent UPDATEs (we might see
* different versions of the same row as being valid when we pass over them,
* if we used HeapTupleSatisfiesVacuum). This leaves us with an index that
* does not contain any tuples added to the table while we built the index.

Yes, for that we'd need an extra scan of the index that validates
uniqueness. I think there was a proposal (though it may only have been
an idea) some time ago, about turning existing non-unique indexes into
unique ones by validating the data. Such a system would likely be very
useful to enable this optimization.

Also, [1] was limited to indexes without expressions and predicates
[2], because such expressions may execute queries against other tables (sic!).

Note that the use of such expressions would be a violation of the
function's definition; it would depend on data from other tables, which
makes the function behave like a STABLE function, as opposed to the
IMMUTABLE that is required for index expressions. So, I don't think we
need to take special care to be correct for incorrectly marked
function definitions.

One possible solution is to add some checks to make sure no
user-defined functions are used.
But as far as I understand, this affects only CIC for now and does not
affect the ability to use the proposed technique (updating the snapshot
from time to time).

However, I think we need a more-or-less formal proof that it is safe - it
is really challenging to keep all the possible cases in one's head. I'll
try to do something here.

I just realised there is one issue with this design: We can't cheaply
reset the snapshot during the second table scan:
It is critically important that the second scan of R/CIC uses an index
contents summary (made with index_bulk_delete) that was created while
the current snapshot was already registered. If we didn't do that, the
following would occur:

1. The index is marked ready for inserts from concurrent backends, but
not yet ready for queries.
2. We get the bulkdelete data
3. A concurrent backend inserts a new tuple T on heap page P, inserts
it into the index, and commits. This tuple is not in the summary, but
has been inserted into the index.
4. R/CIC resets the snapshot, making T visible.
5. R/CIC scans page P, finds that tuple T has to be indexed but is not
present in the summary, thus inserts that tuple into the index (which
already had it inserted at 3)

This thus would be a logic bug, as indexes assume at-most-once
semantics for index tuple insertion; duplicate insertions are an error.
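The race in steps 1-5 can be replayed with plain sets (a pure
simulation; no real index code, and "summary" stands in for the
index_bulk_delete contents snapshot):

```python
# Toy replay of the stale-summary race described above.
index = set()            # TIDs present in the new index
summary = set(index)     # step 2: bulkdelete summary, taken early

# Step 3: a concurrent backend inserts tuple T into heap and index.
T = ("page_P", 1)
index.add(T)

# Step 4: R/CIC resets its snapshot, so T is now visible to the scan.
visible_tuples = {T}

# Step 5: rescan logic - "insert anything visible but not in summary".
duplicates = 0
for t in visible_tuples:
    if t not in summary:
        if t in index:
            duplicates += 1   # at-most-once insertion violated
        index.add(t)

assert duplicates == 1        # demonstrates the logic bug
```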

So, the "reset the snapshot every so often" trick cannot be applied in
phase 3 (the rescan), or we'd have to do an index_bulk_delete call
every time we reset the snapshot. Rescanning might be worth the cost
(e.g. when using BRIN), but that is very unlikely.

Alternatively, we'd need to find another way to prevent us from
inserting these duplicate entries - maybe by storing the scan's data
in a buffer to later load into the index after another
index_bulk_delete()? Counterpoint: for BRIN indexes that'd likely
require a buffer much larger than the result index would be.

Either way, for the first scan (i.e. phase 2 "build new indexes") this
is not an issue: we don't care about what transaction adds/deletes
tuples at that point.
For all we know, all tuples of the table may be deleted concurrently
before we even allow concurrent backends to start inserting tuples,
and the algorithm would still work as it does right now.

Another possible issue may be caused by the new locking pattern - we
will be required to wait for all transactions started before the end
of the phase to exit.

What do you mean by "new locking pattern"? We already keep an
ShareUpdateExclusiveLock on every heap table we're accessing during
R/CIC, and that should already prevent any concurrent VACUUM
operations, right?

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

#5Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#4)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

Also, it feels like we may apply this to both phases (the first and second scans).
The original patch [1] helped only with the second one (after the call
to set_indexsafe_procflags).

Oops, I was wrong here. The original version of the patch also applied
to both phases.

Note that the use of such expressions would be a violation of the
function's definition; it would depend on data from other tables which
makes the function behave like a STABLE function, as opposed to the
IMMUTABLE that is required for index expressions. So, I don't think we
should specially care about being correct for incorrectly marked
function definitions.

Yes, but such cases could probably cause crashes as well...
So, I think it is better to check for them in custom functions. But I am
still not sure whether such limitations are still required for the
proposed optimization or not.

I just realised there is one issue with this design: We can't cheaply
reset the snapshot during the second table scan:
It is critically important that the second scan of R/CIC uses an index
contents summary (made with index_bulk_delete) that was created while
the current snapshot was already registered.

So, the "reset the snapshot every so often" trick cannot be applied in
phase 3 (the rescan), or we'd have to do an index_bulk_delete call
every time we reset the snapshot. Rescanning might be worth the cost
(e.g. when using BRIN), but that is very unlikely.

Hm, I think it is still possible. We could simply recheck the tuples
we see against the snapshot currently used for the scan. If the "old"
snapshot can also see the tuple (HeapTupleSatisfiesHistoricMVCC), then
search for it in the index summary.
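In toy terms (invented names, and visibility reduced to an xmin
comparison against a stand-in snapshot boundary), the recheck decision
might look like:

```python
# Toy decision function, not PostgreSQL code: should the rescan insert
# this tuple, given the index summary and the reference "old" snapshot?
def needs_insert(tuple_xmin, old_snapshot_xmax, summary, tid):
    if tid in summary:
        return False                      # already present in the index
    if tuple_xmin < old_snapshot_xmax:    # the old snapshot saw it too
        return True                       # predates the summary: ours
    return False                          # concurrently inserted: skip

summary = {("p1", 1)}
assert needs_insert(90, 100, summary, ("p1", 2)) is True    # old, missing
assert needs_insert(150, 100, summary, ("p1", 3)) is False  # concurrent
assert needs_insert(90, 100, summary, ("p1", 1)) is False   # summarized
```

The middle case encodes the assumption discussed below: a tuple the old
snapshot could not see must have been inserted (and indexed) by a
concurrent backend after the index became ready for inserts.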

What do you mean by "new locking pattern"? We already keep an
ShareUpdateExclusiveLock on every heap table we're accessing during
R/CIC, and that should already prevent any concurrent VACUUM
operations, right?

I was thinking not about "classical" locking, but about waiting for
other backends with WaitForLockers(heaplocktag, ShareLock, true). But I
think everything should be fine.

Best regards,
Michail.

#6Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#5)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Wed, 20 Dec 2023 at 10:56, Michail Nikolaev
<michail.nikolaev@gmail.com> wrote:

Note that the use of such expressions would be a violation of the
function's definition; it would depend on data from other tables, which
makes the function behave like a STABLE function, as opposed to the
IMMUTABLE that is required for index expressions. So, I don't think we
need to take special care to be correct for incorrectly marked
function definitions.

Yes, but such cases could probably cause crashes as well...
So, I think it is better to check for them in custom functions. But I am
still not sure whether such limitations are still required for the
proposed optimization or not.

I think the contents could be inconsistent, but not more inconsistent
than if the index were filled across multiple transactions using
inserts. Either way, I don't see it breaking more things than are
already broken in that way in other places - at most it will introduce
another path that exposes the broken state caused by mislabeled
functions.

I just realised there is one issue with this design: We can't cheaply
reset the snapshot during the second table scan:
It is critically important that the second scan of R/CIC uses an index
contents summary (made with index_bulk_delete) that was created while
the current snapshot was already registered.

So, the "reset the snapshot every so often" trick cannot be applied in
phase 3 (the rescan), or we'd have to do an index_bulk_delete call
every time we reset the snapshot. Rescanning might be worth the cost
(e.g. when using BRIN), but that is very unlikely.

Hm, I think it is still possible. We could simply recheck the tuples
we see against the snapshot currently used for the scan. If the "old"
snapshot can also see the tuple (HeapTupleSatisfiesHistoricMVCC), then
search for it in the index summary.

That's an interesting method.

How would this deal with tuples not visible to the old snapshot?
Presumably we can assume they're newer than that snapshot (the old
snapshot didn't have it, but the new one does, so it's committed after
the old snapshot, making them newer), so that backend must have
inserted it into the index already, right?

HeapTupleSatisfiesHistoricMVCC

That function has this comment marker:
"Only usable on tuples from catalog tables!"
Is that correct even for this?

Should this deal with any potential XID wraparound, too?
How does this behave when the newly inserted tuple's xmin gets frozen?
This would be allowed to happen during heap page pruning, afaik - no
rules that I know of which are against that - but it would create
issues where normal snapshot visibility rules would indicate it
visible to both snapshots regardless of whether it actually was
visible to the older snapshot when that snapshot was created...

Either way, "Historic snapshot" isn't something I've worked with
before, so that goes onto my "figure out how it works" pile.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

#7Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#6)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

How would this deal with tuples not visible to the old snapshot?
Presumably we can assume they're newer than that snapshot (the old
snapshot didn't have it, but the new one does, so it's committed after
the old snapshot, making them newer), so that backend must have
inserted it into the index already, right?

Yes, exactly.

HeapTupleSatisfiesHistoricMVCC

That function has this comment marker:
"Only usable on tuples from catalog tables!"
Is that correct even for this?

Yeah, we just need HeapTupleSatisfiesVisibility (which calls
HeapTupleSatisfiesMVCC) instead.

Should this deal with any potential XID wraparound, too?

Yeah, it looks like we should care about such a case somehow.

Possible options here:

1) Skip vac_truncate_clog while CIC is running. In fact, I think it's
not that much worse than the current state - datfrozenxid is still
updated in the catalog and will be considered the next time
vac_update_datfrozenxid is called (i.e., by the next VACUUM on any table).

2) Delay vac_truncate_clog while CIC is running.
In such a case, if it was skipped, we will need to re-run it from the
index-building backend later.

3) Wait for 64-bit xids :)

4) Any ideas?

In addition, for the first and second options, we need logic to cancel
the second phase in the case of ForceTransactionIdLimitUpdate.
But maybe I'm missing something and tuples may be frozen ignoring
the datfrozenxid values (using some horizon calculated at runtime
based on backends' xmin).

How does this behave when the newly inserted tuple's xmin gets frozen?
This would be allowed to happen during heap page pruning, afaik - no
rules that I know of which are against that - but it would create
issues where normal snapshot visibility rules would indicate it
visible to both snapshots regardless of whether it actually was
visible to the older snapshot when that snapshot was created...

Yes, good catch.
Assuming we have somehow prevented vac_truncate_clog from occurring
during CIC, we can leave frozen and potentially frozen
(xmin < frozenXID) tuples for the second phase.

So, the first phase processes items that are:
* not frozen
* xmin > frozenXID (cannot have been frozen)
* visible to the snapshot

and the second phase, items that are:
* frozen
* xmin < frozenXID (may have been frozen)
* not in the index summary
* visible to the "old" snapshot
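One way to restate that split is as a toy classifier; frozenXID, the
visibility checks, and the index summary are all simplified stand-ins
here, not the real freeze horizon or snapshot tests:

```python
# Toy restatement of the two-phase split above.
FROZEN = None   # a frozen tuple no longer carries its original xmin

def handled_in_phase(xmin, frozen_xid, visible_now, visible_to_old,
                     in_summary):
    may_be_frozen = xmin is FROZEN or xmin < frozen_xid
    if not may_be_frozen and visible_now:
        return 1                      # the first scan indexes it
    if may_be_frozen and not in_summary and visible_to_old:
        return 2                      # the second scan picks it up
    return None                       # nothing left to do

assert handled_in_phase(200, 100, True, True, False) == 1
assert handled_in_phase(FROZEN, 100, True, True, False) == 2
assert handled_in_phase(50, 100, True, True, True) is None
```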

You might also wonder - why is the first stage needed at all? Why not
just use batch processing during the initial index build?

Best regards,
Mikhail.

#8Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#7)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Yes, good catch.
Assuming we have somehow prevented vac_truncate_clog from occurring
during CIC, we can leave frozen and potentially frozen
(xmin < frozenXID) tuples for the second phase.

Just realized that we can leave this to the first stage to improve efficiency.
Since the XID horizon is held, anything that can be frozen will be
visible in the first stage.

#9Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#8)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello.

Realized my last idea is invalid (because tuples are frozen using a
dynamically calculated horizon) - so don't waste your time on it :)

Need to think a little bit more here.

Thanks,
Mikhail.

#10Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#9)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

It seems like the "old" snapshot idea is still a valid one.

Should this deal with any potential XID wraparound, too?

As far as I understand, in our case we are not affected by this in any way.
Vacuum on our table is not possible because of locking, so nothing
may be frozen (see below).
In the case of super-long index building, transaction limits will
stop new connections using the current regular infrastructure, because
it is based on relation data (and not the actual xmin of backends).

How does this behave when the newly inserted tuple's xmin gets frozen?
This would be allowed to happen during heap page pruning, afaik - no
rules that I know of which are against that - but it would create
issues where normal snapshot visibility rules would indicate it
visible to both snapshots regardless of whether it actually was
visible to the older snapshot when that snapshot was created...

As far as I can see, heap_page_prune never freezes any tuples.
In the case of regular vacuum, it is used this way: call
heap_page_prune, then heap_prepare_freeze_tuple, and then
heap_freeze_execute_prepared.

Merry Christmas,
Mikhail.

#11Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#10)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Mon, 25 Dec 2023 at 15:12, Michail Nikolaev
<michail.nikolaev@gmail.com> wrote:

Hello!

It seems like the "old" snapshot idea is still a valid one.

Should this deal with any potential XID wraparound, too?

As far as I understand, in our case we are not affected by this in any way.
Vacuum on our table is not possible because of locking, so nothing
may be frozen (see below).
In the case of super-long index building, transaction limits will
stop new connections using the current regular infrastructure, because
it is based on relation data (and not the actual xmin of backends).

How does this behave when the newly inserted tuple's xmin gets frozen?
This would be allowed to happen during heap page pruning, afaik - no
rules that I know of which are against that - but it would create
issues where normal snapshot visibility rules would indicate it
visible to both snapshots regardless of whether it actually was
visible to the older snapshot when that snapshot was created...

As far as I can see, heap_page_prune never freezes any tuples.
In the case of regular vacuum, it is used this way: call
heap_page_prune, then heap_prepare_freeze_tuple, and then
heap_freeze_execute_prepared.

Correct, but there are changes being discussed where we would freeze
tuples during pruning as well [0], which would invalidate that
implementation detail. And, if I had to choose between improved
opportunistic freezing and improved R/CIC, I'd probably choose
improved freezing over R/CIC.

As an alternative, we _could_ keep track of concurrent index inserts
using a dummy index (with the same predicate) which only holds the
TIDs of the inserted tuples. We'd keep it as an empty index in phase
1, and every time we reset the visibility snapshot we now only need to
scan that index to know what tuples were concurrently inserted. This
should have a significantly lower IO overhead than repeated full index
bulkdelete scans for the new index in the second table scan phase of
R/CIC. However, in the worst case it could still require another
O(tablesize) of storage.
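A toy sketch of such a tracker follows; the names are illustrative
only, and the real thing would be an index AM keyed by TID rather than
a Python list:

```python
# Toy "dummy TID index": records TIDs inserted by concurrent backends
# and is drained each time R/CIC resets its visibility snapshot.
class ConcurrentTidLog:
    def __init__(self):
        self._tids = []

    def record(self, tid):
        # Called on each concurrent index insert (same predicate as the
        # index being built).
        self._tids.append(tid)

    def drain(self):
        # Called when R/CIC resets its snapshot: hand over everything
        # inserted since the last reset and start empty again.
        tids, self._tids = self._tids, []
        return set(tids)

log = ConcurrentTidLog()
summary = {("p1", 1)}        # index contents summary from phase start
log.record(("p2", 7))        # a concurrent insert during the scan
summary |= log.drain()       # cheap update instead of a full bulkdelete
assert ("p2", 7) in summary
```

The IO saving claimed above comes from draining only the small log at
each snapshot reset, instead of rescanning the whole new index.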

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

[0]: /messages/by-id/CAAKRu_a+g2oe6aHJCbibFtNFiy2aib4E31X9QYJ_qKjxZmZQEg@mail.gmail.com

#12Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#11)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

Correct, but there are changes being discussed where we would freeze
tuples during pruning as well [0], which would invalidate that
implementation detail. And, if I had to choose between improved
opportunistic freezing and improved R/CIC, I'd probably choose
improved freezing over R/CIC.

As another option, we could extract a dedicated horizon value for
opportunistic freezing, and use some flags in the R/CIC backend to keep
it at the required value.

Best regards,
Michail.

#13Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#12)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello, Melanie!

Sorry to interrupt you, just a quick question.

Correct, but there are changes being discussed where we would freeze
tuples during pruning as well [0], which would invalidate that
implementation detail. And, if I had to choose between improved
opportunistic freezing and improved R/CIC, I'd probably choose
improved freezing over R/CIC.

Do you have any patches/threads related to that refactoring
(opportunistic freezing of tuples during pruning) [0]?
This may affect the idea of the current thread (the latest version of it
is mostly in [1]) - it may be required to disable such a feature for a
particular relation temporarily, or to affect the horizon used for
pruning (without holding xmin).

Just not sure - is it reasonable to start coding right now, or should I
wait for some prune/freeze-related patch first?

[0]: /messages/by-id/CAAKRu_a+g2oe6aHJCbibFtNFiy2aib4E31X9QYJ_qKjxZmZQEg@mail.gmail.com
[1]: /messages/by-id/CANtu0ojRX=osoiXL9JJG6g6qOowXVbVYX+mDsN+2jmFVe=eG7w@mail.gmail.com

#14Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#13)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

I just realised there is one issue with this design: We can't cheaply
reset the snapshot during the second table scan:
It is critically important that the second scan of R/CIC uses an index
contents summary (made with index_bulk_delete) that was created while
the current snapshot was already registered.

So, the "reset the snapshot every so often" trick cannot be applied in
phase 3 (the rescan), or we'd have to do an index_bulk_delete call
every time we reset the snapshot. Rescanning might be worth the cost
(e.g. when using BRIN), but that is very unlikely.

Hm, I think it is still possible. We could simply recheck the tuples
we see against the snapshot currently used for the scan. If the "old"
snapshot can also see the tuple (HeapTupleSatisfiesHistoricMVCC), then
search for it in the index summary.

That's an interesting method.

How would this deal with tuples not visible to the old snapshot?
Presumably we can assume they're newer than that snapshot (the old
snapshot didn't have it, but the new one does, so it's committed after
the old snapshot, making them newer), so that backend must have
inserted it into the index already, right?

I made a draft of the patch, and this idea does not work.

The problem is generally the same:

* the reference snapshot sees tuple X
* the reference snapshot is used to create the index summary (but
tuple X is not in the index summary)
* tuple X is updated to Y, creating a HOT chain
* we start a scan with a new temporary snapshot (it sees Y; X is too old for it)
* tuple X is pruned from the HOT chain because it is no longer
protected by any snapshot
* we see tuple Y in the scan with the temporary snapshot
* it is not in the index summary - so we need to check whether the
reference snapshot can see it
* there is no way to tell whether the reference snapshot was able
to see tuple X - because we would need the full HOT chain (including
tuple X) for that
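The loss of information can be replayed with a toy HOT chain - tuple
versions as plain dicts, and visibility reduced to an xmin comparison
against a stand-in snapshot boundary:

```python
# Pure simulation, not heap code: after pruning, the chain no longer
# carries the evidence that the reference snapshot once saw it.
def ref_snapshot_saw_chain(chain, ref_boundary):
    # Could the reference snapshot see any version in this HOT chain?
    return any(t["xmin"] < ref_boundary for t in chain)

REF = 100  # reference snapshot boundary (illustrative)

chain = [{"tid": "X", "xmin": 90},    # seen by the reference snapshot
         {"tid": "Y", "xmin": 150}]   # concurrent HOT update
assert ref_snapshot_saw_chain(chain, REF) is True   # must be indexed

# HOT pruning removes X once no snapshot protects it any more ...
chain = [t for t in chain if t["tid"] != "X"]
# ... and now the very same chain looks like a concurrent insert, so
# the rescan would wrongly skip it:
assert ref_snapshot_saw_chain(chain, REF) is False
```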

Best regards,
Michail.

#15Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#14)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Thu, 1 Feb 2024, 17:06 Michail Nikolaev, <michail.nikolaev@gmail.com> wrote:

I just realised there is one issue with this design: We can't cheaply
reset the snapshot during the second table scan:
It is critically important that the second scan of R/CIC uses an index
contents summary (made with index_bulk_delete) that was created while
the current snapshot was already registered.

I think the best way for this to work would be an index method that
exclusively stores TIDs, and of which we can quickly determine new
tuples, too. I was thinking about something like GIN's format, but
using (generation number, tid) instead of ([colno, colvalue], tid) as
key data for the internal trees, and would be unlogged (because the
data wouldn't have to survive a crash). Then we could do something
like this for the second table scan phase:

0. index->indisready is set
[...]
1. Empty the "changelog index", resetting storage and the generation number.
2. Take index contents snapshot of new index, store this.
3. Loop until completion:
4a. Take visibility snapshot
4b. Update generation number of the changelog index, store this.
4c. Take index snapshot of "changelog index" for data up to the
current stored generation number. Not including, because we only need
to scan that part of the index that were added before we created our
visibility snapshot, i.e. TIDs labeled with generation numbers between
the previous iteration's generation number (incl.) and this
iteration's generation (excl.).
4d. Combine the current index snapshot with that of the "changelog"
index, and save this.
Note that this needs to take care to remove duplicates.
4e. Scan segment of table (using the combined index snapshot) until we
need to update our visibility snapshot or have scanned the whole
table.

This should give similar, if not the same, behaviour as what we
have when we RIC a table with several small indexes, without requiring
us to scan a full index of data several times.
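The generation-number merge in steps 4c/4d can be sketched with plain
sets (no real index AM implied; the changelog dict stands in for the
proposed (generation, TID) tree):

```python
# Toy model of steps 4b-4d above.
changelog = {0: set()}        # generation number -> concurrently
                              # inserted TIDs while that gen was current
index_snapshot = {("p1", 1)}  # step 2: contents snapshot of new index

def combined_snapshot(upto_generation):
    # Steps 4c/4d: merge the index snapshot with changelog entries
    # strictly below the current generation ("not including"); sets
    # deduplicate for free.
    merged = set(index_snapshot)
    for gen, tids in changelog.items():
        if gen < upto_generation:
            merged |= tids
    return merged

# A concurrent backend inserts a TID while generation 0 is current ...
changelog[0].add(("p2", 3))
# ... then step 4b bumps the generation before the next scan segment.
changelog[1] = set()
snap = combined_snapshot(upto_generation=1)
assert snap == {("p1", 1), ("p2", 3)}   # now skipped by the rescan
```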

Attempt at proving this approach's correctness:
In phase 3, after each step 4b:
All matching tuples of the table that are in the visibility snapshot:
* Were created before scan 1's snapshot, thus in the new index's snapshot, or
* Were created after scan 1's snapshot but before index->indisready,
thus not in the new index's snapshot, nor in the changelog index, or
* Were created after the index was set as indisready, and committed
before the previous iteration's visibility snapshot, thus in the
combined index snapshot, or
* Were created after the index was set as indisready, after the
previous visibility snapshot was taken, but before the current
visibility snapshot was taken, and thus definitely included in the
changelog index.

Because we hold a snapshot, no data in the table that we should see is
removed, so we don't have a chance of broken HOT chains.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

#16Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#15)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hello!

I think the best way for this to work would be an index method that
exclusively stores TIDs, and of which we can quickly determine new
tuples, too. I was thinking about something like GIN's format, but
using (generation number, tid) instead of ([colno, colvalue], tid) as
key data for the internal trees, and would be unlogged (because the
data wouldn't have to survive a crash)

Yeah, this seems to be a reasonable approach, but there are some
doubts related to it - it needs a new index type as well as unlogged
indexes to be introduced - and this may make the patch too invasive to
be merged. Also, some way to remove the index from the catalog in case
of a crash may be required.

A few more thoughts:
* it is possible to go without a generation number - we may provide a
way to do some kind of fast index lookup (by TID) directly during the
second table scan phase.
* one more option is to maintain Tuplesorts (instead of an index)
with TIDs as a changelog, and merge with the index snapshot after
taking a new visibility snapshot. But it is not clear how to share the
same Tuplesort among multiple inserting backends.
* crazy idea - what about doing the scan in the index we are
building? We have the tuple, so we have all the data indexed in the
index. We could try to do an index scan using that data to get all
matching tuples and find the one with our TID :) Yes, in some cases it
may be too slow because of the huge number of TIDs we need to scan, and
btree also copies the whole page even though we need a single item. But
some additional index method may help - it feels like something related
to uniqueness (though that exists only in btree anyway).

Thanks,
Mikhail.

#17Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#16)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

One more idea is to simply forbid HOT pruning while the second phase
is running. It is not possible right now anyway because of the
snapshot being held.

Possible enhancements:
* we may apply the restriction only to particular tables
* we may apply the restriction only to the parts of the table not yet
scanned by R/CIC

Yes, it is not an elegant solution - limited and architecturally
questionable - but a simple one.

#18Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#16)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Wed, 21 Feb 2024 at 00:33, Michail Nikolaev
<michail.nikolaev@gmail.com> wrote:

Hello!

I think the best way for this to work would be an index method that
exclusively stores TIDs, and of which we can quickly determine new
tuples, too. I was thinking about something like GIN's format, but
using (generation number, tid) instead of ([colno, colvalue], tid) as
key data for the internal trees, and would be unlogged (because the
data wouldn't have to survive a crash)

Yeah, this seems to be a reasonable approach, but there are some
doubts related to it - it needs a new index type as well as unlogged
indexes to be introduced, which may make the patch too invasive to be
merged.

I suppose so, though persistence is usually just to keep things
correct in case of crashes, and this "index" is only there to support
processes that don't expect to survive crashes.

Also, some way to remove the index from the catalog in case of
a crash may be required.

That's less of an issue though, we already accept that a crash during
CIC/RIC leaves unusable indexes around, so "needs more cleanup" is not
exactly a blocker.

A few more thoughts:
* it is possible to go without generation number - we may provide a
way to do some kind of fast index lookup (by TID) directly during the
second table scan phase.

While possible, I don't think this would be more performant than the
combination approach, and it comes at the cost of potentially much
more random IO when the table is being aggressively updated.

* one more option is to maintain a Tuplesorts (instead of an index)
with TIDs as changelog and merge with index snapshot after taking a
new visibility snapshot. But it is not clear how to share the same
Tuplesort with multiple inserting backends.

Tuplesort requires the leader process to wait for concurrent backends
to finish their sort before it can start consuming their runs. This
would make it a very bad alternative to the "changelog index" as the
CIC process would require on-demand actions from concurrent backends
(flush of sort state). I'm not convinced that's somehow easier.
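To illustrate the constraint described above, here is a toy sketch
(Python; Tuplesort itself is C and works on sort runs, so this is only
an analogy) of the leader's final step - merging already-sorted
per-backend runs of TIDs. The merge can only begin once *every* run is
complete, which is why on-demand flushes from concurrent backends would
be needed:

```python
import heapq

def merge_tid_runs(runs):
    # Merge pre-sorted per-backend runs of (block, offset) TIDs into
    # one sorted stream, as a sort leader would. heapq.merge assumes
    # each input run is already fully sorted and final - the leader
    # cannot consume a run that a backend is still appending to.
    return list(heapq.merge(*runs))

# Each concurrent backend contributes one sorted run of TIDs.
runs = [
    [(0, 1), (2, 5)],   # backend 1
    [(0, 3), (1, 2)],   # backend 2
]
```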

* a crazy idea - what about doing the scan in the index we are
building? We have the tuple, so we have all the data indexed in the
index. We may try to do an index scan using that data to get all
matching tuples and find the one with our TID :)

We can't rely on that, because we have no guarantee we can find the
tuple quickly enough. Equality-based indexing is very much optional,
and so are TID-based checks (outside the current vacuum-related APIs),
so finding one TID can (and probably will) take O(indexsize) when the
tuple is not in the index, which is one reason for ambulkdelete() to
exist.
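To make the O(indexsize) point concrete, here is a toy sketch (Python,
hypothetical names): index entries are ordered by key value, so a target
TID gives no search guidance, and when the TID is absent every entry
must be visited:

```python
def find_key_by_tid(index_entries, target_tid):
    # index_entries are (key, tid) pairs ordered by key.  Ordering by
    # key is useless for locating a TID, so the only general method is
    # a linear scan: O(index size) whenever the TID is not present.
    for key, tid in index_entries:
        if tid == target_tid:
            return key
    return None
```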

Kind regards,

Matthias van de Meent

#19Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#17)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

On Wed, 21 Feb 2024 at 09:35, Michail Nikolaev
<michail.nikolaev@gmail.com> wrote:

One more idea is to simply forbid HOT pruning while the second phase
is running. It is not possible right now anyway because of the
snapshot being held.

Possible enhancements:
* we may apply the restriction only to particular tables
* we may apply the restriction only to the parts of the table not yet
scanned by R/CIC

Yes, it is not an elegant solution - limited and architecturally
questionable - but a simple one.

How do you suppose this would work differently from a long-lived
normal snapshot, which is how it works right now?
Would it be exclusively for that relation? How would this be
integrated with e.g. heap_page_prune_opt?

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

#20Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#19)
Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements

Hi!

How do you suppose this would work differently from a long-lived
normal snapshot, which is how it works right now?

The difference is in the ability to take a new visibility snapshot
periodically during the second phase, rechecking the visibility of
tuples according to the "reference" snapshot (which is taken only
once, like now). It is the approach from (1), but with a workaround
for the issues caused by heap_page_prune_opt.
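A toy model of that scheme (Python; the names and the simplistic
xid-comparison visibility test are invented for illustration, not
PostgreSQL code): every tuple found by the second scan is checked
against the once-taken reference snapshot, while the cheap visibility
snapshot is re-taken between batches so it need not pin the horizon for
the whole scan:

```python
def second_phase_scan(heap_tuples, reference_xmax, take_fresh_snapshot,
                      batch_size=2):
    # heap_tuples: list of (tid, xmin) pairs found by the table scan.
    # A tuple is indexed iff it is visible to the reference snapshot
    # (here crudely modeled as xmin < reference_xmax).
    indexed = []
    snapshot = take_fresh_snapshot()      # cheap, refreshed repeatedly
    for i, (tid, xmin) in enumerate(heap_tuples):
        if xmin < reference_xmax:         # visible to reference snapshot
            indexed.append(tid)
        if (i + 1) % batch_size == 0:     # periodic refresh: the horizon
            snapshot = take_fresh_snapshot()  # can advance between batches
    return indexed, snapshot
```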

Would it be exclusively for that relation?

Yes, only for that affected relation. Other relations are unaffected.

How would this be integrated with e.g. heap_page_prune_opt?

Probably by some flag in RelationData, but I am not sure here yet.

If the idea looks sane, I could try to extend my POC - it should not
be too hard (I already have tests to make sure it is correct).

(1): /messages/by-id/CANtu0oijWPRGRpaRR_OvT2R5YALzscvcOTFh-=uZKUpNJmuZtw@mail.gmail.com

#21Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#20)
#22Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#21)
#23Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#22)
#24Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#23)
#25Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#24)
#26Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#25)
#27Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#26)
#28Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#27)
#29Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#28)
#30Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#29)
#31Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#30)
#32Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#31)
#33Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#32)
#34Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#33)
#35Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#34)
#36Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#35)
#37Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#36)
#38Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#37)
#39Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#38)
#40Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#39)
#41Michael Paquier
michael@paquier.xyz
In reply to: Michail Nikolaev (#39)
#42Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michael Paquier (#41)
#43Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#42)
#44Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#43)
#45Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Michail Nikolaev (#43)
#46Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Matthias van de Meent (#45)
#47Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#46)
#48Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#47)
#49Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#48)
#50Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Michail Nikolaev (#49)
#51Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Michail Nikolaev (#50)
#52Michail Nikolaev
michail.nikolaev@gmail.com
In reply to: Mihail Nikalayeu (#51)
#53Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Michail Nikolaev (#52)
#54Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#53)
#55Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#54)
#56Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mihail Nikalayeu (#55)
#57Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Alvaro Herrera (#56)
#58Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#57)
#59Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Mihail Nikalayeu (#58)
#60Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Sergey Sargsyan (#59)
#61Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Mihail Nikalayeu (#60)
#62Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Sergey Sargsyan (#61)
#63Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Sergey Sargsyan (#62)
#64Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Mihail Nikalayeu (#63)
#65Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Sergey Sargsyan (#64)
#66Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Mihail Nikalayeu (#65)
#67Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Sergey Sargsyan (#66)
#68Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#67)
#69Sergey Sargsyan
sergey.sargsyan.2001@gmail.com
In reply to: Mihail Nikalayeu (#68)
#70Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Sergey Sargsyan (#69)
#71Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#70)
#72Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#71)
#73Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#72)
#74Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#73)
#75Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#74)
#76Antonin Houska
ah@cybertec.at
In reply to: Michail Nikolaev (#1)
#77Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Antonin Houska (#76)
#78Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Antonin Houska (#76)
#79Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Mihail Nikalayeu (#74)
#80Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#77)
#81Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Matthias van de Meent (#79)
#82Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Mihail Nikalayeu (#81)
#83Antonin Houska
ah@cybertec.at
In reply to: Matthias van de Meent (#78)
#84Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Antonin Houska (#83)
#85Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Matthias van de Meent (#82)
#86Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Mihail Nikalayeu (#85)
#87Hannu Krosing
hannu@tm.ee
In reply to: Matthias van de Meent (#86)
#88Hannu Krosing
hannu@tm.ee
In reply to: Hannu Krosing (#87)
#89Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Hannu Krosing (#87)
#90Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Hannu Krosing (#88)
#91Hannu Krosing
hannu@tm.ee
In reply to: Matthias van de Meent (#89)
#92Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Hannu Krosing (#91)
#93Hannu Krosing
hannu@tm.ee
In reply to: Mihail Nikalayeu (#92)
#94Antonin Houska
ah@cybertec.at
In reply to: Matthias van de Meent (#86)
#95Antonin Houska
ah@cybertec.at
In reply to: Hannu Krosing (#88)
#96Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Antonin Houska (#95)
#97Antonin Houska
ah@cybertec.at
In reply to: Mihail Nikalayeu (#96)
#98Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Antonin Houska (#97)
#99Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Antonin Houska (#94)
#100Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Mihail Nikalayeu (#98)
#101Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Hannu Krosing (#91)
#102Antonin Houska
ah@cybertec.at
In reply to: Matthias van de Meent (#99)
#103Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Antonin Houska (#102)
#104Antonin Houska
ah@cybertec.at
In reply to: Matthias van de Meent (#103)
#105Hannu Krosing
hannu@tm.ee
In reply to: Antonin Houska (#104)
#106Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Hannu Krosing (#105)
#107Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Heikki Linnakangas (#106)
#108Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Heikki Linnakangas (#106)
#109Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Matthias van de Meent (#108)
#110Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Matthias van de Meent (#100)
#111Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#110)
#112Mihail Nikalayeu
mihailnikalayeu@gmail.com
In reply to: Mihail Nikalayeu (#111)