POC: Lock updated tuples in tuple_update() and tuple_delete()
Hackers,
When working in the read committed transaction isolation mode
(default), we have the following sequence of actions when
tuple_update() or tuple_delete() finds a concurrently updated tuple:
1. tuple_update()/tuple_delete() returns TM_Updated
2. tuple_lock()
3. Re-evaluate plan qual (recheck if we still need to update/delete
and calculate the new tuple for update)
4. tuple_update()/tuple_delete() (this time should be successful,
since we've previously locked the tuple).
I wonder if we should merge steps 1 and 2. We could reuse some of the
effort already spent in tuple_update()/tuple_delete() when locking the
tuple. In the heap table access method, tuple_lock() has to start with
the first tuple in the chain, but tuple_update()/tuple_delete() has
already visited it. For undo-based table access methods,
tuple_update()/tuple_delete() starts from the latest version, so why
not place the tuple lock immediately once a concurrent update is
detected? I think this patch should give some performance benefit
under high concurrency.
Also, the patch simplifies the code in nodeModifyTable.c by getting
rid of the nested case. I also got rid of the extra
table_tuple_fetch_row_version() in ExecUpdate: why re-fetch the old
tuple when it should be exactly the same tuple we've just locked?
I'm going to check the performance impact. Thoughts and feedback are welcome.
------
Regards,
Alexander Korotkov
Attachments:
0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch (+181 -172)
Hi Alexander,
Thoughts and feedback are welcome.
I took some preliminary look at the patch. I'm going to need more time
to meditate on the proposed changes and to figure out the performance
impact.
So far I just wanted to let you know that the patch applied OK for me
and passed all the tests. The `else` branch here seems redundant:
+ if (!updated)
+ {
+ /* Should not encounter speculative tuple on recheck */
+ Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));
- ReleaseBuffer(buffer);
+ ReleaseBuffer(buffer);
+ }
+ else
+ {
+ updated = false;
+ }
Also, I wish there were a few more comments, since some of the
proposed changes are not that straightforward.
--
Best regards,
Aleksander Alekseev
Hi again,
+ if (!updated)
+ {
+ /* Should not encounter speculative tuple on recheck */
+ Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));
- ReleaseBuffer(buffer);
+ ReleaseBuffer(buffer);
+ }
+ else
+ {
+ updated = false;
+ }
OK, I got confused here. I suggest changing the if (!...) { .. } else
{ .. } code to if (...) { .. } else { .. }.
--
Best regards,
Aleksander Alekseev
Hi Alexander,
I'm going to need more time to meditate on the proposed changes and to figure out the performance impact.
OK, it turned out this patch is slightly more complicated than I
initially thought, but I think I managed to get some vague
understanding of what's going on.
I tried to reproduce the case with concurrently updated tuples you
described on the current `master` branch. I created a new table:
```
CREATE TABLE phonebook(
"id" SERIAL PRIMARY KEY NOT NULL,
"name" NAME NOT NULL,
"phone" INT NOT NULL);
INSERT INTO phonebook ("name", "phone")
VALUES ('Alice', 123), ('Bob', 456), ('Charlie', 789);
```
Then I opened two sessions and attached them with LLDB. I did:
```
(lldb) b heapam_tuple_update
(lldb) c
```
... in both cases because I wanted to see two calls (steps 2 and 4) to
heapam_tuple_update() and check the return values.
Then I did:
```
session1 =# BEGIN;
session2 =# BEGIN;
session1 =# UPDATE phonebook SET name = 'Alex' WHERE name = 'Alice';
```
This update succeeds and I see heapam_tuple_update() returning TM_Ok.
```
session2 =# UPDATE phonebook SET name = 'Alfred' WHERE name = 'Alice';
```
This update hangs on a lock.
```
session1 =# COMMIT;
```
Now session2 unfreezes and returns 'UPDATE 0'. table_tuple_update()
was called once and returned TM_Updated. Also session2 sees an updated
tuple now. So apparently the visibility check (step 3) didn't pass.
At this point I'm slightly confused. I don't see where a performance
improvement is expected, considering that session2 gets blocked until
session1 commits.
Could you please walk me through here? Am I using the right test case
or maybe you had another one in mind? Which steps do you consider
expensive and expect to be mitigated by the patch?
--
Best regards,
Aleksander Alekseev
Hi Aleksander!
Thank you for your efforts reviewing this patch.
On Thu, Jul 7, 2022 at 12:43 PM Aleksander Alekseev
<aleksander@timescale.com> wrote:
Now session2 unfreezes and returns 'UPDATE 0'. table_tuple_update()
was called once and returned TM_Updated. Also session2 sees an updated
tuple now. So apparently the visibility check (step 3) didn't pass.
Yes. But it's not exactly a visibility check. Session2 re-evaluates
the WHERE condition on the most recent row version (bypassing the
snapshot). The WHERE condition is no longer true, thus the row is not
updated.
At this point I'm slightly confused. I don't see where a performance
improvement is expected, considering that session2 gets blocked until
session1 commits.
Could you please walk me through here? Am I using the right test case
or maybe you had another one in mind? Which steps do you consider
expensive and expect to be mitigated by the patch?
This patch is not intended to change any high-level logic. At the
high level, the transaction which updated the row still holds a lock
on it until it finishes. The possible positive performance impact I
expect comes from doing the work of the two calls tuple_update() and
tuple_lock() in a single call of tuple_update(). If we do this in one
call, we can save some effort, for instance locking the same buffer
once instead of twice.
------
Regards,
Alexander Korotkov
Hi, hackers!
I ran the following benchmark on master branch (15) vs patch (15-lock):
On a 36-vcore AWS server, I ran an UPDATE-only pgbench script with 50
connections on pgbench_tellers with 100 rows. The idea was to
introduce as much concurrency for updates as possible while avoiding
too many clients being in a wait state.
Indexes were not built to avoid index-update-related delays.
I did 2 runs, each consisting of 6 series of updates (1st run:
master-patch-master-patch-master-patch; 2nd run:
patch-master-patch-master-patch-master).
Each series started a fresh server and did VACUUM FULL, so that heap
bloat from the previous series would not affect the current one. Each
series collected data for 10 minutes, with the first minute's data
being dropped.
Disk-related operations were suppressed where possible (WAL, fsync, etc.)
postgresql.conf:
fsync = off
autovacuum = off
full_page_writes = off
max_worker_processes = 99
max_parallel_workers = 99
max_connections = 100
shared_buffers = 4096MB
work_mem = 50MB
Attached are pictures of the 2 runs, plus the shell script and SQL
script that were run.
According to htop, all 36 cores were loaded to ~94% in each series.
I'm not sure how to interpret the results. It seems the TPS difference
between runs is significant, with average performance with the lock
patch (15lock) seeming a little bit faster than master (15).
Could someone try to repeat this on another server? What do you think?
--
Best regards,
Pavel Borisov,
Supabase, https://supabase.com/
Hi, Pavel!
On Fri, Jul 29, 2022 at 11:12 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:
Could someone try to repeat this on another server? What do you think?
Thank you for your benchmarks. The TPS variation is high, and the run
order heavily affects the result. Nevertheless, I think there is a
small but noticeable positive effect from the patch. I'll continue
working on the patch, bringing it into a more acceptable shape.
------
Regards,
Alexander Korotkov
On Fri, 1 Jul 2022 at 16:49, Alexander Korotkov <aekorotkov@gmail.com> wrote:
The patch does not apply on top of HEAD as in [1], please post a rebased patch:
=== Applying patches on top of PostgreSQL commit ID
eb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===
=== applying patch
./0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch
patching file src/backend/executor/nodeModifyTable.c
...
Hunk #3 FAILED at 1376.
...
1 out of 15 hunks FAILED -- saving rejects to file
src/backend/executor/nodeModifyTable.c.rej
[1]: http://cfbot.cputube.org/patch_41_4099.log
Regards,
Vignesh
Hi, Vignesh!
On Wed, 4 Jan 2023 at 12:41, vignesh C <vignesh21@gmail.com> wrote:
The patch does not apply on top of HEAD as in [1], please post a rebased patch.
The rebased patch is attached. It's just a change in formatting, no
changes in code though.
Regards,
Pavel Borisov,
Supabase.
Attachments:
v2-0001-Lock-updated-tuples-in-tuple_update-and-tuple_del.patch (+181 -172)
On Wed, 4 Jan 2023 at 12:52, Pavel Borisov <pashkin.elfe@gmail.com> wrote:
Hi, Vignesh!
The rebased patch is attached. It's just a change in formatting, no
changes in code though.
One more update of a patchset to avoid compiler warnings.
Regards,
Pavel Borisov
Attachments:
v3-0001-Lock-updated-tuples-in-tuple_update-and-tuple_del.patch (+182 -173)
Hi, Pavel!
On Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:
One more update of a patchset to avoid compiler warnings.
Thank you for your help. I'm going to provide a revised version of the
patch with comments and a commit message in the next couple of days.
------
Regards,
Alexander Korotkov
On Wed, Jan 4, 2023 at 5:05 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:
On Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:
One more update of a patchset to avoid compiler warnings.
Thank you for your help. I'm going to provide the revised version of
patch with comments and commit message in the next couple of days.
The revised patch is attached. It contains a descriptive commit
message, comments, and some minor code improvements.
------
Regards,
Alexander Korotkov
Attachments:
0001-Allow-locking-updated-tuples-in-tuple_update-and--v4.patch (+392 -374)
Hi, Alexander!
On Thu, 5 Jan 2023 at 15:11, Alexander Korotkov <aekorotkov@gmail.com> wrote:
The revised patch is attached. It contains describing commit message,
comments and some minor code improvements.
I've looked through the patch once again. It seems to be in good shape
to be committed.
I also noticed that at the tableam level and in the nodeModifyTable
function calls we have a one-to-one correspondence between *lockedSlot
and bool lockUpdated, but no checks on this in case something changes
in the code in the future. I'd propose combining these variables to
stay free of such checks. See v5 of the patch. Tests pass
successfully.
Besides, the new version has only some minor changes in the comments
and the commit message.
Kind regards,
Pavel Borisov,
Supabase.
Attachments:
v5-0001-Allow-locking-updated-tuples-in-tuple_update-and-.patch (+390 -375)
On Fri, Jan 6, 2023 at 4:46 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:
I've looked through the patch once again. It seems in a nice state to
be committed.
It looks good, and the greater the concurrency, the greater the
benefit will be. Just a few minor suggestions regarding comments.
"ExecDeleteAct() have already locked the old tuple for us": change
"have" to "has".
The comments in heapam_tuple_delete() and heapam_tuple_update() might be a
little clearer with something like:
"If the tuple has been concurrently updated, get lock already so that on
retry it will succeed, provided that the caller asked to do this by
providing a lockedSlot."
Also, not too important, but perhaps it would be better to clarify in
the commit message that the repeated work is driven by ExecUpdate and
ExecDelete and can happen multiple times depending on the concurrency.
Best Regards,
Mason
Hi, Mason!
Thank you very much for your review.
On Sun, Jan 8, 2023 at 9:33 PM Mason Sharp <masonlists@gmail.com> wrote:
It looks good, and the greater the concurrency the greater the
benefit will be. Just a few minor suggestions regarding comments.
"ExecDeleteAct() have already locked the old tuple for us", change "have" to "has".
The comments in heapam_tuple_delete() and heapam_tuple_update() might be a little clearer with something like:
"If the tuple has been concurrently updated, get lock already so that on
retry it will succeed, provided that the caller asked to do this by
providing a lockedSlot."
Thank you. These changes are incorporated into v6 of the patch.
Also, not too important, but perhaps better clarify in the commit message that the repeated work is driven by ExecUpdate and ExecDelete and can happen multiple times depending on the concurrency.
Hmm... It can't happen an arbitrary number of times. If the tuple was
concurrently updated, then we lock it. Once we have locked it, nobody
can change it until we finish our work. So, I think no changes are
needed.
I'm going to push this if no objections.
------
Regards,
Alexander Korotkov
Attachments:
0001-Allow-locking-updated-tuples-in-tuple_update-and--v6.patch (+400 -374)
Hi Alexander,
I'm going to push this if no objections.
I took a fresh look at the patch and it LGTM. I only did a few
cosmetic changes, PFA v7.
Changes since v6 are:
```
@@ -318,12 +318,12 @@ heapam_tuple_delete(Relation relation,
ItemPointer tid, CommandId cid,
result = heap_delete(relation, tid, cid, crosscheck, wait, tmfd,
changingPart);
/*
- * If the tuple has been concurrently updated, get lock already so that on
- * retry it will succeed, provided that the caller asked to do this by
- * providing a lockedSlot.
+ * If lockUpdated is true and the tuple has been concurrently updated, get
+ * the lock immediately so that on retry we will succeed.
*/
if (result == TM_Updated && lockUpdated)
{
+ Assert(lockedSlot != NULL);
```
... and the same for heapam_tuple_update().
--
Best regards,
Aleksander Alekseev
Attachments:
v7-0001-Allow-locking-updated-tuples-in-tuple_update-and-.patch (+400 -374)
Hi Aleksander,
On Mon, Jan 9, 2023 at 12:56 PM Aleksander Alekseev
<aleksander@timescale.com> wrote:
I'm going to push this if no objections.
I took a fresh look at the patch and it LGTM. I only did a few
cosmetic changes, PFA v7.Changes since v6 are:
Thank you for looking into this. It appears that I applied the changes
proposed by Mason to v5, not v6. That led to the comment mismatch with
the code that you noticed. v8 should be correct. Please recheck.
------
Regards,
Alexander Korotkov
Attachments:
0001-Allow-locking-updated-tuples-in-tuple_update-and--v8.patch (+392 -375)
On Mon, Jan 9, 2023 at 1:10 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:
v9 also incorporates lost changes to the commit message by Pavel Borisov.
------
Regards,
Alexander Korotkov
Attachments:
0001-Allow-locking-updated-tuples-in-tuple_update-and--v9.patch (+392 -375)
Hi, Alexander!
On Mon, 9 Jan 2023 at 13:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:
v9 also incorporates lost changes to the commit message by Pavel Borisov.
I've looked through patch v9. It resembles patch v5, plus the comment
clarifications by Mason, plus the right discussion link in the commit
message from v8. Aleksander's proposal of an Assert in v7 was due to
the changes lost between v5 and v6; combining the connected variables,
as in v5, makes checks that they agree with each other unnecessary. So
the changes from v7 are not in v9.
Sorry for going into such small details. In my opinion the patch is
now ready to be committed.
Regards,
Pavel Borisov
Alexander, Pavel,
Sorry for being so detailed in small details. In my opinion the patch
now is ready to be committed.
Agree.
Personally I liked the version with the (lockUpdated, lockedSlot) pair
a bit more, since it is a bit more readable; however, the version
without lockUpdated is less error-prone and slightly more efficient.
So all in all I have no strong opinion on which is better.
--
Best regards,
Aleksander Alekseev