suboverflowed subtransactions concurrency performance optimize

Started by Pengchengliu · 30 messages · pgsql-hackers
#1Pengchengliu
pengchengliu@tju.edu.cn

Hi hackers,
I wrote a patch to resolve a subtransaction concurrency performance problem
that occurs when snapshots become suboverflowed.

When concurrent transactions each use more than PGPROC_MAX_CACHED_SUBXIDS (64)
subtransactions, performance collapses:
all backends get stuck acquiring the LWLock SubtransSLRULock.

Steps to reproduce on the PG master branch:

1. Initialize a cluster and append the following to postgresql.conf:

max_connections = '2500'
max_files_per_process = '2000'
max_locks_per_transaction = '64'
max_parallel_maintenance_workers = '8'
max_parallel_workers = '60'
max_parallel_workers_per_gather = '6'
max_prepared_transactions = '15000'
max_replication_slots = '10'
max_wal_senders = '64'
max_worker_processes = '250'
shared_buffers = 8GB

2. Create a table and insert some records:

CREATE UNLOGGED TABLE contend (
id integer,
val integer NOT NULL
)
WITH (fillfactor='50');

INSERT INTO contend (id, val)
SELECT i, 0
FROM generate_series(1, 10000) AS i;

VACUUM (ANALYZE) contend;

3. Run pgbench with the attached script subtrans_128.sql:

pgbench -d postgres -p 33800 -n -r -f subtrans_128.sql -c 500 -j 500 -T 3600

4. After a while, the system gets stuck. Querying pg_stat_activity shows that
every backend's wait event is SubtransSLRULock.
We can use perf top to find the root cause. The result of perf top is:
66.20% postgres [.] pg_atomic_compare_exchange_u32_impl
29.30% postgres [.] pg_atomic_fetch_sub_u32_impl
1.46% postgres [.] pg_atomic_read_u32
1.34% postgres [.] TransactionIdIsCurrentTransactionId
0.75% postgres [.] SimpleLruReadPage_ReadOnly
0.14% postgres [.] LWLockAttemptLock
0.14% postgres [.] LWLockAcquire
0.12% postgres [.] pg_atomic_compare_exchange_u32
0.09% postgres [.] HeapTupleSatisfiesMVCC
0.06% postgres [.] heapgetpage
0.03% postgres [.] sentinel_ok
0.03% postgres [.] XidInMVCCSnapshot
0.03% postgres [.] slot_deform_heap_tuple
0.03% postgres [.] ExecInterpExpr
0.02% postgres [.] AllocSetCheck
0.02% postgres [.] HeapTupleSatisfiesVisibility
0.02% postgres [.] LWLockRelease
0.02% postgres [.] TransactionIdPrecedes
0.02% postgres [.] SubTransGetParent
0.01% postgres [.] heapgettup_pagemode
0.01% postgres [.] CheckForSerializableConflictOutNeeded

After reviewing the subtrans code, it is easy to see that the global LWLock
SubtransSLRULock is the bottleneck for concurrent subtransaction access.

When a backend session assigns more than PGPROC_MAX_CACHED_SUBXIDS (64)
subtransactions, snapshots become suboverflowed.
A suboverflowed snapshot does not contain all the data required to determine
visibility, so PostgreSQL occasionally has to fall back to pg_subtrans.
Those pages are cached in shared buffers, but the overhead of looking them up
shows in the high rank of SimpleLruReadPage_ReadOnly in the perf output.
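To make the mechanism concrete, here is an editorial sketch in plain C of why a suboverflowed snapshot forces pg_subtrans lookups: the visibility check must first map each candidate xid to its topmost parent before it can search the snapshot's in-progress array. All names here are illustrative stand-ins, not the actual server code.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Tiny stand-in for pg_subtrans: parent_of[x] == 0 means "no parent". */
static TransactionId parent_of[64];

/* Stand-in for SubTransGetTopmostTransaction(); in the real server each
 * step may take SubtransSLRULock and read an SLRU page. */
static TransactionId
topmost(TransactionId xid)
{
    while (parent_of[xid] != 0)
        xid = parent_of[xid];
    return xid;
}

typedef struct Snapshot
{
    TransactionId xmin;         /* xids below this are completed */
    TransactionId xmax;         /* xids at/above this are invisible */
    TransactionId xip[8];       /* in-progress top-level xids */
    int           xcnt;
    int           suboverflowed;
} Snapshot;

/* Roughly XidInMVCCSnapshot(): returns 1 if xid counts as still in
 * progress according to the snapshot (i.e. its effects are invisible). */
static int
xid_in_mvcc_snapshot(TransactionId xid, const Snapshot *snap)
{
    if (xid < snap->xmin)
        return 0;
    if (xid >= snap->xmax)
        return 1;
    if (snap->suboverflowed)
        xid = topmost(xid);     /* the costly pg_subtrans fallback */
    for (int i = 0; i < snap->xcnt; i++)
        if (snap->xip[i] == xid)
            return 1;
    return 0;
}
```

The topmost() walk is exactly the work that the locking contention above comes from; a non-overflowed snapshot skips it entirely.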

To resolve this performance problem, we propose caching subtrans SLRU pages
in backend-local memory.
On the first lookup of a parent transaction id we read the shared SLRU and
copy the whole SLRU page into a local cache page.
Any later parent lookup that hits that page is then served directly from the
local cache.
This noticeably reduces the number of SubtransSLRULock acquire/release
cycles.

From all snapshots we can derive the latest xmin. Every transaction id that
precedes this xmin must already be committed or aborted, and its parent/top
transaction id has already been written to the subtrans SLRU. Such pages are
therefore stable and safe to copy into the local cache.
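A minimal sketch of the backend-local cache described above, assuming a fixed-size array of copied pages (the real patch is more elaborate and sized by the local_cache_subtrans_pages GUC; all names and sizes here are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;

#define BLCKSZ 8192
#define SUBTRANS_XACTS_PER_PAGE (BLCKSZ / sizeof(TransactionId))   /* 2048 */
#define LOCAL_CACHE_PAGES 4      /* stand-in for local_cache_subtrans_pages */

typedef struct LocalSubtransPage
{
    int64_t       pageno;        /* which pg_subtrans page, -1 if unused */
    TransactionId parents[SUBTRANS_XACTS_PER_PAGE];
} LocalSubtransPage;

static LocalSubtransPage local_cache[LOCAL_CACHE_PAGES];

static void
local_cache_init(void)
{
    for (int i = 0; i < LOCAL_CACHE_PAGES; i++)
        local_cache[i].pageno = -1;
}

/* Look up xid's parent in the local cache; return 0 (InvalidTransactionId)
 * on a miss, in which case the caller would consult the shared SLRU under
 * SubtransSLRULock and then copy the whole page into the local cache. */
static TransactionId
local_cache_get_parent(TransactionId xid)
{
    int64_t pageno = xid / SUBTRANS_XACTS_PER_PAGE;
    int     entry  = xid % SUBTRANS_XACTS_PER_PAGE;

    for (int i = 0; i < LOCAL_CACHE_PAGES; i++)
        if (local_cache[i].pageno == pageno)
            return local_cache[i].parents[entry];
    return 0;
}

/* Copy one shared SLRU page into a free local slot (no eviction here;
 * only pages entirely below the latest xmin are safe to copy). */
static void
local_cache_store_page(int64_t pageno, const TransactionId *page)
{
    for (int i = 0; i < LOCAL_CACHE_PAGES; i++)
        if (local_cache[i].pageno == -1)
        {
            local_cache[i].pageno = pageno;
            memcpy(local_cache[i].parents, page, BLCKSZ);
            return;
        }
}
```

The xmin rule above is what makes this safe: a copied page can never go stale, because every entry on it belongs to an already-completed transaction.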

With the same reproduction steps as above, our patch does not get stuck.
Note that you must append our GUC parameter to postgresql.conf; the
optimization is off by default:

local_cache_subtrans_pages = 128

The patch is based on PG master branch commit
0d906b2c0b1f0d625ff63d9ace906556b1c66a68.

Our project is at https://github.com/ADBSQL/AntDB. Welcome to follow AntDB,
AsiaInfo's PostgreSQL-based distributed database product.

Thanks
Pengcheng

Attachments:

subtrans_128.sql (application/octet-stream)
subtrans_local_optimize.patch (text/plain, +594 −5)
#2Andrey Borodin
amborodin@acm.org
In reply to: Pengchengliu (#1)
Re: suboverflowed subtransactions concurrency performance optimize

Hi Pengcheng!

You are solving an important problem, thank you!

On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

To resolve this performance problem, we think about a solution which cache
SubtransSLRU to local cache.
First we can query parent transaction id from SubtransSLRU, and copy the
SLRU page to local cache page.
After that if we need query parent transaction id again, we can query it
from local cache directly.

A copy of SLRU in each backend's cache can consume a lot of memory. Why create a copy if we can optimise shared representation of SLRU?

JFYI, there is a related patch to make SimpleLruReadPage_ReadOnly() faster for bigger SLRU buffers [0].
Also, Nik Samokhvalov recently published an interesting investigation on the topic, but for some reason his message did not pass moderation [1].

It is also important to note that there was a community request to move SLRUs to shared_buffers [2].

Thanks!

Best regards, Andrey Borodin.

[0]: https://commitfest.postgresql.org/34/2627/
[1]: /messages/by-id/BE73A0BB-5929-40F4-BAF8-55323DE39561@yandex-team.ru
[2]: /messages/by-id/20180814213500.GA74618@60f81dc409fc.ant.amazon.com

#3Pengchengliu
pengchengliu@tju.edu.cn
In reply to: Andrey Borodin (#2)
RE: suboverflowed subtransactions concurrency performance optimize

Hi Andrey,
Thanks a lot for your reply and the reference information.

The default NUM_SUBTRANS_BUFFERS is 32. In my implementation, local_cache_subtrans_pages can be adjusted dynamically.
If we configure local_cache_subtrans_pages as 64, every backend uses only an extra 64 * 8192 bytes = 512KB of memory.
So the local cache acts as a first-level cache, with the subtrans SLRU as the second-level cache.
I think the extra memory is well worth it: it really resolves the massive subtrans stuck issue I mentioned in my previous email.

I had reviewed the patch [0] before. Adding GUC configuration parameters for the SLRU buffers is very nice,
but I think its optimization is not enough for subtrans. In SubTransGetTopmostTransaction we should acquire SubtransSLRULock once and then call SubTransGetParent in a loop,
preventing repeated acquire/release of SubtransSLRULock inside the SubTransGetTopmostTransaction -> SubTransGetParent loop.
Even after applying that patch together with my SubTransGetTopmostTransaction optimization, my test case still gets stuck.

[1]: /messages/by-id/BE73A0BB-5929-40F4-BAF8-55323DE39561@yandex-team.ru
With the test case I mentioned in my previous mail, it was still stuck. By default there are 2048 subtrans entries per page.
When several processes look up top transactions on the same page, they pin/unpin and lock/unlock that page repeatedly.
I found that some backends were stuck at pinning/unpinning the page.

Comparison of test results, pgbench with subtrans_128.sql:

Concurrency   PG master   PG master with patch [0]   Local cache optimization
300           stuck       stuck                      not stuck
500           stuck       stuck                      not stuck
1000          stuck       stuck                      not stuck

Maybe we can test the different approaches with my test case. For massive concurrency, an approach that does not get stuck is a good solution.

[0]: https://commitfest.postgresql.org/34/2627/
[1]: /messages/by-id/BE73A0BB-5929-40F4-BAF8-55323DE39561@yandex-team.ru

Thanks
Pengcheng


#4Zhihong Yu
zyu@yugabyte.com
In reply to: Pengchengliu (#1)
Re: suboverflowed subtransactions concurrency performance optimize

Hi,

+ uint16 valid_offset; /* how many entry is valid */

how many entry is -> how many entries are

+int slru_subtrans_page_num = 32;

Looks like the variable represents the number of subtrans pages. Maybe name
the variable slru_subtrans_page_count ?

+ if (lbuffer->in_htab == false)

The condition can be written as 'if (!lbuffer->in_htab)'

For SubtransAllocLocalBuffer(), you can enclose the body of the method in a
while loop so that you don't need a goto statement.

Cheers

#5Andrey Borodin
amborodin@acm.org
In reply to: Pengchengliu (#3)
Re: suboverflowed subtransactions concurrency performance optimize

On 31 Aug 2021, at 11:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

Hi Andrey,
Thanks a lot for your reply and reference information.

The default NUM_SUBTRANS_BUFFERS is 32. My implementation's local_cache_subtrans_pages can be adjusted dynamically.
If we configure local_cache_subtrans_pages as 64, every backend uses only an extra 64*8192 = 512KB of memory.
So the local cache is similar to a first-level cache, and the subtrans SLRU is the second-level cache.
I think the extra memory is well worth it. It really resolves the massive subtrans stuck issue I mentioned in my previous email.

I have reviewed the patch [0] before. Adding GUC configuration parameters for SLRU buffers is very nice.
I think its optimization is not enough for subtrans. For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in a loop,
preventing acquire/release of SubtransSLRULock in the SubTransGetTopmostTransaction -> SubTransGetParent loop.
After I apply this patch with my optimization of SubTransGetTopmostTransaction, my test case still gets stuck.

SubTransGetParent() acquires only a shared lock on SubtransSLRULock. The problem may arise only when someone reads a page from disk. But if you have a big enough cache this will never happen, and this cache will be much smaller than 512KB * max_connections.

I think if we really want to fix the exclusive SubtransSLRULock, the best option would be to split the SLRU control lock into an array of locks - one for each bank (as in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch). With this approach we will have to rename s/bank/partition/g for consistency with lock and buffer partitions. I really liked having my own banks, but consistency is worth it anyway.

Thanks!

Best regards, Andrey Borodin.

#6Andrey Borodin
amborodin@acm.org
In reply to: Pengchengliu (#3)
Re: suboverflowed subtransactions concurrency performance optimize

Sorry, for some reason Mail.app converted the message to HTML and the mailing list mangled that HTML into a mess. I'm resending the previous message as plain text. Sorry for the noise.

On 31 Aug 2021, at 11:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

Hi Andrey,
Thanks a lot for your replay and reference information.

The default NUM_SUBTRANS_BUFFERS is 32. My implementation is local_cache_subtrans_pages can be adjusted dynamically.
If we configure local_cache_subtrans_pages as 64, every backend use only extra 64*8192=512KB memory.
So the local cache is similar to the first level cache. And subtrans SLRU is the second level cache.
And I think extra memory is very well worth it. It really resolve massive subtrans stuck issue which I mentioned in previous email.

I have view the patch of [0] before. For SLRU buffers adding GUC configuration parameters are very nice.
I think for subtrans, its optimize is not enough. For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in loop.
Prevent acquire/release SubtransSLRULock in SubTransGetTopmostTransaction-> SubTransGetParent in loop.
After I apply this patch which I optimize SubTransGetTopmostTransaction, with my test case, I still get stuck result.

SubTransGetParent() acquires only a shared lock on SubtransSLRULock. The problem can arise only when someone reads a page from disk, but with a big enough cache that will never happen. And that cache would be much smaller than 512KB * max_connections.

I think if we really want to fix the exclusive SubtransSLRULock, the best option would be to split the SLRU control lock into an array of locks - one for each bank (as in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch). With this approach we would have to rename s/bank/partition/g for consistency with the lock and buffer partitions. I really liked having my own banks, but consistency is worth it anyway.
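The per-bank lock idea can be sketched as follows; the bank count and slot geometry here are assumptions for illustration, not the referenced patch's actual values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of splitting one SLRU control lock into per-bank
 * locks. Each page hashes to a fixed bank, so it can only live in that
 * bank's buffer slots and lookups need only that bank's LWLock instead
 * of a single global SubtransSLRULock. */
#define SLRU_NUM_BANKS 8            /* assumed; must be a power of two */
#define SLRU_BANK_SIZE 16           /* assumed buffer slots per bank */

/* Which bank (and hence which lock) guards this page. */
static inline int
slru_bank_for_page(int64_t pageno)
{
    return (int) (pageno & (SLRU_NUM_BANKS - 1));
}

/* First buffer slot of the bank that may hold this page; a lookup only
 * scans SLRU_BANK_SIZE slots starting here. */
static inline int
slru_bank_start_slot(int64_t pageno)
{
    return slru_bank_for_page(pageno) * SLRU_BANK_SIZE;
}
```

Two backends touching different pages then usually contend on different locks, which is the point of the split.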

Thanks!

Best regards, Andrey Borodin.

#7Pengchengliu
pengchengliu@tju.edu.cn
In reply to: Andrey Borodin (#6)
RE: suboverflowed subtransactions concurrency performance optimize

Hi Andrey,

I think if we really want to fix exclusive SubtransSLRULock I think best option would be to split SLRU control lock into array of locks

I agree with you. If we can resolve the performance issue with this approach, It should be a good solution.

one for each bank (in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch)

I have tested with this patch, with NUM_SUBTRANS_BUFFERS modified to 128. With 500 concurrent connections it indeed does not get stuck, but the performance is very bad: a sequential scan of the table takes more than one minute.
I think that is unacceptable in a production environment.

postgres=# select count(*) from contend ;
count
-------
10127
(1 row)

Time: 86011.593 ms (01:26.012)
postgres=# select count(*) from contend ;
count
-------
10254
(1 row)
Time: 79399.949 ms (01:19.400)

With my local subtrans optimization approach, in the same environment with the same test script and 500 concurrent connections, the same sequential scan takes less than 10 seconds.

postgres=# select count(*) from contend ;
count
-------
10508
(1 row)

Time: 7104.283 ms (00:07.104)

postgres=# select count(*) from contend ;
count
-------
13175
(1 row)

Time: 6602.635 ms (00:06.603)
Thanks
Pengcheng


#8Simon Riggs
simon@2ndQuadrant.com
In reply to: Andrey Borodin (#2)
Re: suboverflowed subtransactions concurrency performance optimize

On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

Hi Pengcheng!

You are solving important problem, thank you!

On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

To resolve this performance problem, we think about a solution which cache
SubtransSLRU to local cache.
First we can query parent transaction id from SubtransSLRU, and copy the
SLRU page to local cache page.
After that if we need query parent transaction id again, we can query it
from local cache directly.

A copy of SLRU in each backend's cache can consume a lot of memory.

Yes, copying the whole SLRU into local cache seems overkill.

Why create a copy if we can optimise shared representation of SLRU?

transam.c uses a single-item cache to prevent thrashing from repeated
lookups, which reduces problems with shared access to SLRUs.
multixact.c also has something similar.

I notice that subtrans.c doesn't have this, but could easily do so.
Patch attached, which seems separate from other attempts at tuning.

On review, I think it is also possible to update subtrans ONLY if
someone uses more than PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
updates, which are a major source of caching activity.
It would mean some light changes in GetSnapshotData().
Let me know if that seems interesting too?
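The transam.c-style single-item cache mentioned above can be sketched like this; fake_slru_read is a stand-in for the locked SimpleLruReadPage_ReadOnly() path, and all names are illustrative rather than taken from the attached patch:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Tiny stand-in for the shared pg_subtrans SLRU. */
static TransactionId fake_slru[64];
static int           fake_slru_reads = 0;   /* counts locked lookups */

static TransactionId
fake_slru_read(TransactionId xid)
{
    fake_slru_reads++;          /* real code takes SubtransSLRULock here */
    return fake_slru[xid];
}

/* Single-item cache: remember the last (xid, parent) pair so a repeated
 * lookup of the same xid skips the shared SLRU entirely. */
static TransactionId cachedFetchSubXid = InvalidTransactionId;
static TransactionId cachedFetchParent = InvalidTransactionId;

static TransactionId
subtrans_get_parent_cached(TransactionId xid)
{
    if (xid == cachedFetchSubXid)
        return cachedFetchParent;       /* hit: no lock traffic */

    cachedFetchSubXid = xid;
    cachedFetchParent = fake_slru_read(xid);
    return cachedFetchParent;
}
```

Because pg_subtrans lookups during a scan are often repeated for the same xid, even a one-entry cache removes much of the lock traffic in that case.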

--
Simon Riggs http://www.EnterpriseDB.com/

Attachments:

subtrans_single_item_cache.v1.patch (application/octet-stream, +18 −0)
#9Andrey Borodin
amborodin@acm.org
In reply to: Simon Riggs (#8)
Re: suboverflowed subtransactions concurrency performance optimize

On 30 Nov 2021, at 17:19, Simon Riggs <simon.riggs@enterprisedb.com> wrote:

On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

Hi Pengcheng!

You are solving important problem, thank you!

On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

To resolve this performance problem, we think about a solution which cache
SubtransSLRU to local cache.
First we can query parent transaction id from SubtransSLRU, and copy the
SLRU page to local cache page.
After that if we need query parent transaction id again, we can query it
from local cache directly.

A copy of SLRU in each backend's cache can consume a lot of memory.

Yes, copying the whole SLRU into local cache seems overkill.

Why create a copy if we can optimise shared representation of SLRU?

transam.c uses a single item cache to prevent thrashing from repeated
lookups, which reduces problems with shared access to SLRUs.
multitrans.c also has similar.

I notice that subtrans. doesn't have this, but could easily do so.
Patch attached, which seems separate to other attempts at tuning.

I think this definitely makes sense to do.

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

I'm afraid of unexpected performance degradation. The system runs fine, you provision a VM with some vCPUs and RAM, and then some backend uses a little more than 64 subtransactions and the whole system is stuck. Or will it affect only the backends using more than 64 subtransactions?

Best regards, Andrey Borodin.

#10Dilip Kumar
dilipbalaut@gmail.com
In reply to: Simon Riggs (#8)
Re: suboverflowed subtransactions concurrency performance optimize

On Tue, Nov 30, 2021 at 5:49 PM Simon Riggs
<simon.riggs@enterprisedb.com> wrote:

transam.c uses a single item cache to prevent thrashing from repeated
lookups, which reduces problems with shared access to SLRUs.
multitrans.c also has similar.

I notice that subtrans. doesn't have this, but could easily do so.
Patch attached, which seems separate to other attempts at tuning.

Yeah, this definitely makes sense.

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

Do you mean to say we should avoid setting a sub-transaction's parent if the
number of sub-transactions does not cross PGPROC_MAX_CACHED_SUBXIDS?
But TransactionIdDidCommit() might need to fetch the parent if
the transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how
would we handle that?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#11Simon Riggs
simon@2ndQuadrant.com
In reply to: Dilip Kumar (#10)
Re: suboverflowed subtransactions concurrency performance optimize

On Fri, 3 Dec 2021 at 01:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

Do you mean to say avoid setting the sub-transactions parent if the
number of sub-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?
But the TransactionIdDidCommit(), might need to fetch the parent if
the transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how
would we handle that?

TRANSACTION_STATUS_SUB_COMMITTED is set only as a transient state during
final commit.
In that case, the top-level xid is still in the procarray when nsubxids <
PGPROC_MAX_CACHED_SUBXIDS,
so we need not consult pg_subtrans in that case; see step 4 of
TransactionIdIsInProgress().

--
Simon Riggs http://www.EnterpriseDB.com/

#12Dilip Kumar
dilipbalaut@gmail.com
In reply to: Simon Riggs (#11)
Re: suboverflowed subtransactions concurrency performance optimize

On Fri, Dec 3, 2021 at 5:00 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:

On Fri, 3 Dec 2021 at 01:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

Do you mean to say avoid setting the sub-transactions parent if the
number of sub-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?
But the TransactionIdDidCommit(), might need to fetch the parent if
the transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how
would we handle that?

TRANSACTION_STATUS_SUB_COMMITTED is set as a transient state during
final commit.
In that case, the top-level xid is still in procarray when nsubxids <
PGPROC_MAX_CACHED_SUBXIDS
so we need not consult pg_subtrans in that case, see step 4 of
TransactionIdIsInProgress()

Okay, I see: there is a rule that before calling
TransactionIdDidCommit() we must consult TransactionIdIsInProgress()
(for non-MVCC snapshots) or XidInMVCCSnapshot(). So I no longer
have this concern, thanks for clarifying. I will think more about
this approach from other aspects.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#13Simon Riggs
simon@2ndQuadrant.com
In reply to: Andrey Borodin (#9)
Re: suboverflowed subtransactions concurrency performance optimize

On Wed, 1 Dec 2021 at 06:41, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

I'm afraid of unexpected performance degradation. When the system runs fine, you provision a VM of some vCPU\RAM, and then some backend uses a little more than 64 subtransactions and all the system is stuck. Or will it affect only backend using more than 64 subtransactions?

That is the objective: to isolate the effect to only those that
overflow. It seems possible.

--
Simon Riggs http://www.EnterpriseDB.com/

#14Simon Riggs
simon@2ndQuadrant.com
In reply to: Dilip Kumar (#10)
Re: suboverflowed subtransactions concurrency performance optimize

On Fri, 3 Dec 2021 at 06:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Nov 30, 2021 at 5:49 PM Simon Riggs
<simon.riggs@enterprisedb.com> wrote:

transam.c uses a single item cache to prevent thrashing from repeated
lookups, which reduces problems with shared access to SLRUs.
multitrans.c also has similar.

I notice that subtrans. doesn't have this, but could easily do so.
Patch attached, which seems separate to other attempts at tuning.

Yeah, this definitely makes sense.

On review, I think it is also possible that we update subtrans ONLY if
someone uses >PGPROC_MAX_CACHED_SUBXIDS.
This would make subtrans much smaller and avoid one-entry-per-page
which is a major source of cacheing.
This would means some light changes in GetSnapshotData().
Let me know if that seems interesting also?

Do you mean to say avoid setting the sub-transactions parent if the
number of sub-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?

Yes.

This patch shows where I'm going, with changes in GetSnapshotData(),
XidInMVCCSnapshot() and XactLockTableWait().
It passes make check, but needs much more work, so at this stage it is
review-only, to give a flavour of what is intended.

(Nowhere near replacing the subtrans module, which I envisage as the
final outcome, meaning we would no longer need ExtendSUBTRANS().)

--
Simon Riggs http://www.EnterpriseDB.com/

Attachments:

rethink_subtrans.v4.patch (application/octet-stream, +127 -90)
#15Julien Rouhaud
rjuju123@gmail.com
In reply to: Simon Riggs (#14)
Re: suboverflowed subtransactions concurrency performance optimize

Hi,

On Wed, Dec 08, 2021 at 04:39:11PM +0000, Simon Riggs wrote:

This patch shows where I'm going, with changes in GetSnapshotData()
and XidInMVCCSnapshot() and XactLockTableWait().
Passes make check, but needs much more, so this is review-only at this
stage to give a flavour of what is intended.

Thanks a lot to everyone involved in this!

I can't find any entry in the commitfest for the work being done here. Did I
miss something? If not could you create an entry in the next commitfest to
make sure that it doesn't get forgotten?

#16Simon Riggs
simon@2ndQuadrant.com
In reply to: Simon Riggs (#8)
Re: suboverflowed subtransactions concurrency performance optimize

On Tue, 30 Nov 2021 at 12:19, Simon Riggs <simon.riggs@enterprisedb.com> wrote:

On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

Hi Pengcheng!

You are solving important problem, thank you!

On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:

To resolve this performance problem, we think about a solution which cache
SubtransSLRU to local cache.
First we can query parent transaction id from SubtransSLRU, and copy the
SLRU page to local cache page.
After that if we need query parent transaction id again, we can query it
from local cache directly.

A copy of SLRU in each backend's cache can consume a lot of memory.

Yes, copying the whole SLRU into local cache seems overkill.

Why create a copy if we can optimise shared representation of SLRU?

transam.c uses a single-item cache to prevent thrashing from repeated
lookups, which reduces problems with shared access to SLRUs.
multixact.c has something similar.

I notice that subtrans.c doesn't have this, but could easily do so.
Patch attached, which seems separate from other attempts at tuning.

Re-attached, so that the CFapp isn't confused between the multiple
patches on this thread.

--
Simon Riggs http://www.EnterpriseDB.com/

Attachments:

subtrans_single_item_cache.v1.patch (application/octet-stream, +18 -0)
#17Andrey Borodin
amborodin@acm.org
In reply to: Simon Riggs (#16)
Re: suboverflowed subtransactions concurrency performance optimize

On 17 Jan 2022, at 18:44, Simon Riggs <simon.riggs@enterprisedb.com> wrote:

Re-attached, so that the CFapp isn't confused between the multiple
patches on this thread.

FWIW I've looked into the patch and it looks good to me. Comments describing when the cache is useful seem valid.

Thanks!

Best regards, Andrey Borodin.

#18Julien Rouhaud
rjuju123@gmail.com
In reply to: Simon Riggs (#16)
Re: suboverflowed subtransactions concurrency performance optimize

Hi,

On Mon, Jan 17, 2022 at 01:44:02PM +0000, Simon Riggs wrote:

Re-attached, so that the CFapp isn't confused between the multiple
patches on this thread.

Thanks a lot for working on this!

The patch is simple and overall looks good to me. A few comments though:

+/*
+ * Single-item cache for results of SubTransGetTopmostTransaction.  It's worth having
+ * such a cache because we frequently find ourselves repeatedly checking the
+ * same XID, for example when scanning a table just after a bulk insert,
+ * update, or delete.
+ */
+static TransactionId cachedFetchXid = InvalidTransactionId;
+static TransactionId cachedFetchTopmostXid = InvalidTransactionId;

The comment runs over 80 characters after
s/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this
comment is valid for subtrans.c.

Also, maybe naming the first variable cachedFetchSubXid would make it a bit
clearer?

It would be nice to see some benchmarks, for both when this change is
enough to avoid a contention (when there's a single long-running overflowed
backend) and when it's not enough. That will also be useful if/when working on
the "rethink_subtrans" patch.

#19Simon Riggs
simon@2ndQuadrant.com
In reply to: Julien Rouhaud (#18)
Re: suboverflowed subtransactions concurrency performance optimize

On Mon, 7 Mar 2022 at 09:49, Julien Rouhaud <rjuju123@gmail.com> wrote:

Hi,

On Mon, Jan 17, 2022 at 01:44:02PM +0000, Simon Riggs wrote:

Re-attached, so that the CFapp isn't confused between the multiple
patches on this thread.

Thanks a lot for working on this!

The patch is simple and overall looks good to me. A few comments though:

+/*
+ * Single-item cache for results of SubTransGetTopmostTransaction.  It's worth having
+ * such a cache because we frequently find ourselves repeatedly checking the
+ * same XID, for example when scanning a table just after a bulk insert,
+ * update, or delete.
+ */
+static TransactionId cachedFetchXid = InvalidTransactionId;
+static TransactionId cachedFetchTopmostXid = InvalidTransactionId;

The comment runs over 80 characters after
s/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this
comment is valid for subtrans.c.

What aspect makes it invalid? The comment seems exactly applicable to
me; Andrey thinks so also.

Also, maybe naming the first variable cachedFetchSubXid would make it a bit
clearer?

Sure, that can be done.

It would be nice to see some benchmarks, for both when this change is
enough to avoid a contention (when there's a single long-running overflowed
backend) and when it's not enough. That will also be useful if/when working on
the "rethink_subtrans" patch.

The patch doesn't do anything about the case of when there's a single
long-running overflowed backend, nor does it claim that.

The patch will speed up calls to SubTransGetTopmostTransaction(), which occur in
src/backend/access/heap/heapam.c
src/backend/utils/time/snapmgr.c
src/backend/storage/lmgr/lmgr.c
src/backend/storage/ipc/procarray.c

The patch was posted because TransactionLogFetch() has a cache while
SubTransGetTopmostTransaction() does not, even though the argument for
caching is identical in both cases.

--
Simon Riggs http://www.EnterpriseDB.com/

#20Julien Rouhaud
rjuju123@gmail.com
In reply to: Simon Riggs (#19)
Re: suboverflowed subtransactions concurrency performance optimize

On Mon, Mar 07, 2022 at 01:27:40PM +0000, Simon Riggs wrote:

+/*
+ * Single-item cache for results of SubTransGetTopmostTransaction.  It's worth having
+ * such a cache because we frequently find ourselves repeatedly checking the
+ * same XID, for example when scanning a table just after a bulk insert,
+ * update, or delete.
+ */
+static TransactionId cachedFetchXid = InvalidTransactionId;
+static TransactionId cachedFetchTopmostXid = InvalidTransactionId;

The comment runs over 80 characters after
s/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this
comment is valid for subtrans.c.

What aspect makes it invalid? The comment seems exactly applicable to
me; Andrey thinks so also.

Sorry, I somehow missed the "for example", and was thinking that
SubTransGetTopmostTransaction was used in many more places than
TransactionIdDidCommit and friends.

It would be nice to see some benchmarks, for both when this change is
enough to avoid a contention (when there's a single long-running overflowed
backend) and when it's not enough. That will also be useful if/when working on
the "rethink_subtrans" patch.

The patch doesn't do anything about the case of when there's a single
long-running overflowed backend, nor does it claim that.

I was thinking that having a cache for SubTransGetTopmostTransaction could help
at least to some extent for that problem, sorry if that's not the case.

I'm still curious on how much this simple optimization can help in some
scenarios, even if they're somewhat artificial.

The patch was posted because TransactionLogFetch() has a cache while
SubTransGetTopmostTransaction() does not, even though the argument for
caching is identical in both cases.

I totally agree with that.

#21Michael Paquier
michael@paquier.xyz
In reply to: Julien Rouhaud (#20)
#22Simon Riggs
simon@2ndQuadrant.com
In reply to: Michael Paquier (#21)
#23Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#21)
#24Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#23)
#25Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#24)
#26Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#25)
#27Peter Geoghegan
In reply to: Andres Freund (#26)
#28Andres Freund
andres@anarazel.de
In reply to: Peter Geoghegan (#27)
In reply to: Andres Freund (#28)
#30Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#26)