Autovacuum, dead tuples and bloat
Hi everyone,
We can see that our database is 200 GB in size, with 99% bloat. After a VACUUM FULL the DB shrinks to 2 GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see that during bulk updates (i.e. a long-running transaction) the DB keeps growing, i.e. the DB grew by +20 GB after the bulk updates.
My assumption is that after an autovacuum the 99% bloat should be available for reuse, while the DB size would stay at 200 GB. In our case, I would only expect the DB to grow if the bulk updates need more space than the current DB size provides (i.e. growth to 220 GB).
How could I verify my assumption?
I think of two possibilities:
1. My assumption is wrong and for some reason the dead tuples are not cleaned so that the space cannot be reused
2. The bulk-update indeed exceeds the current DB size. (Then the growth is expected).
Can you help me to verify these assumptions? Are there any statistics available that could help me with my verification?
Thanks in advance &
Best regards,
Manuel
On 6/20/24 09:46, Shenavai, Manuel wrote:
Hi everyone,
we can see in our database, that the DB is 200GB of size, with 99%
bloat. After vacuum full the DB decreases to 2GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see, that during bulk updates (i.e. a long running
transaction), the DB is still growing, i.e. the size of the DB growth by
+20GB after the bulk updates.
How soon after the updates did you measure the above?
My assumption is, that after an autovacuum, the 99% bloat should be
available for usage again. But the DB size would stay at 200GB. In our
case, I would only expect a growth of the DB, if the bulk-updates exceed
the current DB size (i.e. 220 GB).
Was the transaction completed(commit/rollback)?
Are there other transactions using the table or tables?
How could I verify my assumption?
I think of two possibilities:
1. My assumption is wrong and for some reason the dead tuples are not
cleaned so that the space cannot be reused
2. The bulk-update indeed exceeds the current DB size. (Then the growth
is expected).
Can you help me to verify these assumptions? Are there any statistics
available that could help me with my verification?
Use:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW
Select the rows that cover the table or tables involved. Look at the
vacuum/autovacuum/analyze fields.
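For example, a query along these lines would pull the relevant fields (the table name is a placeholder; substitute your own):

```sql
-- Vacuum/analyze bookkeeping for one table from pg_stat_all_tables.
SELECT relname,
       n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       vacuum_count, autovacuum_count,
       analyze_count, autoanalyze_count
FROM pg_stat_all_tables
WHERE relname = 'my_tablename';
```

If last_autovacuum is recent but n_dead_tup keeps climbing, that points at dead tuples that cannot yet be removed.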
Thanks in advance &
Best regards,
Manuel
--
Adrian Klaver
adrian.klaver@aklaver.com
On Thu, Jun 20, 2024 at 12:47 PM Shenavai, Manuel <manuel.shenavai@sap.com>
wrote:
Hi everyone,
we can see in our database, that the DB is 200GB of size, with 99% bloat.
After vacuum full the DB decreases to 2GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see, that during bulk updates (i.e. a long running
transaction), the DB is still growing, i.e. the size of the DB growth by
+20GB after the bulk updates.
My assumption is, that after an autovacuum, the 99% bloat should be
available for usage again. But the DB size would stay at 200GB. In our
case, I would only expect a growth of the DB, if the bulk-updates exceed
the current DB size (i.e. 220 GB).
That's also my understanding of how vacuum works.
Note: I disable autovacuum before bulk modifications, manually VACUUM
ANALYZE and then reenable autovacuum. That way, autovacuum doesn't jump in
the middle of what I'm doing.
How could I verify my assumption?
I think of two possibilities:
1. My assumption is wrong and for some reason the dead tuples are not
cleaned so that the space cannot be reused
2. The bulk-update indeed exceeds the current DB size. (Then the
growth is expected).
Can you help me to verify these assumptions? Are there any statistics
available that could help me with my verification?
I've got a weekly process that deletes all records older than N days from a
set of tables.
db=# ALTER TABLE t1 SET (autovacuum_enabled = off);
db=# ALTER TABLE t2 SET (autovacuum_enabled = off);
db=# ALTER TABLE t3 SET (autovacuum_enabled = off);
db=# DELETE FROM t1 WHERE created_on < (CURRENT_TIMESTAMP - INTERVAL '90 DAY');
db=# DELETE FROM t2 WHERE created_on < (CURRENT_TIMESTAMP - INTERVAL '90 DAY');
db=# DELETE FROM t3 WHERE created_on < (CURRENT_TIMESTAMP - INTERVAL '90 DAY');
$ vacuumdb --jobs=3 -t t1 -t t2 -t t3
db=# ALTER TABLE t1 SET (autovacuum_enabled = on);
db=# ALTER TABLE t2 SET (autovacuum_enabled = on);
db=# ALTER TABLE t3 SET (autovacuum_enabled = on);
pgstattuple shows that the free percentage stays pretty constant. That
seems to be what you're asking about.
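If the pgstattuple extension is installed, a quick check looks something like this (the table name t1 is from my example above):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- free_percent is space that VACUUM has made reusable inside the table file;
-- dead_tuple_percent is space still occupied by dead tuples.
SELECT tuple_percent, dead_tuple_percent, free_percent
FROM pgstattuple('t1');
```

Note pgstattuple reads the whole table, so run it off-peak on large relations.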
On 20/6/24 19:46, Shenavai, Manuel wrote:
Hi everyone,
we can see in our database, that the DB is 200GB of size, with 99%
bloat. After vacuum full the DB decreases to 2GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see, that during bulk updates (i.e. a long running
transaction), the DB is still growing, i.e. the size of the DB growth
by +20GB after the bulk updates.
My assumption is, that after an autovacuum, the 99% bloat should be
available for usage again. But the DB size would stay at 200GB. In our
case, I would only expect a growth of the DB, if the bulk-updates
exceed the current DB size (i.e. 220 GB).
How could I verify my assumption?
I think of two possibilities:
1. My assumption is wrong and for some reason the dead tuples are not
cleaned so that the space cannot be reused
2. The bulk-update indeed exceeds the current DB size. (Then the
growth is expected).
Rather than assuming, rely on the official manual and other material
such as books and articles from reputable sources; as a last resort you
could even read the source code.
For starters: do you have autovacuum enabled? If not, enable it.
Then monitor vacuum activity via pg_stat_user_tables, locate the tables
where you would expect vacuum to have happened but it did not, and then
consider autovacuum tuning.
Watch the logs for lines such as:
<N> dead row versions cannot be removed yet, oldest xmin: <some xid>
Those rows are kept from being removed because they are still visible to
long-running transactions. Monitor for those transactions.
You also have to check (if this is the case) whether autovacuum is being
killed and not allowed to do its job.
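To spot long-running transactions that hold back the xmin horizon, a query roughly like this against pg_stat_activity should do (the interval threshold is illustrative; adjust to taste):

```sql
-- Open transactions older than 10 minutes, oldest first.
SELECT pid, state, backend_xmin, xact_start,
       now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_start;
```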
Can you help me to verify these assumptions? Are there any statistics
available that could help me with my verification?
Thanks in advance &
Best regards,
Manuel
--
Achilleas Mantzios
IT DEV - HEAD
IT DEPT
Dynacom Tankers Mgmt (as agents only)
Hi,
Thanks for the suggestions. I found the following details about our autovacuum (see below). The TOAST table belonging to my table shows some vacuum-related log entries. This TOAST table seems to consume almost all the space (27544451 pages * 8 kB ≈ 210 GB).
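For reference, the arithmetic behind that estimate (8192 bytes is the default PostgreSQL block size):

```sql
-- pages reported by autovacuum times the 8 kB default block size
SELECT pg_size_pretty(27544451::bigint * 8192);  -- reports roughly 210 GB
```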
Any thoughts on this?
Best regards,
Manuel
Autovacuum details
Details from pg_stat_all_tables:
{
"analyze_count": 0,
"autoanalyze_count": 11,
"autovacuum_count": 60,
"idx_scan": 1925218,
"idx_tup_fetch": 1836820,
"last_analyze": null,
"last_autoanalyze": "2024-06-19T09:39:50.680818+00:00",
"last_autovacuum": "2024-06-19T09:41:50.58592+00:00",
"last_vacuum": null,
"n_dead_tup": 120,
"n_live_tup": 9004,
"n_mod_since_analyze": 474,
"n_tup_del": 84,
"n_tup_hot_upd": 5,
"n_tup_ins": 118,
"n_tup_upd": 15180,
"relid": "27236",
"relname": "my_tablename",
"schemaname": "public",
"seq_scan": 2370,
"seq_tup_read": 18403231,
"vacuum_count": 0
}
From the server logs, I found autovacuum details for my toast table (pg_toast_27236):
{
"category": "PostgreSQLLogs",
"operationName": "LogEvent",
"properties": {
"errorLevel": "LOG",
"message": "2024-06-19 17:45:02 UTC-66731911.22f2-LOG: automatic vacuum of table \"0ecf0241-aab3-45d5-b020-e586364f810c.pg_toast.pg_toast_27236\":
index scans: 1
pages: 0 removed, 27544451 remain, 0 skipped due to pins, 27406469 skipped frozen
tuples: 9380 removed, 819294 remain, 0 are dead but not yet removable, oldest xmin: 654973054
buffer usage: 318308 hits, 311886 misses, 2708 dirtied
avg read rate: 183.934 MB/s, avg write rate: 1.597 MB/s
system usage: CPU: user: 1.47 s, system: 1.43 s, elapsed: 13.24 s",
"processId": 8946,
"sqlerrcode": "00000",
"timestamp": "2024-06-19 17:45:02.564 UTC"
},
"time": "2024-06-19T17:45:02.568Z"
}
Best regards,
Manuel
From: Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>
Sent: 20 June 2024 19:10
To: pgsql-general@lists.postgresql.org
Subject: Re: Autovacuum, dead tuples and bloat
Here some more details related to the toast table:
{
"analyze_count": 0,
"autoanalyze_count": 0,
"autovacuum_count": 22,
"idx_scan": 1464881,
"idx_tup_fetch": 363681753,
"last_analyze": null,
"last_autoanalyze": null,
"last_autovacuum": "2024-06-19T17:45:02.564937+00:00",
"last_vacuum": null,
"n_dead_tup": 12,
"n_live_tup": 819294,
"n_mod_since_analyze": 225250407,
"n_tup_del": 112615126,
"n_tup_hot_upd": 0,
"n_tup_ins": 112635281,
"n_tup_upd": 0,
"relid": "27240",
"relname": "pg_toast_27236",
"schemaname": "pg_toast",
"seq_scan": 0,
"seq_tup_read": 0,
"vacuum_count": 0
}
From: Shenavai, Manuel <manuel.shenavai@sap.com>
Sent: 21 June 2024 21:31
To: Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>; pgsql-general@lists.postgresql.org
Subject: RE: Autovacuum, dead tuples and bloat
On 6/21/24 12:31, Shenavai, Manuel wrote:
Hi,
Thanks for the suggestions. I found the following details to our
autovacuum (see below). The related toast-table of my table shows some
logs related the vacuum. This toast seems to consume all the data
(27544451 pages * 8kb ≈ 210GB )
Those tuples (pages) are still live per the pg_stat entry in your second
post:
"n_dead_tup": 12,
"n_live_tup": 819294
So they are needed.
Now the question is why are they needed?
1) All transactions that touch that table are done and that is the data
that is left.
2) There are open transactions that still need to 'see' that data and
autovacuum cannot remove them yet. Take a look at:
pg_stat_activity:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW
and
pg_locks
https://www.postgresql.org/docs/current/view-pg-locks.html
to see if there is a process holding that data open.
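A sketch of such a check, joining the two views (nothing here is specific to your schema):

```sql
-- Sessions with an open transaction, plus any locks they hold.
SELECT a.pid, a.state, a.xact_start,
       l.locktype, l.mode, l.relation::regclass
FROM pg_stat_activity a
LEFT JOIN pg_locks l ON l.pid = a.pid
WHERE a.xact_start IS NOT NULL
ORDER BY a.xact_start;
```

The oldest xact_start is the session most likely pinning the xmin horizon.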
Any thoughts on this?
Best regards,
Manuel
--
Adrian Klaver
adrian.klaver@aklaver.com
Thanks for the suggestion. This is what I found:
- pg_locks shows only one entry for my DB (I filtered by db oid). The entry is related to the relation "pg_locks" (AccessShareLock).
- pg_stat_activity shows ~30 connections (since the DB is in use, this is expected)
Is there anything specific I should further look into in these tables?
Regarding my last post: Did we see a problem in the logs I provided in my previous post? We have seen that there are 819294 n_live_tup in the toast-table. Do we know how much space these tuples use? Do we know how much space one tuple uses?
Best regards,
Manuel
-----Original Message-----
From: Adrian Klaver <adrian.klaver@aklaver.com>
Sent: 21 June 2024 22:39
To: Shenavai, Manuel <manuel.shenavai@sap.com>; Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>; pgsql-general@lists.postgresql.org
Subject: Re: Autovacuum, dead tuples and bloat
On 6/22/24 13:13, Shenavai, Manuel wrote:
Thanks for the suggestion. This is what I found:
- pg_locks shows only one entry for my DB (I filtered by db oid). The entry is related to the relation "pg_locks" (AccessShareLock).
Which would be the SELECT you did on pg_locks.
- pg_stat_activity shows ~30 connections (since the DB is in use, this is expected)
The question then is, are any of those 30 connections holding a
transaction open that needs to see the data in the affected table and is
keeping autovacuum from recycling the tuples?
You might need to look at the Postgres logs to determine the above.
Logging connections/disconnections helps as well, as does logging at least 'mod' statements.
See:
https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
for more information.
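The relevant postgresql.conf settings would look something like this (values are illustrative):

```
log_connections = on
log_disconnections = on
log_statement = 'mod'      # logs INSERT/UPDATE/DELETE/DDL statements
```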
Is there anything specific I should further look into in these tables?
Regarding my last post: Did we see a problem in the logs I provided in my previous post? We have seen that there are 819294 n_live_tup in the toast-table. Do we know how much space these tuple use? Do we know how much space one tuple use?
You will want to read:
https://www.postgresql.org/docs/current/storage-toast.html
Also:
https://www.postgresql.org/docs/current/functions-admin.html
9.27.7. Database Object Management Functions
There are functions there that show table sizes among other things.
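For the TOAST table in question, something like this would give the total size and a rough bytes-per-live-tuple figure (the relation name and tuple count come from your earlier post):

```sql
SELECT pg_size_pretty(pg_relation_size('pg_toast.pg_toast_27236')) AS toast_size,
       pg_relation_size('pg_toast.pg_toast_27236') / 819294 AS approx_bytes_per_live_tuple;
```

Keep in mind a single TOAST-ed value can span many TOAST tuples, so this is only an average.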
Best regards,
Manuel
--
Adrian Klaver
adrian.klaver@aklaver.com
Thanks for the suggestions.
I checked pg_locks and pg_stat_activity but could not find a lock or a transaction on this (at this point in time).
I assume this problem may relate to long-running transactions that write a lot of data. Is there already something in place that would help me to:
1) identify long running transactions
2) get an idea of the data-volume a single transaction writes?
I tested log_statement='mod' but this writes too much data (including all payloads). I would rather get a summary entry for each transaction, like:
"Tx 4752 ran for 1 hour and 1 GB of data was written."
Is there something like this already available in postgres?
Best regards,
Manuel
-----Original Message-----
From: Adrian Klaver <adrian.klaver@aklaver.com>
Sent: 22 June 2024 23:17
To: Shenavai, Manuel <manuel.shenavai@sap.com>; Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>; pgsql-general@lists.postgresql.org
Subject: Re: Autovacuum, dead tuples and bloat
On Wed, Jun 26, 2024 at 3:03 AM Shenavai, Manuel <manuel.shenavai@sap.com>
wrote:
Thanks for the suggestions.
I checked pg_locks shows and pg_stat_activity but I could not find a LOCK
or an transaction on this (at this point in time).
I assume that this problem may relate to long running transactions which
write a lot of data. Is there already something in place that would help me
to:
1) identify long running transactions
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW
https://www.postgresql.org/docs/current/pgstatstatements.html
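pg_stat_statements has to be added to shared_preload_libraries (restart required) and then installed in the database; on PostgreSQL 13 or later it even tracks WAL bytes per statement, which is close to what you want:

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Statements generating the most WAL (wal_bytes requires PG 13+)
SELECT left(query, 60) AS query, calls, rows, wal_bytes
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 10;
```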
2) get an idea of the data-volume a single transaction writes?
I tested the log_statement='mod' but this writes too much data (including
all payloads). I rather would like to get a summary entry of each
transaction like:
"Tx 4752 run for 1hour and 1GB data was written."
Is there something like this already available in postgres?
*Maybe* you can interpolate that by seeing how much WAL activity is written
during the transaction, but I'm dubious.
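One way to sketch that measurement: note the WAL position before and after the bulk update and diff them. Caveat: this counts all WAL written on the cluster in that window, not just your transaction.

```sql
-- Before the bulk update, save the current WAL position:
SELECT pg_current_wal_lsn();   -- e.g. returns 0/5D3A1B0

-- After it finishes, diff against the saved value:
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '0/5D3A1B0'));
```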
On 6/26/24 00:03, Shenavai, Manuel wrote:
Thanks for the suggestions.
I checked pg_locks shows and pg_stat_activity but I could not find a LOCK or an transaction on this (at this point in time).
I assume that this problem may relate to long running transactions which write a lot of data. Is there already something in place that would help me to:
1) identify long running transactions
2) get an idea of the data-volume a single transaction writes?
I tested the log_statement='mod' but this writes too much data (including all payloads). I rather would like to get a summary entry of each transaction like:
"Tx 4752 run for 1hour and 1GB data was written."
https://www.postgresql.org/docs/current/runtime-config-logging.html
log_min_duration_statement
Read the Note below the entry.
This will log long-running queries, though it will not show the amount of
data written.
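A postgresql.conf sketch for that (the threshold is illustrative):

```
log_min_duration_statement = 60000   # log statements running longer than 60 s
```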
If you want to go more in depth there is:
https://www.postgresql.org/docs/current/pgstatstatements.html
It is an extension that you will need to install per instructions at the
link.
Is there something like this already available in postgres?
Best regards,
Manuel
--
Adrian Klaver
adrian.klaver@aklaver.com