Re: running vacuumlo periodically

Started by Zwettler Markus (OIZ) · about 5 years ago · 2 messages · general
#1Zwettler Markus (OIZ)
Markus.Zwettler@zuerich.ch

I ran "vacuumlo" in batches (-l), which worked well.
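For reference, a batched run of this kind might look like the following sketch ("mydb" is a placeholder database name, and the -l value is an assumption; -l limits how many large objects are removed per transaction):

```shell
# Dry run first (-n): report orphaned large objects without deleting anything
vacuumlo -n -v mydb

# Real run: remove orphans, committing after every 1000 deletions (-l 1000)
vacuumlo -l 1000 -v mydb
```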

Afterwards I found the table "pg_catalog.pg_largeobject" to be massively bloated.

I tried "vacuum full pg_catalog.pg_largeobject" but ran out of disk space (although 250G of disk space was free; database size = 400G).

Question:
Will "vacuum full pg_catalog.pg_largeobject" need less disk space if "maintenance_work_mem" is increased?


-----Original Message-----
From: Zwettler Markus (OIZ) <Markus.Zwettler@zuerich.ch>
Sent: Thursday, 28 January 2021 18:04
To: Laurenz Albe <laurenz.albe@cybertec.at>; pgsql-general@postgresql.org
Subject: Re: running vacuumlo periodically?

-----Original Message-----
From: Laurenz Albe <laurenz.albe@cybertec.at>
Sent: Thursday, 28 January 2021 17:39
To: Zwettler Markus (OIZ) <Markus.Zwettler@zuerich.ch>; pgsql-general@postgresql.org
Subject: Re: running vacuumlo periodically?

On Thu, 2021-01-28 at 13:18 +0000, Zwettler Markus (OIZ) wrote:

Short question: is it recommended - or even best practice - to run vacuumlo periodically as a routine maintenance task?

We don't do it. I think if this were recommended, it would have been implemented as an autotask like autovacuum. No?

It is recommended to run it regularly if
- you are using large objects
- you don't have a trigger in place that deletes large objects that you don't need any more

Only a small minority of people do that, so it wouldn't make sense to run it automatically on all databases.

Avoid large objects if you can.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com

[Zwettler Markus (OIZ)]

We hadn't recognized that an application is using large objects and never deleting them.
Now we have found >100G of dead large objects within the database. :-(

Is there any _GENERIC_ query that enables monitoring for orphaned large objects (dead LOs)?

select oid from pg_largeobject_metadata m
where not exists (select 1 from ANY_EXISTING_TABLE_WITHIN_THE_DATABASE
                  where m.oid = ANY_COLUMN_CONTAINING_OIDs);
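The placeholders above can be enumerated automatically. A hedged sketch (assuming the application stores large-object references in plain oid columns of ordinary tables): first list every candidate column from the system catalogs, then plug each (table, column) pair into the NOT EXISTS template. Note that "vacuumlo -n" (dry run) performs essentially this search internally:

```sql
-- List user-table columns of type oid: candidate holders of large-object
-- references (assumption: LO OIDs are stored in columns of type oid)
SELECT n.nspname, c.relname, a.attname
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE a.atttypid = 'oid'::regtype
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```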

check_postgres.pl doesn't have any generic check for it. :-(

Thanks, Markus

#2Laurenz Albe
laurenz.albe@cybertec.at
In reply to: Zwettler Markus (OIZ) (#1)
Re: AW: running vacuumlo periodically

On Fri, 2021-01-29 at 15:44 +0000, Zwettler Markus (OIZ) wrote:

I ran "vacuumlo" in batches (-l), which worked well.

Afterwards I found the table "pg_catalog.pg_largeobject" to be massively bloated.

Sure - deleting orphaned large objects removes entries from that table.

I tried "vacuum full pg_catalog.pg_largeobject" but ran out of disk space (although 250G of disk space was free; database size = 400G).

Question:
Will "vacuum full pg_catalog.pg_largeobject" need less disk space if "maintenance_work_mem" is increased?

No, it won't. It will just be faster.
That sounds like your database consists almost exclusively of large objects...
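One way to see why VACUUM FULL runs out of room: it writes a compacted copy of the table (and rebuilds its indexes) before dropping the old files, so free space must at least cover the table's post-vacuum size. The current footprint can be checked with a standard size function:

```sql
-- Total on-disk size of pg_largeobject, including its indexes
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));
```

If that number is close to the 250G of free space, VACUUM FULL on it cannot complete.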

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com