Bloated pg_catalog.pg_largeobjects

Started by Jon Erdman · over 1 year ago · 4 messages · general
#1Jon Erdman
postgresql@thewickedtribe.net

Hello All,

I've got a cluster that's having issues with pg_catalog.pg_largeobject getting massively bloated. Vacuum is running OK and there's 700GB of free space in the table and only 100GB of data, but subsequent inserts don't seem to be using space from the FSM and instead always allocate new pages. The table just keeps growing.

Is this a known thing, maybe something special about LOs?

Also, is the only way to recover space here a vacuum full on the table since it's a catalog table?

Thanks,

-- 
Jon Erdman (aka StuckMojo on IRC)
    PostgreSQL Zealot

#2Priancka Chatz
pc9926@gmail.com
In reply to: Jon Erdman (#1)
Re: Bloated pg_catalog.pg_largeobjects

You have to run vacuumlo to remove orphaned large objects.
https://www.postgresql.org/docs/current/vacuumlo.html
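
A typical invocation might look like this (the database name "mydb" is just a placeholder):

```shell
# Dry run: report what vacuumlo would remove, without removing anything
vacuumlo -n -v mydb

# Actually remove the orphaned large objects
vacuumlo -v mydb
```

Note that vacuumlo only unlinks orphaned large objects; a subsequent vacuum of pg_largeobject is still what makes the freed space reusable.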

Regards,
Priyanka


#3khan Affan
bawag773@gmail.com
In reply to: Jon Erdman (#1)
Re: Bloated pg_catalog.pg_largeobjects

Hi

I would suggest backing up your DB before doing anything like this.

Then run VACUUM FULL (VACUUM FULL pg_catalog.pg_largeobject). Running this on a system table can be risky, so make sure you have a backup of the database first.

Alternatively, if you are on a PG version above 9.1, you can use pg_repack to reclaim the space.

Note: this can be disruptive, so planning and preparing for potential
downtime is essential.
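
A sketch of that sequence, again with a hypothetical database name (VACUUM FULL rewrites the table under an ACCESS EXCLUSIVE lock, so the backup step matters):

```shell
# Back up first -- custom-format dumps include large objects by default
pg_dump -Fc -f mydb.dump mydb

# Rewrite pg_largeobject, returning the free space to the OS
psql -d mydb -c "VACUUM FULL pg_catalog.pg_largeobject;"
```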

Thanks & regards

Muhammad Affan (아판)

PostgreSQL Technical Support Engineer / Pakistan R&D

Interlace Plaza 4th floor Twinhub office 32 I8 Markaz, Islamabad, Pakistan


#4Priancka Chatz
pc9926@gmail.com
In reply to: khan Affan (#3)
Re: Bloated pg_catalog.pg_largeobjects

VACUUM FULL will not help here if you have large objects present in
pg_largeobject that are no longer referenced by any table. vacuumlo
doesn't require downtime, but depending on how much data it needs to remove it can run
long and use significant resources, so schedule it during off-peak hours. You can do
a dry run (vacuumlo -n) to get an estimate.
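
As a rough cross-check of what the dry run reports, you can count large objects not referenced from a given column by hand. This sketch assumes a hypothetical table my_table with an oid column lo_ref; in practice vacuumlo scans every oid and lo column for you:

```shell
# Count large objects with no matching reference in my_table.lo_ref (hypothetical names)
psql -d mydb -c "
SELECT count(*)
FROM pg_largeobject_metadata m
WHERE NOT EXISTS (SELECT 1 FROM my_table t WHERE t.lo_ref = m.oid);"
```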
