Bloated toast table with empty associated table

Started by Paul Allen · about 1 year ago · 7 messages · general
#1 Paul Allen
paulcrtool@gmail.com

Hello.

Preconditions.

I have an empty table and repeatedly execute `insert ... on conflict do
update ...` against it. The rows I try to insert violate a foreign key
constraint, so every insert fails and the table stays empty. The table
has some bytea columns with the default storage type; its purpose is to
store images. PostgreSQL version is 15, everything is default, and the
autovacuum settings are

```
autovacuum on
autovacuum_analyze_scale_factor 0.1
autovacuum_analyze_threshold 50
autovacuum_freeze_max_age 200000000
autovacuum_max_workers 3
autovacuum_multixact_freeze_max_age 400000000
autovacuum_naptime 60
autovacuum_vacuum_cost_delay 20
autovacuum_vacuum_cost_limit -1
autovacuum_vacuum_scale_factor 0.2
autovacuum_vacuum_threshold 50
autovacuum_work_mem -1
log_autovacuum_min_duration -1
```

Problem.

My backend application repeatedly and unsuccessfully attempts to insert
the same ~100 rows with images, and although the table's row count
remains 0, the TOAST table's size keeps growing, reaching 100, 200, 300
GB until it takes all available space.
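The growth can be watched directly; a minimal sketch (table name taken from this thread, everything else standard catalogs) that reports the main table's live row count next to its TOAST footprint:

```
-- Sketch: compare the logical row count with the physical TOAST size.
SELECT c.reltoastrelid::regclass                              AS toast_table,
       pg_size_pretty(pg_total_relation_size(c.reltoastrelid)) AS toast_size,
       (SELECT count(*) FROM controlzone_passage)              AS live_rows
FROM pg_class c
WHERE c.relname = 'controlzone_passage';
```

Run periodically, this shows the "0 rows but hundreds of GB" mismatch described above.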

VACUUM FULL fixes this, but I want an automatic solution. I tried to
alter the table, believing that the settings below would force
autovacuum to clean the TOAST table anyway, but it had no effect.

```
alter table controlzone_passage set (
autovacuum_enabled = true,
toast.autovacuum_enabled = true,
autovacuum_vacuum_threshold = 0,
toast.autovacuum_vacuum_threshold = 0,
autovacuum_vacuum_scale_factor = 0,
toast.autovacuum_vacuum_scale_factor = 0,
autovacuum_analyze_threshold = 0,
autovacuum_analyze_scale_factor = 0);
```

At the moment, my assumption is that autovacuum is not running because
the number of rows in the table does not change and remains zero. Any
solution will suit me: for example, not writing rows to TOAST when their
insertion fails, or the proper autovacuum settings.
Please tell me what can be done.

Thanks. Daniil Rozanov

#2 Ron
ronljohnsonjr@gmail.com
In reply to: Paul Allen (#1)
Re: Bloated toast table with empty associated table

On Thu, Mar 20, 2025 at 7:40 AM Paul Allen <paulcrtool@gmail.com> wrote:

> PostgreSQL version is 15, everything is default, autovacuum settings is
>
> autovacuum_analyze_scale_factor 0.1
> autovacuum_vacuum_scale_factor 0.2
> autovacuum_vacuum_threshold 50

Your _scale_factor values are too high. Drop them down to about 5%.

That's not the proximate cause, though.

> My backend application attempts unsuccessfully repeatedly to insert the
> same ~100 rows with images,

Instantaneously and repeatedly, while ignoring the error?

> and despite table's row count remains 0,
> toast table's size is growing up permanently, reaching 100, 200, 300 GB
> until it takes all available space.
>
> Any solution will suit me, for example, not to write rows to toast
> if their insertion failed. Or the proper setting of the autovacuum.
> Please tell me what can be done.

I'd create a cron entry that does a regular "vacuumdb -d the_db -t
controlzone_passage". How often you run it depends on how quickly it
bloats.
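Concretely, that suggestion might look like the crontab fragment below; the schedule, binary path, and database name are placeholders to adapt:

```
# Hypothetical crontab entry: plain VACUUM of the one bloated table every
# 15 minutes. Unlike VACUUM FULL, plain VACUUM does not take an
# ACCESS EXCLUSIVE lock, so it can run alongside normal traffic.
*/15 * * * * /usr/bin/vacuumdb -d the_db -t controlzone_passage --quiet
```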

--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!

#3 Paul Allen
paulcrtool@gmail.com
In reply to: Ron (#2)
Re: Bloated toast table with empty associated table

> Instantaneously and repeatedly, while ignoring the error?

Yes, that's how it should work, and I probably can't do anything about it.

> Your _scale_factor values are too high. Drop them down to about 5%.

Okay, but what about altering the controlzone_passage table, where I set
all _scale_factor values to 0? If that had no effect, then how will a
value of 5% help? Maybe I misunderstand, but the table does not change
by any number of rows and its logical size remains zero. Anyway, I will
try it.

> I'd create a cron entry that does a regular "vacuumdb -d the_db -t
> controlzone_passage". How often you run it depends on how quickly it
> bloats.

Seems like it is the only solution for now.


#4 Ron
ronljohnsonjr@gmail.com
In reply to: Paul Allen (#3)
Re: Bloated toast table with empty associated table

On Thu, Mar 20, 2025 at 8:22 AM Paul Allen <paulcrtool@gmail.com> wrote:

> > Instantaneously and repeatedly, while ignoring the error?
>
> Yes, that's how it should work and I probably can't do anything about it.
>
> > Your _scale_factor values are too high. Drop them down to about 5%.
>
> Okay, but what about altering controlzone_passage table, where I set
> all _scale_factor values to 0? If this did not have an effect, then
> how will the value of 5% affect?

It's a general principle: the _scale_factor defaults are, in my
experience, too high.

Like I said, "That's not the proximate cause, though" of this table's
bloat.

> > I'd create a cron entry that does a regular "vacuumdb -d the_db -t
> > controlzone_passage". How often you run it depends on how quickly it
> > bloats.
>
> Seems like it is the only solution for now.

The autovacuum daemon can't know/see everything.


#5 Paul Allen
paulcrtool@gmail.com
In reply to: Ron (#4)
Re: Bloated toast table with empty associated table

> It's the general principle that the _scale_factor defaults are in my
> experience too high.

Sorry, I didn't mention that.

Ok, thanks, I will use the cron task.

This TOAST behaviour still seems odd to me. Is there a way to prevent it
entirely, so that rows from failed inserts are not stored?


#6 Ron
ronljohnsonjr@gmail.com
In reply to: Paul Allen (#5)
Re: Bloated toast table with empty associated table

On Thu, Mar 20, 2025 at 8:45 AM Paul Allen <paulcrtool@gmail.com> wrote:

> > It's the general principle that the _scale_factor defaults are in my
> > experience too high.
>
> Sorry, I didn't mention that.
>
> Ok, thanks, I will use the cron task.
>
> This TOAST behaviour still seems odd to me. Is there a way to prevent
> it entirely, so that rows from failed inserts are not stored?

Fix the cause of the FK violation? Check for possible FK violations before
they happen?

See, the problem you're having is that PG is properly doing what it was
told to do: INSERT records.

And... it did just that. But what *must* PG do when it discovers a
constraint violation 99% of the way through inserting those 100 records?
That's right: remove the records.

Thus, you get bloat.
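One way to avoid feeding TOAST at all, sketched here under the assumption that the failing constraint references a parent table (`staging_rows` and `parent` are hypothetical names): filter out rows whose FK target is missing before inserting, so the bytea values of doomed rows are never written.

```
-- Sketch: insert only rows whose FK target exists, so rows that would
-- abort the statement never reach the table or its TOAST relation.
-- staging_rows, parent, and the column names are hypothetical.
INSERT INTO controlzone_passage (id, parent_id, image)
SELECT s.id, s.parent_id, s.image
FROM staging_rows s
WHERE EXISTS (SELECT 1 FROM parent p WHERE p.id = s.parent_id)
ON CONFLICT (id) DO UPDATE SET image = EXCLUDED.image;
```

Rows filtered out this way can be logged separately instead of being retried in a loop.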


#7 Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Paul Allen (#1)
Re: Bloated toast table with empty associated table

On 3/20/25 04:39, Paul Allen wrote:

> Hello.
>
> Problem.
>
> My backend application attempts unsuccessfully repeatedly to insert the
> same ~100 rows with images, and despite table's row count remains 0,
> toast table's size is growing up permanently, reaching 100, 200, 300 GB
> until it takes all available space.

You are going to need to show the complete process you are using as well
as the error messages you get in the Postgres log.

> Thanks. Daniil Rozanov

--
Adrian Klaver
adrian.klaver@aklaver.com