is there any adverse effect on DB if I set autovacuum scale factor to zero?
Hi All,
We are using PostgreSQL *9.2* on the *CentOS* operating system. We have
around *1300+* tables, and the following autovacuum settings are enabled.
Still, a few of the tables (84 tables) which are always busy are not being
vacuumed, and the dead tuples in those tables exceed 5000. Due to that, the
tables are bloating, and we have observed performance degradation in a few
areas.
autovacuum = on
log_autovacuum_min_duration = 100
autovacuum_max_workers = 5
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 40
autovacuum_analyze_threshold = 20
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_freeze_max_age = 200000000
autovacuum_vacuum_cost_delay = 30ms
autovacuum_vacuum_cost_limit = 1200
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
vacuum_cost_limit = 200 # 1-10000 credits
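For reference, here is a sketch of the query we use to spot those busy
tables (the pg_stat_user_tables view and its n_dead_tup/last_autovacuum
columns are available in 9.2; the 5000 cutoff mirrors the figure mentioned
above):
select relname, n_live_tup, n_dead_tup, last_autovacuum
from pg_stat_user_tables
where n_dead_tup > 5000
order by n_dead_tup desc;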
In order to avoid the table bloat and performance degradation, we would
like to set *autovacuum_vacuum_scale_factor* (zero) and
*autovacuum_vacuum_threshold* (200) for the busy tables as below. Please
let me know: is there any adverse effect on the DB if I set the autovacuum
scale factor to zero for certain tables? If yes, what is the effect and how
can we test it?
ALTER TABLE cmdevice SET (autovacuum_vacuum_scale_factor = 0,
autovacuum_vacuum_threshold = 200);
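Assuming the ALTER TABLE above is applied, a quick way to verify that the
per-table overrides took effect is to look at pg_class.reloptions, which
stores parameters set through ALTER TABLE ... SET:
select relname, reloptions
from pg_class
where relname = 'cmdevice';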
Kindly also let me know the role of the *autovacuum_vacuum_cost_delay* and
*autovacuum_vacuum_cost_limit* settings.
Regards,
Raghavendra Rao
On 08/13/2018 11:07 AM, Raghavendra Rao J S V wrote:
You don't say how large the tables are, so it's impossible to say
whether 5000 dead tuples is excessive or not. IMHO it's a negligible
amount and should not lead to excessive bloat or issues.
A certain amount of wasted space is expected - it's a trade-off between
immediate and delayed cleanup. If you delay the cleanup a bit, it's
going to be more efficient overall.
It's also unclear why the tables are not vacuumed - it may easily be due
to all the autovacuum workers being constantly busy, unable to clean up
all tables in a timely manner. In that case lowering the threshold is
not going to help - on the contrary.
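One way to check (a sketch; on 9.2 autovacuum workers appear in
pg_stat_activity with the query text prefixed by "autovacuum:") is to
watch how many workers are busy at any given moment:
select pid, query_start, query
from pg_stat_activity
where query like 'autovacuum:%';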
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi Tomas,
Thank you very much for your response.
As we know, a table becomes a candidate for the autovacuum process based on
the formula below:
Autovacuum VACUUM threshold for a table =
  autovacuum_vacuum_scale_factor * number of tuples + autovacuum_vacuum_threshold
*Current settings in my database are as follows:*
*autovacuum_vacuum_scale_factor = 0.1*
*autovacuum_vacuum_threshold = 40*
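For example, under these settings a (purely illustrative) table with
1,000,000 live tuples only becomes a vacuum candidate once it accumulates
more than 0.1 * 1,000,000 + 40 = 100,040 dead tuples.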
Due to the above formula, the dead tuples accumulate in proportion to the
number of live tuples, as shown by the output of the query below.
select relname, n_live_tup, n_dead_tup,
       (n_live_tup * .1 + 40) expected_to_autovacuum, *
from pg_stat_user_tables
where n_dead_tup > 800
order by n_live_tup desc
limit 100;
In order to avoid the dead tuple accumulation, I would like to change the
autovacuum settings in *"postgresql.conf"* as below:
*autovacuum_vacuum_scale_factor = 0.01*
*autovacuum_vacuum_threshold = 100*
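With the proposed settings, the same illustrative table with 1,000,000
live tuples would qualify after 0.01 * 1,000,000 + 100 = 10,100 dead
tuples - roughly a tenfold reduction in the dead tuples allowed to
accumulate before autovacuum triggers.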
*Kindly share your views. Does this cause any adverse effect on the DB?*
Regards,
Raghavendra Rao
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
On 08/13/2018 03:41 PM, Raghavendra Rao J S V wrote:
OK, so the tables apparently have enough dead tuples to trigger vacuum.
That means the autovacuum throughput is insufficient to do all the
cleanup. If you lower the scale factor, the amount of cleanup will
*increase* (more tables being eligible for cleanup), making it less
likely autovacuum can keep up.
You need to increase the throughput, by increasing vacuum_cost_limit or
something like that.
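For illustration only (the values below are a sketch and need tuning
against your I/O capacity), that could look like this in postgresql.conf.
Note that since autovacuum_vacuum_cost_limit is set explicitly in your
config, it takes precedence over vacuum_cost_limit for autovacuum workers:
autovacuum_vacuum_cost_limit = 2000   # more work per throttling cycle
autovacuum_vacuum_cost_delay = 10ms   # shorter sleep between cycles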
*Kindly share your views. Does this cause any adverse effect on the DB?*
Well, it forces the database to do more stuff more often, so it may
have an adverse impact, of course. It's hard to say whether it's going to
be a win overall, because we don't know how serious the bloat is.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
+ pgsql-general
Thanks and Regards,
Ashu Pachauri
---------- Forwarded message ---------
From: Ashu Pachauri <ashu210890@gmail.com>
Date: Mon, Aug 13, 2018 at 7:53 PM
Subject: Re: is there any adverse effect on DB if I set autovacuum scale
factor to zero?
To: <raghavendrajsv@gmail.com>
The way I see it, *autovacuum_vacuum_scale_factor* is not an absolute
number but the percentage of a table that may consist of updated/deleted
tuples before it becomes eligible for vacuuming. A factor of 0.1 means
that a table becomes eligible for vacuuming once more than 10% of its
tuples are deleted/updated.
1. If you think that 10% is too high for you in terms of storage cost, you
can decrease the number or set it to zero. But I would advise increasing
the value of *autovacuum_vacuum_threshold* to something reasonable if you
do that; otherwise you pay the CPU cost of frequent vacuuming across all
tables.
2. However, if your issue is not the fixed 10% overhead but a lack of
throughput, i.e. you see the number of deleted/updated tuples keep
increasing in an unbounded fashion, the right way to deal with it is a)
a higher value for *autovacuum_max_workers* and b) a lower value for
*autovacuum_naptime*.
Apart from configuration tuning, one common reason for low vacuum
throughput is lock waits. You can turn on the *log_lock_waits* setting to
find out if that's what's happening. As a general rule of thumb, you should
not have long-running transactions, especially ones that take *share/share
row exclusive/exclusive/access exclusive* locks. They hamper not only
vacuuming throughput but also the throughput of your DB writes in general.
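As a sketch, long-running transactions can be spotted via pg_stat_activity
(the xact_start and state columns are available in 9.2); the one-hour
cutoff here is only an illustration:
select pid, now() - xact_start as xact_age, state, query
from pg_stat_activity
where xact_start is not null
  and now() - xact_start > interval '1 hour'
order by xact_age desc;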
Thanks and Regards,
Ashu Pachauri
On 08/13/2018 04:24 PM, Ashu Pachauri wrote:
Increasing autovacuum_max_workers is unlikely to solve the issue with
throughput, because all the workers are throttled together - there's a
limit on the amount of work that can be done per second. Increasing the
number of workers is akin to allowing more cars onto a highway while also
lowering the speed limit.
You need to increase the limit on the amount of work, and lowering naptime
is one way to do that.
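If only a handful of busy tables are affected, another option (a sketch
with illustrative values) is to relax the throttling for just those tables
via per-table storage parameters, e.g. for the cmdevice table mentioned
earlier:
ALTER TABLE cmdevice SET (autovacuum_vacuum_cost_limit = 2000,
                          autovacuum_vacuum_cost_delay = 10);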
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services