autovacuum

Started by Enzo Daddario (12 messages)
#1 Enzo Daddario
enzo@pienetworks.com

Hi All,

I am concerned about the impact autovacuum of tables would have on
regular DB activity.

With our current DBs, the majority of tables have either a low number
of updates or a large number of inserts (which I believe should not be
a problem); however, a small number of tables have an extremely high
number of updates (up to 150 000).

#2 Matthew T. O'Connor
matthew@zeut.net
In reply to: Enzo Daddario (#1)
Re: autovacuum

Legit concern. However, one of the things autovacuum is supposed to do
is skip tables that don't need vacuuming, which can result in an
overall reduction in vacuum overhead. In addition, if you see that
autovacuum is firing off vacuum commands during the day and they are
impacting your response time, you can play with the vacuum cost delay
settings, which are designed to throttle the I/O impact vacuum commands
can have. Also, if you use 8.1, you can set per-table thresholds and
per-table vacuum cost delay settings, and autovacuum will respect the
work done by non-autovacuum vacuum commands, meaning that if you
manually vacuum tables at night during a maintenance window, autovacuum
will take that into account. The contrib autovacuum couldn't do this.
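
For illustration, here is roughly what that looks like on 8.1; the
table name is made up, and the thresholds and delays are illustrative
guesses rather than recommendations:

-- Global throttling knobs in postgresql.conf (8.1):
--   autovacuum_vacuum_cost_delay = -1  # -1 = fall back to vacuum_cost_delay
--   autovacuum_vacuum_cost_limit = -1  # -1 = fall back to vacuum_cost_limit

-- Per-table settings in 8.1 live in the pg_autovacuum system catalog.
INSERT INTO pg_autovacuum
       (vacrelid, enabled,
        vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor,
        vac_cost_delay, vac_cost_limit)
VALUES ('hot_table'::regclass, true,
        200, 0.05,    -- vacuum once ~200 + 5% of the rows are dead
        100, 0.02,    -- analyze somewhat more eagerly
        10, -1);      -- 10 ms cost delay; -1 = use the global setting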

Hope that helps. Real-world feedback is always welcome.

Matt


#3 Chris Browne
cbbrowne@acm.org
In reply to: Enzo Daddario (#1)
Re: autovacuum

matthew@zeut.net ("Matthew T. O'Connor") writes:

Hope that helps. Real-world feedback is always welcome.

I have a question/suggestion...

Something we found useful with Slony-I was the notion of checking the
eldest XID on the system to see if there was any point at all in
bothering to vacuum. I don't see anything analogous in autovacuum.c;
this might well be a useful addition.

In the Slony-I cleanup thread loop, we collect, in each iteration, the
current earliest XID.

In each iteration of this loop, we check to see if that XID has
changed.

- First time through, it changes from 0 to 'some value', and so the
thread tries to do a vacuum.

- But supposing you have some long-running transaction (say, a pg_dump
that runs for 2h), it becomes pretty futile to bother trying to
vacuum things for the duration of that transaction, because that
long-running transaction will, via MVCC, keep any old tuples from
being removed.

It strikes me as a slick idea for autovacuum to take on that
behaviour. If the daily backup runs for 2h, then it is quite futile
to bother vacuuming a table multiple times during that 2h period when
none of the tuples obsoleted during the 2h period will be able to be
cleaned out until the end.

Presumably this means that, during that 2h period, pg_autovacuum would
probably only issue ANALYZE statements...
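
Nothing exposed at the SQL level in 8.1 gives you that XID directly,
but as a rough sketch of the bookkeeping (using the txid_* functions
that later releases provide, and a made-up one-row state table):

-- Hypothetical state table remembering the previous horizon:
CREATE TABLE vacuum_xid_state (last_xmin bigint NOT NULL);
INSERT INTO vacuum_xid_state VALUES (0);

-- Each iteration: fetch the oldest transaction still in progress.
SELECT txid_snapshot_xmin(txid_current_snapshot());

-- If that equals last_xmin, the same old transaction is still pinning
-- dead tuples, so skip VACUUM and issue only ANALYZE; otherwise VACUUM,
-- then advance the saved horizon:
UPDATE vacuum_xid_state
   SET last_xmin = txid_snapshot_xmin(txid_current_snapshot());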
--
let name="cbbrowne" and tld="ntlug.org" in String.concat "@" [name;tld];;
http://www.ntlug.org/~cbbrowne/languages.html
Rules of the Evil Overlord #51. "If one of my dungeon guards begins
expressing concern over the conditions in the beautiful princess'
cell, I will immediately transfer him to a less people-oriented
position." <http://www.eviloverlord.com/&gt;

#4 Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Chris Browne (#3)
Re: [HACKERS] autovacuum

Chris Browne wrote:

It strikes me as a slick idea for autovacuum to take on that
behaviour. If the daily backup runs for 2h, then it is quite futile
to bother vacuuming a table multiple times during that 2h period when
none of the tuples obsoleted during the 2h period will be able to be
cleaned out until the end.

Hmm, yeah, sounds useful. There's one implementation issue to note,
however: the autovacuum process dies and restarts for each iteration,
so there's no way for it to remember previous state unless it's saved
somewhere permanent, as the stats info is.

However, this seems at least slightly redundant with the "maintenance
window" feature -- you could set a high barrier to vacuum during the
daily backup period instead. (Anybody up for doing this job?)

--
Alvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"No single strategy is always right (Unless the boss says so)"
(Larry Wall)

#5 Matthew T. O'Connor
matthew@zeut.net
In reply to: Alvaro Herrera (#4)
Re: [HACKERS] autovacuum

Alvaro Herrera wrote:

However, this seems at least slightly redundant with the "maintenance
window" feature -- you could set a high barrier to vacuum during the
daily backup period instead. (Anybody up for doing this job?)

I can't promise anything, but it's on my list of things to hopefully
find time for in the coming months. No way I can start it in Feb, but
maybe sometime in March. Anyone else?

Matt

#6 Chris Browne
cbbrowne@acm.org
In reply to: Enzo Daddario (#1)
Re: autovacuum

alvherre@alvh.no-ip.org (Alvaro Herrera) writes:


Hmm, yeah, sounds useful. There's one implementation issue to note,
however: the autovacuum process dies and restarts for each iteration,
so there's no way for it to remember previous state unless it's saved
somewhere permanent, as the stats info is.

Hmm. It restarts repeatedly??? Hmmm...

However, this seems at least slightly redundant with the "maintenance
window" feature -- you could set a high barrier to vacuum during the
daily backup period instead. (Anybody up for doing this job?)

In effect, this would be an alternative to the "window" feature. You
open the window by starting pg_dump; pg_autovacuum would automatically
notice that as the eldest XID, and stop work until the pg_dump
actually finished.

In a way, it strikes me as more elegant; it would automatically detect
"backup windows," catching their *exact* start and end times...
--
select 'cbbrowne' || '@' || 'ntlug.org';
http://www.ntlug.org/~cbbrowne/linuxxian.html
"I'm sorry, Mr. Kipling, but you just don't know how to use the
English Language." -- Editor of the San Francisco Examiner, informing
Rudyard Kipling, who had one article published in the newspaper, that
he needn't bother submitting a second, 1889

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#4)
Re: [HACKERS] autovacuum

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

Hmm, yeah, sounds useful. There's one implementation issue to note,
however: the autovacuum process dies and restarts for each iteration,
so there's no way for it to remember previous state unless it's saved
somewhere permanent, as the stats info is.

I think you'd really need to remember the previous oldest XID on a
per-table basis to get full traction out of the idea. But weren't we
thinking of tracking something isomorphic to this for purposes of
minimizing anti-wraparound VACUUMs?
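
(A per-table variant of that bookkeeping could be as simple as a small
state table keyed by table OID; this is purely hypothetical, just to
make the idea concrete:)

-- Hypothetical per-table memory of the vacuum horizon:
CREATE TABLE autovac_xid_state (
    relid     oid    PRIMARY KEY,  -- the table in question
    last_xmin bigint NOT NULL      -- oldest running XID at its last vacuum
);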

regards, tom lane

#8 Chris Browne
cbbrowne@acm.org
In reply to: Enzo Daddario (#1)
Re: autovacuum

tgl@sss.pgh.pa.us (Tom Lane) writes:


I think you'd really need to remember the previous oldest XID on a
per-table basis to get full traction out of the idea. But weren't
we thinking of tracking something isomorphic to this for purposes of
minimizing anti-wraparound VACUUMs?

I think I'd like that even better :-).

In the Slony-I case, the tables being vacuumed are ones where the
deletion is taking place within the same thread, so having one XID is
plenty, because the only thing that should be touching the tables is
the cleanup thread, which is invoked every 10 minutes. One XID is
enough "protection" for that, at least as a reasonable approximation.

Tracking just the one eldest XID is still quite likely to be
*reasonably* useful with autovacuum, assuming there isn't a by-table
option. By-table would be better, though.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/sgml.html
"Politics is not a bad profession. If you succeed there are many
rewards, if you disgrace yourself you can always write a book."
-- Ronald Reagan

#9 Chris Browne
cbbrowne@acm.org
In reply to: Enzo Daddario (#1)
Re: autovacuum

matthew@zeut.net ("Matthew T. O'Connor") writes:

Hope that helps. Real world feed-back is always welcome.

While I'm at it, I should throw in an idea that I had a little while
back about a "vacuum request manager."

This is kind of orthogonal to everything else that has been happening
with pg_autovacuum...

One of the troubles we have been hitting with our homebrew scripts is
that when locking doesn't work out, they start submitting multiple
vacuums at once, which sometimes piles up badly.

A thought I had was to create a daemon that would serially process
requests. It would just watch a table of requests, and when it finds
work, start work.

We'd then have some sort of "injection" process that would tell the
daemon "Here's new work!"

Requests would be defined thus:

/* cbbrowne@[local]/dba2 vacdb=*/ \d vacuum_requests
            Table "public.vacuum_requests"
    Column    |           Type           |       Modifiers
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 urgency      | integer                  | not null default 1
 created_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone |
 failed_at    | timestamp with time zone |
Indexes:
    "vacuum_requests_pkey" primary key, btree (vtable, vhost, vdatabase, created_on)
    "vr_priority" btree (vhost, vdatabase, urgency) WHERE ((completed_on IS NULL) AND (failed_at IS NULL))

/* cbbrowne@[local]/dba2 vacdb=*/ \d vacuum_start
             Table "public.vacuum_start"
    Column    |           Type           |       Modifiers
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 started_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone |
Indexes:
    "vacuum_start_pkey" primary key, btree (vtable, vhost, vdatabase, started_on)

/* cbbrowne@[local]/dba2 vacdb=*/ \d vacuum_failures
           Table "public.vacuum_failures"
   Column   |           Type           |       Modifiers
------------+--------------------------+------------------------
 vtable     | text                     | not null
 vhost      | text                     | not null
 vdatabase  | text                     | not null
 started_on | timestamp with time zone | not null
 failed_on  | timestamp with time zone | not null default now()
Indexes:
    "vacuum_failures_pkey" primary key, btree (vtable, vhost, vdatabase, started_on)

This has a bit more generality than would be needed for handling just
one postmaster; host/database would allow this to be used to manage
multiple backends...

We have, in our "kludged-up scripts," three levels of granularity:

1. There are tables we vacuum every few minutes; they would be at
urgency 1; every few minutes, we would, in effect, run the query...

insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
select t.fqtablename, h.hostname, tld.name, 1
from urgent_tables t, all_hosts h, all_tlds tld;

2. Then, there are "hourly" tables, at urgency level 2.

Once an hour, we run:

insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
select t.fqtablename, h.hostname, tld.name, 2
from hourly_tables t, all_hosts h, all_tlds tld;

3. Once a day, we'd do something kind of like:

insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
select table_schema || '.' || table_name, h.hostname, tld.name, 3
from information_schema.tables, all_hosts h, all_tlds tld
where table_type = 'BASE TABLE' and table_schema in ('public', 'pg_catalog');

The event loop for the daemon would be to look up the highest priority
table, and add an entry to vacuum_start.

Then it vacuums the table.

If that succeeds, the table is marked as complete in both
vacuum_start, and, FOR EVERY ENTRY CURRENTLY OUTSTANDING, in
vacuum_requests. Thus, if a table is queued up 20 times, it will be
vacuumed once, and marked as done 20 times.

If that fails, all the relevant entries in vacuum_start and
vacuum_requests are marked with the failure information, and a record
is added to the failures table.
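
In SQL terms, each iteration of that loop might look something like
this (a sketch against the tables above; the $n parameters stand in
for the request that was picked):

-- Pick the most urgent outstanding request:
SELECT vtable, vhost, vdatabase
  FROM vacuum_requests
 WHERE completed_on IS NULL AND failed_at IS NULL
 ORDER BY urgency, created_on
 LIMIT 1;

-- Record the start, then run the actual VACUUM against the target:
INSERT INTO vacuum_start (vtable, vhost, vdatabase)
VALUES ($1, $2, $3);

-- On success, close out EVERY outstanding request for that table:
UPDATE vacuum_requests
   SET completed_on = now()
 WHERE vtable = $1 AND vhost = $2 AND vdatabase = $3
   AND completed_on IS NULL AND failed_at IS NULL;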

We're putting this off for now, on the thought that, with 8.1, it's
worth testing out pg_autovacuum again.

The above is an "in-the-database" way of queueing up requests,
associating priorities to them, and having the queue be
administrator-visible.

We were anticipating using our present quasi-kludgy scripts to add our
favorite tables to the queue; it would seem a nice/natural thing for
there to be some automatic process (à la the pg_autovacuum daemon) that
could add things to the queue based on its knowledge of updates.

My thought is that if anything about the above appears useful to
pg_autovacuum, I'd be happy if pg_autovacuum grabbed (stole? ;-)) some
of the ideas.
--
"cbbrowne","@","cbbrowne.com"
http://cbbrowne.com/info/sap.html
"The X-Files are too optimistic. The truth is *not* out there..."
-- Anthony Ord <nws@rollingthunder.co.uk>

#10 Jim C. Nasby
jnasby@pervasive.com
In reply to: Chris Browne (#9)
Re: autovacuum

This seems maybe a bit overkill to me. I think what would be more useful
is if autovacuum could execute more than one vacuum at a time, and you
could specify tables that are high priority (or possibly just say that
all tables with fewer than X live tuples in them are high priority). That
way a longer-running vacuum on a large table wouldn't prevent more
vacuum-sensitive tables (such as queues) from being vacuumed frequently
enough.
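
For example, the "fewer than X live tuples" rule could be approximated
from the planner's statistics (reltuples is only an estimate, and the
cutoff below is arbitrary):

-- Hypothetical "high priority" list: small tables per the stats.
SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND c.reltuples < 10000
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');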


--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461

#11 Christopher Browne
cbbrowne@acm.org
In reply to: Enzo Daddario (#1)
Re: autovacuum

jnasby@pervasive.com ("Jim C. Nasby") writes:

This seems maybe a bit overkill to me. I think what would be more useful
is if autovacuum could execute more than one vacuum at a time, and you
could specify tables that are high priority (or possibly just say that
all tables with fewer than X live tuples in them are high priority). That
way a longer-running vacuum on a large table wouldn't prevent more
vacuum-sensitive tables (such as queues) from being vacuumed frequently
enough.

Actually, I can think of a case for much the opposite, namely wanting
to concurrently vacuum some LARGE tables...

Suppose you have 2 rather big tables that get updates on similar
schedules such that both will have a lot of dead tuples at similar
times.

And suppose both of these tables are Way Large, so that they take
six hours to vacuum.

I could argue for kicking off vacuums on both at the same moment;
they'll both be occupying transactions for 1/4 of a day, and, with
possibly related patterns of updates, doing them one after the other
*wouldn't* necessarily get you more tuples cleaned than doing them
concurrently.

I'm not sure that's a case worth pushing pg_autovacuum to be smart
enough to handle, either; I'm just putting out some ideas that got
enough internal discussion to suggest they were interesting enough for
others to consider...
--
"cbbrowne","@","gmail.com"
http://cbbrowne.com/info/linuxdistributions.html
"Transported to a surreal landscape, a young girl kills the first
woman she meets and then teams up with three complete strangers to
kill again." -- Unknown, Marin County newspaper's TV listing for _The
Wizard of Oz_

#12 Scott Marlowe
smarlowe@g2switchworks.com
In reply to: Christopher Browne (#11)
Re: autovacuum

On Wed, 2006-02-01 at 20:32, Christopher Browne wrote:

Actually, I can think of a case for much the opposite, namely wanting
to concurrently vacuum some LARGE tables...

This could be a big win on databases where those two tables were on
different tablespaces, since vacuum then wouldn't be fighting twice
for the same thin I/O stream.

The autovacuum daemon could schedule vacuums so that each tablespace
had its own list of vacuums to run, and then run those lists in
parallel (i.e. tablespace1 has one single vacuum working through its
list while tablespace2 has its own single vacuum).

Maybe even a setting that told it the max number to run in parallel for
each tablespace. After all, a tablespace running on 30 hard drives in a
RAID-10 could handle several concurrent vacuums, while another
tablespace running on a single drive would be best limited to one
vacuum at a time.
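
As a sketch, those per-tablespace work-lists could be built by grouping
tables on reltablespace (0 means the database's default tablespace,
shown here as pg_default for simplicity):

-- Hypothetical: one vacuum work-list per tablespace.
SELECT COALESCE(t.spcname, 'pg_default') AS tablespace, c.relname
  FROM pg_class c
  LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE c.relkind = 'r'
 ORDER BY tablespace, c.relname;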