Dump / restore for optimization?

Started by lec · almost 22 years ago · 5 messages · general
#1 lec
limec@streamyx.com

I always have to dump & restore to make a database (which has been in use
for a few months) efficient again. I have a scheduled nightly vacuum
analyze, but only a dump & restore makes the database efficient again.
Has anyone experienced this?

-lec

#2 Bill Moran
wmoran@potentialtech.com
In reply to: lec (#1)
Re: Dump / restore for optimization?

lec <limec@streamyx.com> wrote:

> I always have to dump & restore to make a database (which has been in
> use for a few months) efficient again. I have a scheduled nightly
> vacuum analyze, but only a dump & restore makes the database efficient
> again. Has anyone experienced this?

You might benefit from identifying the tables with the biggest
performance problems and the keys that are most commonly used, then
using CLUSTER on a set schedule to physically reorder those tables.
This is actually a lot like dump/restore (since it physically recreates
the tables on disk) but should take less time, especially if you can
narrow the problem down to only a few tables out of your entire database.

This may or may not help, but it's something to try.
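For concreteness, a sketch of the CLUSTER approach with made-up names
("orders" and "orders_customer_idx" are placeholders, not anything from
this thread); this uses the 7.x-era syntax, and ANALYZE afterwards
refreshes the planner statistics for the reordered table:

```sql
-- Physically rewrite "orders" in the order of its most-used index,
-- then refresh the planner statistics for it.  CLUSTER takes an
-- exclusive lock on the table while it runs, so schedule it off-peak.
CLUSTER orders_customer_idx ON orders;
ANALYZE orders;
```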

--
Bill Moran
Potential Technologies
http://www.potentialtech.com

#3 Shridhar Daithankar
shridhar@frodo.hserus.net
In reply to: lec (#1)
Re: Dump / restore for optimization?

lec wrote:

> I always have to dump & restore to make a database (which has been in
> use for a few months) efficient again. I have a scheduled nightly
> vacuum analyze, but only a dump & restore makes the database efficient
> again. Has anyone experienced this?

You could try several things if you want to avoid a dump/restore:

- Try VACUUM FULL instead of VACUUM ANALYZE.
- PostgreSQL versions before 7.4 suffered index bloat that plain VACUUM
  could not reclaim; 7.4.x fixed this, so try upgrading.
- Run the autovacuum daemon if possible.
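A sketch of the first suggestion, assuming you can tolerate the
downtime (see the locking caveat later in this thread):

```sql
-- VACUUM FULL compacts the tables and returns disk space to the
-- operating system, unlike plain VACUUM, but it takes an exclusive
-- lock on each table while it processes it.  Adding ANALYZE also
-- refreshes the planner statistics in the same pass.
VACUUM FULL ANALYZE;
```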

HTH

Shridhar

#4 Bill Montgomery
billm@lulu.com
In reply to: Shridhar Daithankar (#3)
Re: Dump / restore for optimization?

Shridhar Daithankar wrote:

> lec wrote:
>
>> I always have to dump & restore to make a database (which has been in
>> use for a few months) efficient again. I have a scheduled nightly
>> vacuum analyze, but only a dump & restore makes the database efficient
>> again. Has anyone experienced this?
>
> You could try several things if you want to avoid a dump/restore:
>
> - Try VACUUM FULL instead of VACUUM ANALYZE.

VACUUM FULL can be undesirable due to its locking effects. If you can
afford to be unable to write to your database for the duration of the
VACUUM FULL, go for it.

Otherwise, make sure you have enough FSM (free space map) space. The
postgresql.conf parameters that affect it are max_fsm_relations and
max_fsm_pages. If you have tables that see a lot of UPDATE or DELETE
action throughout the day, and tiny (e.g. default) FSM settings you may
run out of space in your FSM to track "dead" tuples. When that happens,
your tables will grow and grow, and will not be fully cleaned up by a
regular vacuum. This will lead to slowly degrading performance that will
only be fixed by a full vacuum or a dump/restore.
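A sketch of what those postgresql.conf settings look like; the numbers
below are purely illustrative, not recommendations. A reasonable way to
size max_fsm_pages is from the free-space-map summary that a
database-wide VACUUM VERBOSE prints at the end of its output (in
versions that report it):

```
# postgresql.conf -- illustrative values only; tune to your workload.
# max_fsm_relations must cover every table and index you vacuum;
# max_fsm_pages must cover the dead-tuple pages accumulated between
# vacuums across the whole cluster.
max_fsm_relations = 1000
max_fsm_pages = 200000
```

A server restart (or at least a reload, depending on version) is needed
for these to take effect.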

Good Luck,

Bill Montgomery

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Shridhar Daithankar (#3)
Re: Dump / restore for optimization?

Shridhar Daithankar <shridhar@frodo.hserus.net> writes:

> lec wrote:
>
>> I always have to dump & restore to make a database (which has been in
>> use for a few months) efficient again. I have a scheduled nightly
>> vacuum analyze, but only a dump & restore makes the database efficient
>> again. Has anyone experienced this?
>
> You could try several things if you want to avoid a dump/restore:
>
> - Try VACUUM FULL instead of VACUUM ANALYZE.
> - PostgreSQL versions before 7.4 suffered index bloat that plain VACUUM
>   could not reclaim; 7.4.x fixed this, so try upgrading.
> - Run the autovacuum daemon if possible.

The most likely bet is that he needs larger FSM settings in
postgresql.conf. I'd try larger FSM and more frequent regular
vacuums, rather than vacuum full. (autovacuum is good but
not necessary.)
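"More frequent regular vacuums" might look like the following cron
sketch; the database name, table name, and schedule are placeholders,
not anything recommended in this thread:

```
# Illustrative crontab entries, assuming local trust authentication.
# Hourly plain VACUUM ANALYZE on a heavily-updated table, plus one
# nightly database-wide pass with vacuumdb.  Neither takes the
# exclusive locks that VACUUM FULL does.
0 * * * *   psql -d mydb -c 'VACUUM ANALYZE hot_table'
30 3 * * *  vacuumdb --analyze mydb
```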

regards, tom lane