Re: vacuum...
On Thu, 12 Mar 1998, Steve Tarkalson wrote:
Hi,
thanks for your response to my questions. We are
reviewing our backup/vacuum procedures in light of
your information. The I/O subsystem is SCSI, comprised of (8) 4Gb Seagates on
(2) DPT controllers in a RAID 5 configuration.
Ouch...I'm just curious here, but what does an 'ls -l' look like
for your database?
Our database is made up of approximately 50 tables with
6 of these incurring relentless "hits". The problem table
I referred to is the largest (+300K) and is read/written
to most frequently. We could possibly do further normalization
here.
This might help, as, depending on how it is done, you can reduce
the overall size of the database that the vacuums have to cover... one
other thing to try is vacuuming each table separately.
This might sound odd, but if, for instance, you stagger your
vacuums, you might find that the impact on your system is a little less.
Basically, create a file with all your tables listed in it, and do
something like:
========================
#!/bin/sh
# vacuum each table in turn, pausing between tables so that
# other clients get a chance at the database in between
for i in `cat <table file>`
do
        psql -c "vacuum analyze $i" <database>
        sleep 30
done
========================
The problem (one which we hope to have addressed in v6.4) is that
when vacuum runs, it locks the pg_class table, which essentially locks the
complete database. The above will run a vacuum on a particular table,
locking pg_class, then, when complete, will release that lock so that
other readers/writers can grab it... 30 seconds later, the next table
gets done. This basically allows other parts of the system to grab
access to the database, even if only for a little while, so that the
whole system doesn't deadlock.
The locking manager in v6.3 has been considerably enhanced to
improve this...
I was curious about your thoughts on the problem
with the corrupted index "FATAL 1:btree BTP_CHAIN was expected".
Was this a known bug in pre-6.3 versions? We have experienced
spurious postgres process core dumps. Can this corrupt the
indexes/db?
I believe so, but I've CC'd pgsql-hackers@postgresql.org on this,
so that those better informed/more knowledgeable may respond...
You suggested that we upgrade to 6.3 and I agree
that we will do this in the very near future. One
question regarding this. Is regular vacuuming still
a requirement in 6.3?
regular vacuuming will always be a requirement, as it updates
various statistics... but one of the things that I believe is being
looked into is having the database overwrite deleted records, so that a
vacuum isn't required to remove those records and shrink the database...
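	In the meantime, the simplest thing is to just schedule it. As a
minimal sketch, a nightly cron entry along these lines would do (the
database name is a placeholder, same as in the script above):
========================
# run a nightly vacuum at 3am against the database
0 3 * * * psql -c "vacuum analyze" <database>
========================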
Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker wrote:
psql -c "vacuum analyze $i" <database>
                ^^^^^^^
The analyze step takes additional time and shouldn't be used on each run.
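For routine runs, a plain vacuum is enough; something like this variant
of the loop above (same placeholder names):
========================
#!/bin/sh
# plain vacuum only; run the "vacuum analyze" variant less often
for i in `cat <table file>`
do
        psql -c "vacuum $i" <database>
        sleep 30
done
========================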
I was curious about your thoughts on the problem
with the corrupted index "FATAL 1:btree BTP_CHAIN was expected".
Was this a known bug in pre-6.3 versions? We have experienced
spurious postgres process core dumps. Can this corrupt the
indexes/db?
Yes. Core dumps and power-offs can break indices in all versions.
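The usual fix for a broken index is to drop and recreate it; as a
minimal sketch (index, table, and column names here are hypothetical):
========================
# rebuild a corrupted btree index by dropping and recreating it
psql -c "drop index mytable_id_idx" <database>
psql -c "create index mytable_id_idx on mytable (id)" <database>
========================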
You suggested that we upgrade to 6.3 and I agree
that we will do this in the very near future. One
question regarding this. Is regular vacuuming still
a requirement in 6.3?

regular vacuuming will always be a requirement, as it updates
various statistics... but one of the things that I believe is being
looked into is having the database overwrite deleted records, so that a
vacuum isn't required to remove those records and shrink the database...
Required, currently.
Vadim