vacuum job taking a very long time to complete
Hi,
I've started a vacuumdb on a database having 2 large tables of approx.
3,800,000 records each. The database size is approx. 2 Gbyte.
It generates a lot of logfiles; currently the pg_xlog directory has
about 4.6 Gbyte of logfiles, and when I started there were, I guess, 2
logfiles of 16 Megs each.
I still have 3.5 G available for logfiles, but suppose there is a chance
that I will run out of disk space before the vacuum is done, can I kill
it safely before that happens?
Thanks,
--
Feite Brekeveld
feite.brekeveld@osiris-it.nl
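To watch the growth Feite describes while a vacuum runs, a minimal shell sketch like the following can report segment count and size for the WAL directory (the path in the example comment is an assumption; pg_xlog lives under your data directory, wherever that is on your installation):

```shell
# Minimal sketch: report how many WAL segment files a directory holds
# and their total size in kilobytes. Point it at your pg_xlog directory
# (the example path below is an assumption; adjust for your install).
wal_usage() {
    dir="$1"
    count=$(ls -1 "$dir" | wc -l | tr -d ' ')
    size_kb=$(du -sk "$dir" | cut -f1)
    echo "$count segments, ${size_kb} KB in $dir"
}

# Example: wal_usage /usr/local/pgsql/data/pg_xlog
```

Running it every minute or so from a second terminal shows whether the 3.5 G headroom is shrinking fast enough to be a problem.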
Feite Brekeveld <feite.brekeveld@osiris-it.nl> writes:
> I still have 3.5 G available for logfiles, but suppose there is a chance
> that I will run out of disk space before the vacuum is done, can I kill
> it safely before that happens?
Yes, you can just send a SIGINT to the vacuuming backend (or type
control-C in psql, if you issued the vacuum command from psql).
You may care to apply the patch in
http://www.ca.postgresql.org/mhonarc/pgsql-patches/2001-06/msg00061.html
before trying again.
regards, tom lane
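Tom's advice can be sketched as a couple of shell commands; the ps column layout and the VACUUM match are assumptions (the backend's ps title typically shows its current command), so check the output yourself before signalling anything:

```shell
# Hypothetical sketch: find the backend whose ps title shows VACUUM and
# interrupt it with SIGINT (the same cancel as control-C in psql).
# Never kill -9 a backend -- SIGINT lets it abort the query cleanly.
vacuum_pid() {
    # PID is column 2 of ps -ef; skip our own awk process
    ps -ef | awk 'toupper($0) ~ /VACUUM/ && !/awk/ {print $2; exit}'
}

pid=$(vacuum_pid)
if [ -n "$pid" ]; then
    echo "backend $pid is vacuuming; cancel it with: kill -INT $pid"
    # kill -INT "$pid"
fi
```

The `kill` line is left commented as a safety measure; verify the PID belongs to the vacuuming backend before uncommenting it.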
Tom Lane wrote:
> Feite Brekeveld <feite.brekeveld@osiris-it.nl> writes:
>> I still have 3.5 G available for logfiles, but suppose there is a chance
>> that I will run out of disk space before the vacuum is done, can I kill
>> it safely before that happens?
>
> Yes, you can just send a SIGINT to the vacuuming backend (or type
> control-C in psql, if you issued the vacuum command from psql).

What happens with the logs when you do that? Are they cleaned up because of
the SIGINT?

> You may care to apply the patch in
> http://www.ca.postgresql.org/mhonarc/pgsql-patches/2001-06/msg00061.html
> before trying again.
>
> regards, tom lane
--
Feite Brekeveld
feite.brekeveld@osiris-it.nl
http://www.osiris-it.nl
On Wed, Jun 27, 2001 at 04:10:05PM +0200, Feite Brekeveld wrote:
: Hi,
:
: I've started a vacuumdb on a database having 2 large tables of approx.
: 3,800,000 records each. The database size is approx. 2 Gbyte.
I'm seeing a similar slowdown, but on much smaller tables (roughly
10,000 and 30,000 rows apiece). It takes up to a minute to analyze the
10,000 row table (see other messages to this list about the constantly
updating nature of the application using this database). The weird
thing is that these vacuums took barely any time before a recent
required restart of the database. Nothing's changed in the config, yet
now the vacuums take a while. I'm not sure what to make of it.
The system doesn't appear to be under any sort of increased burden (in
fact, postgres is using barely any resources during the analyze).
As Austin Powers would say, "That's not right ..."
* Philip Molter
* DataFoundry.net
* http://www.datafoundry.net/
* philip@datafoundry.net