Problem with VACUUM after very large delete?

Started by Doug Fields · over 23 years ago · 4 messages · general
#1 Doug Fields
dfields-pg-general@pexicom.com

Hello all,

(7.2.1 on Debian Woody 3.0 with various 7.2.3 patches backported)

Earlier today I deleted about 31 million records from one of our tables
with a very complex query which required a full table scan. This query took
quite a few hours to run (in fact, overnight).

Later, I did a VACUUM and then an ANALYZE.

However, it does not appear that either:

1) The records were actually deleted or

2) The VACUUM is working properly

Why? The VACUUM doesn't seem to be vacuuming the two indexes on the table.
I'm not sure what's going on, but if you could make any suggestions, I'd
appreciate it.

Details are below.

Thanks,

Doug

pexicast_lg=# ANALYZE VERBOSE audit_log;
NOTICE: Analyzing audit_log
ANALYZE
pexicast_lg=# VACUUM VERBOSE audit_log;
NOTICE: --Relation audit_log--
NOTICE: Pages 2083605: Changed 2961, Empty 0; Tup 105773248: Vac 0, Keep 31106432, UnUsed 2.
Total CPU 89.49s/13.67u sec elapsed 1101.17 sec.
VACUUM
pexicast_lg=# \d audit_log
Table "audit_log"
Column | Type | Modifiers
----------+--------------------------+---------------
at | timestamp with time zone | default now()
type | integer |
severity | integer |
who | integer |
target | integer |
var1 | integer |
var2 | integer |
message | character varying(255) |
ip | character(15) |
login | integer |
Indexes: audit_type_target_idx,
audit_type_var1_idx

#2 Robert Treat
xzilla@users.sourceforge.net
In reply to: Doug Fields (#1)
Re: Problem with VACUUM after very large delete?

On Tue, 2002-12-17 at 17:47, Doug Fields wrote:

> Hello all,
>
> (7.2.1 on Debian Woody 3.0 with various 7.2.3 patches backported)
>
> Earlier today I deleted about 31 million records from one of our tables
> with a very complex query which required a full table scan. This query
> took quite a few hours to run (in fact, overnight).
>
> Later, I did a VACUUM and then an ANALYZE.
>
> However, it does not appear that either:
>
> 1) The records were actually deleted or
>
> 2) The VACUUM is working properly
>
> Why? The VACUUM doesn't seem to be vacuuming the two indexes on the
> table. I'm not sure what's going on, but if you could make any
> suggestions, I'd appreciate it.

While vacuum will delete dead index entries, it doesn't collapse out
empty pages in the indexes. This causes indexes to never get smaller
and, in cases where the range of the index continually expands, causes
indexes to grow larger and larger. The generally accepted way to fix
this is via reindexing. (note this was not fixed in 7.3)

Robert Treat
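
In concrete terms, the reindexing Robert recommends might look like the sketch below. The index names come from Doug's \d output earlier in the thread, but the column list in the CREATE INDEX variant is only a guess from the index name, not something stated in the thread:

```sql
-- Rebuild the bloated indexes in place.  REINDEX takes an
-- exclusive lock on the table for the duration, so run it
-- during a quiet period.
REINDEX INDEX audit_type_target_idx;
REINDEX INDEX audit_type_var1_idx;

-- Alternative sketch: build a fresh index, then drop the old one.
-- (The column list here is guessed from the index name.)
CREATE INDEX audit_type_target_idx2 ON audit_log (type, target);
DROP INDEX audit_type_target_idx;
```

On releases of this vintage there is no CREATE INDEX CONCURRENTLY, so either approach blocks writers to audit_log while it runs.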

#3 Doug Fields
dfields-pg-general@pexicom.com
In reply to: Robert Treat (#2)
Re: Problem with VACUUM after very large delete?

At 06:12 PM 12/17/2002, Robert Treat wrote:

> While vacuum will delete dead index entries, it doesn't collapse out
> empty pages in the indexes. This causes indexes to never get smaller
> and, in cases where the range of the index continually expands, causes
> indexes to grow larger and larger. The generally accepted way to fix
> this is via reindexing. (note this was not fixed in 7.3)

Thanks Robert. However, if I can quote from my previous message:

pexicast_lg=# VACUUM VERBOSE audit_log;
NOTICE: --Relation audit_log--
NOTICE: Pages 2083605: Changed 2961, Empty 0; Tup 105773248: Vac 0, Keep 31106432, UnUsed 2.
Total CPU 89.49s/13.67u sec elapsed 1101.17 sec.
VACUUM

You'll notice that there are no "deleted" tuples listed. However, there are
"Keep" tuples listed. I'm not sure what those are - but that's almost
exactly the size of what I attempted to delete earlier. What are these
"Keep" tuples?

Also - whenever I vacuum my other tables, it does vacuum the indexes, with
additional NOTICE statements. In this case, it does not.

What's up?

Thanks,

Doug

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Doug Fields (#1)
Re: Problem with VACUUM after very large delete?

Doug Fields <dfields-pg-general@pexicom.com> writes:

> Earlier today I deleted about 31 million records from one of our tables
> with a very complex query which required a full table scan. This query took
> quite a few hours to run (in fact, overnight).
>
> Why? The VACUUM doesn't seem to be vacuuming the two indexes on the table.

Have you got any open transactions hanging about? VACUUM probably
thinks it can't remove the tuples yet because they're still potentially
visible to somebody under MVCC rules.

regards, tom lane
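
A quick way to check for what Tom suggests is to look at pg_stat_activity from psql. The view exists in 7.2, though its columns differ from modern releases; the commented variant below uses the modern column names and is shown only for reference:

```sql
-- Look for backends sitting inside an open transaction.
-- On 7.2-era servers the view exposes a "current_query" column;
-- in later releases idle-in-transaction backends typically show
-- up there as "<IDLE> in transaction".
SELECT * FROM pg_stat_activity;

-- Modern equivalent (9.2 and later), for reference:
-- SELECT pid, usename, xact_start, state, query
--   FROM pg_stat_activity
--  WHERE state = 'idle in transaction';
```

If such a session exists, VACUUM cannot reclaim the deleted tuples until that transaction commits or rolls back, which is exactly the "Keep" count in the VERBOSE output above.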