Re: Index tuple killing code committed

Started by Hannu Krosing · almost 24 years ago · 1 message · hackers
#1 Hannu Krosing
hannu@tm.ee

On Sat, 2002-05-25 at 02:38, Joe Conway wrote:

Tom Lane wrote:

The remaining degradation is actually in seqscan performance, not
indexscan --- unless one uses a much larger -s setting, the planner will
think it ought to use seqscans for updating the "branches" and "tellers"
tables, since those nominally have just a few rows; and there's no way
to avoid scanning lots of dead tuples in a seqscan. Forcing indexscans
helps some in the former CVS tip:

This may qualify as a "way out there" idea, or more trouble than it's
worth, but what about a table option which provides a bitmap index of
tuple status -- i.e. tuple dead t/f. If available, a seqscan in between
vacuums could maybe gain some of the same efficiency.

I guess this would only be useful if it were a bitmap of dead _pages_, not
tuples (page reading is the most expensive part, plus there is no way to
know how many tuples fit on a page),

but for the worst cases (a small table with lots of updates) this could be
a great thing that postpones fixing the optimiser to account for dead
tuples.

One 8K page can hold bits for 8192*8 = 65536 pages = 512 Mbytes, and if a
seqscan could skip the first 500 of them it would definitely be worth it ;)

This is the first time I have ever seen repeated pgbench runs without
substantial performance degradation. Not a bad result for a Friday
afternoon...

Really good news!

-----------
Hannu