Re: [GENERAL] Large database
I filed a bug report at one time noting that:
"ALTER TABLE tbname RENAME TO tbname_new;"
was not renaming all of the extents. Do you know if this has been fixed?
Yes, in 6.5.*.
Cool.
If I'm annoying you, tell me to go away.
Do you know why vacuum can consume an enormous amount of core when cleaning
a large table? I've actually had to add a gig of swap to our server so that
vacuum can finish on some of our tables. Sometimes even that isn't enough,
and I need to:
pg_dump -t tb -s db > tb.dmp
psql -c "copy tb to stdout using delimiters ':';" db | gzip > tb.dat.gz
psql -c "drop table tb;" db
psql -e db < tb.dmp
zcat tb.dat.gz | psql -c "copy tb from stdin using delimiters ':';" db
Very painful (taking several hours).
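For convenience, the five steps above can be collected into one script. This is a minimal sketch written as a dry run: it only prints the commands, and "tb" and "db" are stand-ins for the real table and database names. Pipe the output to sh to actually execute it.

```shell
#!/bin/sh
# Dry-run sketch of the dump/reload workaround: dump the schema,
# save the data compressed, drop and recreate the table, reload.
# TB and DB are placeholder names; adjust before running for real.
TB=tb
DB=db

PLAN="pg_dump -t $TB -s $DB > $TB.dmp
psql -c \"copy $TB to stdout using delimiters ':';\" $DB | gzip > $TB.dat.gz
psql -c \"drop table $TB;\" $DB
psql -e $DB < $TB.dmp
zcat $TB.dat.gz | psql -c \"copy $TB from stdin using delimiters ':';\" $DB"

# Print the plan; run `sh script.sh | sh` to execute the commands.
echo "$PLAN"
```

Note that any indexes, triggers, and permissions live in the schema dump (pg_dump -s), so the table comes back intact, but the table is unavailable between the drop and the reload.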
Can someone comment on the high memory usage of vacuum?
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> do you know why vacuum can consume an enormous amount of core when cleaning
> a large table?
> Can someone comment on the high memory usage of vacuum?
First thing that comes to mind is memory leaks for palloc'd data
types...
What exactly is the declaration of the table that's causing the problem?
regards, tom lane