Re: [GENERAL] Large database

Started by Bruce Momjian, over 26 years ago. 2 messages. List: hackers
#1 Bruce Momjian
bruce@momjian.us

> > > i filed a bug report at one time noting that:
> > > "ALTER TABLE tbname RENAME TO tbname_new;"
> > > was not renaming all of the extents.
> > >
> > > do you know if this has been fixed?
> >
> > Yes, in 6.5.*.
>
> cool.
>
> if i'm annoying you, tell me to go away.
>
> do you know why vacuum can consume an enormous amount of core when cleaning
> a large table?
>
> i've actually had to add a gig of swap to our server so that vacuum can
> actually finish on some of our tables.
>
> sometimes the vacuum won't even do that, and i need to:
>
> pg_dump -t tb -s db > tb.dmp
> psql -c "copy tb to stdout using delimiters ':';" db | gzip > tb.dat.gz
> psql -c "drop table tb;" db
> psql -e db < tb.dmp
> zcat tb.dat.gz | psql -c "copy tb from stdin using delimiters ':';" db
>
> very painful (taking several hours).
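For what it's worth, the five commands above can be wrapped in a single shell function so the whole cycle is one invocation. A minimal sketch, not from the original post: the function name `reload_table` and its arguments are illustrative, while the pg_dump/psql/COPY invocations mirror the commands quoted above.

```shell
# Sketch: the dump/drop/reload workaround as one function.
# Usage: reload_table <database> <table>
# The ':' delimiter matches the quoted commands; adjust for your data.
reload_table() {
    db=$1
    tb=$2
    pg_dump -t "$tb" -s "$db" > "$tb.dmp"                    # schema only
    psql -c "copy $tb to stdout using delimiters ':';" "$db" |
        gzip > "$tb.dat.gz"                                  # data, compressed
    psql -c "drop table $tb;" "$db"                          # drop the bloated table
    psql -e "$db" < "$tb.dmp"                                # recreate it from the schema dump
    zcat "$tb.dat.gz" |
        psql -c "copy $tb from stdin using delimiters ':';" "$db"   # reload the rows
}
```

Called as `reload_table db tb`; it does not make the reload any faster, it only scripts the steps so they run unattended.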

Can someone comment on the high memory usage of vacuum?

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#1)
Re: [HACKERS] Re: [GENERAL] Large database

Bruce Momjian <maillist@candle.pha.pa.us> writes:

> do you know why vacuum can consume an enormous amount of core when cleaning
> a large table?
>
> Can someone comment on the high memory usage of vacuum?

First thing that comes to mind is memory leaks for palloc'd data
types...

What exactly is the declaration of the table that's causing the problem?

regards, tom lane