Horrible pg_restore performance, please help
Hi,
We're running Postgres 7.1.3 on a Red Hat Linux system with an 800 MHz CPU,
1 GB RAM, and 1.5 GB swap. This is obviously not a screamer of a box, but
it's not terrible.
We're trying to import a 32 MB dump file and it's taking forever. We've
tried multiple config changes, but things don't seem to speed up, or
rather, ever finish. After about 10 minutes, the CPU is 100% consumed by
system time, with pg_restore being the process on top. pg_restore consumes
about 1200 MB of memory in total.
Our latest run has been going for almost 6 hours and is still not
complete. This just seems a little insane. I've read through Google
Groups and have tried changing shared memory parameters and Postgres
parameters. Nothing seems to help.
postgresql.conf:
query=2
syslog=0
fsync=false
shared_buffers=5000
max_connections=64
debug_level=0
verbose=0
/proc/sys/kernel/shmall ---> 134217728
/proc/sys/kernel/shmmax ---> 134217728
Cmd:
pg_restore -d <db name> -O -Sc <dump.file>
Any advice would be much appreciated.
thanks..
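[One way to see what the restore is actually doing, sketched under the assumption that the dump is a pg_restore-readable archive; `-l` lists the archive's table of contents without touching the database, and `-v` prints each item as it is restored. The database and file names below are placeholders.]

```shell
# List the archive's contents without restoring anything,
# to see whether it is dominated by data, indexes, or large objects
pg_restore -l dump.file

# Restore with per-item progress messages, logging output so we can
# see exactly which item the restore is stuck on
pg_restore -v -d mydb -O dump.file 2>&1 | tee restore.log
```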
brice <yamwak@yahoo.com> writes:
> We're running Postgres 7.1.3 on a Red Hat Linux system with an 800 MHz CPU,
> 1 GB RAM, 1.5 GB swap. This is obviously not a screamer of a box, but
> it's not terrible.
>
> We're trying to import a 32 MB dump file and it's taking forever. We've
> tried multiple config changes, but things don't seem to speed up, or
> rather, ever finish. After about 10 minutes, the CPU is 100% consumed by
> system time, with pg_restore being the process on top. pg_restore consumes
> about 1200 MB of memory in total.
pg_restore, not the backend?
Some digging in the CVS logs reveals past fixes for memory leaks and
other problems in pg_restore, particularly when dealing with lots of
large objects. If you have LOs in your database then an update to 7.2
or 7.3 should improve life.
regards, tom lane
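[A minimal check of the large-object theory, assuming the source database is still reachable: the pg_largeobject system catalog holds all large-object data, so counting distinct object IDs shows whether LOs are present at all. The database name is a placeholder.]

```shell
# Count the distinct large objects stored in the source database;
# a nonzero result means the dump contains LOs, which hit the
# pg_restore memory-leak fixes Tom mentions
psql -d sourcedb -c "SELECT count(DISTINCT loid) FROM pg_largeobject;"
```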
Hi Brice,
That's very wrong...
While changing conf settings will speed things up, even untuned settings
shouldn't make loading a 32 MB dump file take anywhere near this long. Have
you tried restarting PG with logging enabled and then piping the dump file
through psql? You should find that you get a lot more direct feedback from
psql. This assumes that your dump is plain text, though.
Rgds,
Jason
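[Jason's suggestion, sketched with placeholder database and file names: feeding a plain-text dump to psql with `-e` echoes each statement as it runs, so it is immediately visible which statement the load is stuck on.]

```shell
# Run a plain-SQL dump through psql; -e echoes every statement,
# and tee captures both progress and errors for later inspection
psql -d mydb -e -f dump.sql 2>&1 | tee restore.log
```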
On Tue, 15 Jul 2003 07:07 pm, brice wrote: