Improve dump and restore time
Hello,
I am studying how to migrate our production database, which is running
under PG 8.2, to PG 8.3.
I have read that the only safe solution is to perform a dump and restore.
Our DB is around 6 GB.
To estimate the migration duration, I performed it on a less powerful
machine than the one we have in production.
Unfortunately it took a very long time (around 27 hours!).
Are there best practices to reduce the migration time?
Some questions I am trying to answer:
- Is it better to use a compressed dump or not?
- Should I use a trick like reading the dump file over the network while
restoring on the machine, to reduce concurrent I/O, and so on?
Any advice is welcome.
Thanks!
Thanks,
in fact I also expected a couple of hours and was surprised by this
result.
I will have a look at the hardware (unfortunately not before next week now).
Thanks again.
Rasper, Franz wrote:
Hi,
Normally it should be done in between 1 and 4 hours.
The fastest version is maybe: pg_dump ... | psql ...
27 hours is, in my opinion, a little too much.
What is your hardware?
Greetings,
-Franz
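Franz's pipe suggestion might look like the sketch below. The host and database names are placeholders, not anything from this thread, and this assumes both servers are reachable from one machine; running the newer (8.3) pg_dump against the old server is the usual recommendation, since newer pg_dump versions know how to read older servers.

```shell
# Hypothetical sketch of the dump-and-pipe approach: no intermediate
# dump file touches disk, and the restore starts immediately.
# "oldhost", "newhost" and "mydb" are made-up names.
createdb -h newhost mydb                         # empty 8.3 target database
pg_dump -h oldhost mydb | psql -h newhost mydb   # stream schema and data across
```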
-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Pascal Cohen
Sent: Friday, 10 October 2008 17:17
To: pgsql-general@postgresql.org
Subject: [Spam] [GENERAL] Improve dump and restore time
Hi,
Pascal Cohen wrote:
Unfortunately it took a very long time (around 27 hours!).
27 hours feels really strange for only 6GB. I'm sure there's something
wrong here.
General advice: if you have big indexes, you should probably raise
maintenance_work_mem. You should also take a look at your
checkpoint_segments setting.
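As an illustration, the two settings above might be bumped temporarily in postgresql.conf on the restore target. The values here are assumptions to tune for your own RAM and disks, not recommendations from this thread:

```
# postgresql.conf - temporary settings for the duration of the restore
maintenance_work_mem = 512MB  # more memory for the CREATE INDEX steps
checkpoint_segments = 32      # fewer, larger checkpoints during the bulk load
```

Both can be reverted afterwards; checkpoint_segments only needs a config reload, and maintenance_work_mem can even be set per session.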
Are there best practices to reduce the migration time?
Using Slony is one. I'm not sure I would go this way for a 6 GB database,
but it would give you very little downtime.
Some questions I am trying to answer:
- Is it better to use a compressed dump or not?
You'll lose time doing a compressed dump.
- Should I use a trick like reading the dump file over the network while
restoring on the machine, to reduce concurrent I/O, and so on?
I would find this a better way than the compressed dump.
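As a sketch of that idea (the hostname and path below are made up): stream an uncompressed plain-text dump from another machine over ssh, so the restore's writes don't compete with reads of the dump file on the same disks.

```shell
# Hypothetical: the plain-text dump sits on "dumphost"; the restore
# reads it over the network instead of from the local disks.
ssh dumphost cat /backups/mydb.sql | psql mydb
```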
Regards.
--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com
On Fri, 10 Oct 2008, Pascal Cohen wrote:
Are there best practices to reduce the migration time?
There are a number of resources in this area listed at
http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores
I wanted to have an expectation of the migration duration and performed it on
a less powerful machine than the one we have in Production.
Note that if your production server has any sort of decent disk controller
in it, but the test machine doesn't, you can end up with results that
don't extrapolate very well. That's particularly true if you don't follow
the standard good practice on the restore (like leaving checkpoint_segments
at its default value). As mentioned in the above, turning off synchronous
commit can help a lot there on some systems.
If you do any restore tests again, try and look at what the bottleneck is
on the system using something like vmstat, and make sure you check the
database log files (that will tell you if the checkpoint stuff is setup
reasonably or not). It's really hard to say whether any of the things you
were asking about will be helpful or not without knowing what the limiting
factor on your system is. If you're CPU limited for example, you'd want
to stay away from compression; if I/O limited that might make sense.
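A minimal way to do that check while the restore runs (assuming a Linux box with the procps vmstat):

```shell
# Print system activity once a second, five samples, during the restore.
# A high "wa" (I/O wait) column points at a disk bottleneck; high "us"/"sy"
# with little "wa" points at CPU (so avoid compression in that case).
vmstat 1 5
```

Watching the database log at the same time will show "checkpoints are occurring too frequently" warnings if checkpoint_segments is set too low.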
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD