Hot Backup
Hello to all the Doers of Postgres!!!
Last time I went through the forums, people spoke highly about 7.3 and its capability to do hot backups. My problem is that if the database goes down and I lose my main data store, then I will lose all transactions back to the time I did the pg_dump.
Other databases (e.g., Oracle) solve this by retaining their archive logs in some physically separate storage. So, when you lose your data, you can restore the data from backup, then apply your archive log, and avoid losing any committed transactions.
Postgresql has been lacking this all along. I've installed postgres 7.3b2 and still don't see any archives flushed to any other place. Please let me know how the hot backup procedure is implemented in the current 7.3 beta(2) release.
Thanks.
"Sandeep Chadha" <sandeep@newnetco.com> writes:
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other
place. Please let me know how is hot backup procedure implemented in
current 7.3 beta(2) release.
AFAIK no such hot backup feature has been implemented for 7.3 -- you
appear to have been misinformed.
That said, I agree that would be a good feature to have.
Cheers,
Neil
--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
Hmmm. Then are there any new enhancements as far as backups are concerned between the current 7.2.x and 7.3.x?
Like, can we do a tar while the database is up and running, or some other feature?
Thanks a bunch in advance.
-----Original Message-----
From: Neil Conway [mailto:neilc@samurai.com]
Sent: Monday, October 07, 2002 1:48 PM
To: Sandeep Chadha
Cc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general
Subject: Re: [HACKERS] Hot Backup
"Sandeep Chadha" <sandeep@newnetco.com> writes:
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other
place. Please let me know how is hot backup procedure implemented in
current 7.3 beta(2) release.
AFAIK no such hot backup feature has been implemented for 7.3 -- you
appear to have been misinformed.
That said, I agree that would be a good feature to have.
Cheers,
Neil
--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
On 7 Oct 2002 at 13:48, Neil Conway wrote:
"Sandeep Chadha" <sandeep@newnetco.com> writes:
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other
place. Please let me know how is hot backup procedure implemented in
current 7.3 beta(2) release.
AFAIK no such hot backup feature has been implemented for 7.3 -- you
appear to have been misinformed.
Is replication an answer to hot backup?
Bye
Shridhar
--
ink, n.: A villainous compound of tannogallate of iron, gum-arabic, and water,
chiefly used to facilitate the infection of idiocy and promote intellectual
crime. -- H.L. Mencken
MvO> http://www.postgresql.org/idocs/index.php?wal.html
- The URL you refer to is the ch11 I was referring to. It seems that this chapter is not as easily understandable as it should be...
It says that with WAL, "pg is able to guarantee consistency in the case of a crash".
OK, but that is only about /consistency/.
From what I understand, it just says that in the case of a core dump of a server process (improbable), a power cut (probable), or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index.
Cool, but not enough.
As Timur pointed out, I was referring to a disk crash or total loss of a server.
In this case, you lose up to 1 day of data.
MvO> I've never lost any data with postgres, even if it's crashed, even without
MvO> WAL.
You're a lucky guy, but Pg may not be the weakest part of your information system.
In my short DBA life (3 years, Oracle & MS SQL), I have already seen two cases where a WHOLE RAID 5 array broke and all the data was lost.
One time it was the controller, which wrote garbage on the disks; the other time, a power failure/spike/whatever crashed all 8 disks of the array.
Both times, the incremental backup method reduced the data loss to almost nothing.
There is a need in "incremental" backup, which backs up only those
transactions which has been fulfilled after last "full dump" or last
"incremental dump". These backups should be done quite painlessly -
just copy some part of WAL, and should be small enough (compared to
full dump), so they can be done each hour or even more frequently..
I hope sometime PostgreSQL will support that. :-)
So do I.
I think this would be on top of my "missing features" list.
As someone said, Replication may be a way to reduce the risks.
E.D.
-------------------------------------------------------------------------------
Erwan DUROSELLE // SEAFRANCE DSI
Responsable Bases de Données // Databases Manager
Tel: +33 (0)1 55 31 59 70 // Fax: +33 (0)1 55 31 85 28
email: eduroselle@seafrance.fr
-------------------------------------------------------------------------------
"Timur V. Irmatov" <itvthor@sdf.lonestar.org> 08/10/2002 13:00 >>>
Martijn!
Tuesday, October 08, 2002, 3:45:13 PM, you wrote:
- No, PostgreSQL does NOT provide a way to restore a database up to the
last committed transaction, with a reapply of the WAL, as Oracle or SQL
Server ( and others, I guess) do. That would be a VERY good feature. See
Administrator's guide ch11
MvO> Umm, I thought the whole point of WAL was that if the database crashed, the
MvO> WAL would provide the info to replay to the last committed transaction.
MvO> ... because we know that in the event of a crash we will be able to recover
MvO> the database using the log: ...
MvO> These docs seem to corroborate this.
So, with Pg, if you backup your db every night with pg_dump, and your
server crashes during the day, you will lose up to one day of work.
MvO> I've never lost any data with postgres, even if it's crashed, even without
MvO> WAL.
Suppose you made your nightly backup, and then after a day of work
the building where your server is located disappears in flames..
That's it - you lost one day of work (that is, if your dumps were
stored outside that building; otherwise you lost everything)..
There is a need for "incremental" backup, which backs up only those
transactions which have been completed after the last "full dump" or last
"incremental dump". These backups should be done quite painlessly -
just copy some part of the WAL - and should be small enough (compared to a
full dump) that they can be done each hour or even more frequently..
I hope sometime PostgreSQL will support that. :-)
Sincerely Yours,
Timur
mailto:itvthor@sdf.lonestar.org
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
On 8 Oct 2002 at 14:17, Erwan DUROSELLE wrote:
MvO> http://www.postgresql.org/idocs/index.php?wal.html
- The URL you refer to is the ch11 I was refering to. It seems that this chapter is not as easily understandable as it should...
It says that with WAL, "pg is able to garantee consistency in the case of a crash".
OK, but I think is about /consistency/.
For what I understand, it just says that in the case of a core dump of a server process (improbable) or a power cut (probable) or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index.
Cool, but not enough.
As Timur pointed out, I was refering to a disk crash or total loss of a server.
In this case, you loose up to 1 day of data.
There is a need in "incremental" backup, which backs up only those
transactions which has been fulfilled after last "full dump" or last
"incremental dump". These backups should be done quite painlessly -
just copy some part of WAL, and should be small enough (compared to
full dump), so they can be done each hour or even more frequently..
I hope sometime PostgreSQL will support that. :-)
Well, there are replication solutions which rsync WAL files after they are
rotated, so two database instances stay in sync with each other at a difference
of one WAL file. If you are interested I can post the PDF.
I guess that takes care of the scenario you plan to avoid..
Bye
Shridhar
--
Truthful, adj.: Dumb and illiterate. -- Ambrose Bierce, "The Devil's
Dictionary"
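The copy-the-rotated-WAL idea described above can be approximated by hand, since PostgreSQL 7.x has no built-in archiving hook. Here is a minimal local sketch; `archive_wal` and all paths are made-up names for illustration (this is not a PostgreSQL tool), and it assumes the newest file in pg_xlog is the segment still being written:

```shell
#!/bin/sh
# Hedged sketch: copy completed WAL segments out of pg_xlog to separate
# storage. PostgreSQL 7.x has no archive_command hook, so this is a manual
# approximation; archive_wal and the directories are hypothetical.
archive_wal() {
    src="$1"    # e.g. $PGDATA/pg_xlog
    dest="$2"   # e.g. an NFS mount, or swap cp for rsync to a standby
    mkdir -p "$dest"
    # Skip the newest segment: the server is presumably still writing it.
    current=$(ls -t "$src" | head -n 1)
    for seg in "$src"/*; do
        [ "$(basename "$seg")" = "$current" ] && continue
        cp "$seg" "$dest/"
    done
}
```

Run hourly from cron (with cp swapped for an rsync to the standby host), the off-box copy stays at most one WAL segment behind, matching the "difference of one WAL file" mentioned above. Note this says nothing about replaying those segments into a restored cluster, which is exactly the PITR gap this thread is about.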
On Tue, 2002-10-08 at 08:58, Shridhar Daithankar wrote:
On 8 Oct 2002 at 14:17, Erwan DUROSELLE wrote:
MvO> http://www.postgresql.org/idocs/index.php?wal.html
- The URL you refer to is the ch11 I was refering to. It seems that this chapter is not as easily understandable as it should...
It says that with WAL, "pg is able to garantee consistency in the case of a crash".
OK, but I think is about /consistency/.
For what I understand, it just says that in the case of a core dump of a server process (improbable) or a power cut (probable) or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index.
Cool, but not enough.
As Timur pointed out, I was refering to a disk crash or total loss of a server.
In this case, you loose up to 1 day of data.
Is it me, or do doomsday scenarios sometimes seem a little silly? I'd
like to ask just where you are storing your "incremental backups" with
Oracle/M$ SQL. If it's on the same drive, then when your drive craps
out you've lost the incremental backups as well. If you're putting them
on a different drive (you can do that with the WAL), you'd still have the
problem that if the building went up in smoke you'd lose that
incremental backup. Unless you are doing "incremental backups" to a
computer in another physical location, you still fail all of your
scenarios.
There is a need in "incremental" backup, which backs up only those
transactions which has been fulfilled after last "full dump" or last
"incremental dump". These backups should be done quite painlessly -
just copy some part of WAL, and should be small enough (compared to
full dump), so they can be done each hour or even more frequently..
I hope sometime PostgreSQL will support that. :-)
Well, there are replication solutions which rsyncs WAL files after they are
rotated so two database instances are upto sync with each other at a difference
of one WAL file. If you are interested I can post the pdf.
I guess that takes care of scenario you plan to avoid..
This type of scenario sounds as good as the above-mentioned methods for
Oracle/M$ server. Could you post your PDF? Seems like it might be worth
adding to the techdocs site.
Robert Treat
On 8 Oct 2002 at 9:40, Robert Treat wrote:
Is it me or do doomsdays scenarios sometimes seem a little silly? I'd
like to ask just where are you storing your "incremental backups" with
Oracle/m$ sql ?? If it's on the same drive, then when you drive craps
out you've lost the incremental backups as well. Are you putting them
on a different drive (you can do that with the WAL) you'd still have the
problem that if the building went up in smoke you'd lose that
incremental backup. Unless you are doing "incremental backups" to a
computer in another physical location, you still fail all of your
scenarios.
Well, all I can say is that having a synced and replicated database system is good,
either from a load-sharing point of view or from a failover point of view.
Forget backup. If you are down to restoring from backup, most of the time it
doesn't matter whether you are restoring from a dump or cycling through thousands
of WAL files..
This type of scenario sounds as good as the above mentioned methods for
oracle/m$ server. Could you post your pdf? Seems like it might be worth
adding to the techdocs site.
Well, I found it googling around. See if this helps... I have posted this before,
dunno to which list. This cross-posting habit of PG lists made me lose track of
where things originated and where they ended.. Not that it's bad..
HTH
Bye
Shridhar
--
like: When being alive at the same time is a wonderful coincidence.
Attachments:
johnson_darren.zip (application/zip)
Hi Erwan,
Erwan DUROSELLE wrote:
<snip>
As someone said, Replication may be a way to reduce the risks.
Yes, it can, but it all depends on how much the data is worth, what kind
of load happens, etc.
Something as simple as a master->slave replication setup will do,
because the master would generally be the beefy box doing processing,
and the slave database server only receives changes, etc.
Useful for lots of circumstances, although not the only solution.
:-)
Regards and best wishes,
Justin Clift
E.D.
-------------------------------------------------------------------------------
Erwan DUROSELLE // SEAFRANCE DSI
Responsable Bases de Données // Databases Manager
Tel: +33 (0)1 55 31 59 70 // Fax: +33 (0)1 55 31 85 28
email: eduroselle@seafrance.fr
-------------------------------------------------------------------------------
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
Shridhar Daithankar wrote:
On 7 Oct 2002 at 13:48, Neil Conway wrote:
"Sandeep Chadha" <sandeep@newnetco.com> writes:
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other
place. Please let me know how is hot backup procedure implemented in
current 7.3 beta(2) release.
AFAIK no such hot backup feature has been implemented for 7.3 -- you
appear to have been misinformed.
Is replication an answer to hot backup?
We already allow hot backups using pg_dump. If you mean point-in-time
recovery, we have a patch for that ready for 7.4.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
I'd say yes, replication can solve a lot of issues, but is there a way to do replication in Postgres (active-active or active-passive)?
-----Original Message-----
From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]
Sent: Tuesday, October 08, 2002 8:27 PM
To: shridhar_daithankar@persistent.co.in
Cc: pgsql-hackers@postgresql.org; pgsql-general
Subject: Re: [GENERAL] [HACKERS] Hot Backup
Shridhar Daithankar wrote:
On 7 Oct 2002 at 13:48, Neil Conway wrote:
"Sandeep Chadha" <sandeep@newnetco.com> writes:
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other
place. Please let me know how is hot backup procedure implemented in
current 7.3 beta(2) release.
AFAIK no such hot backup feature has been implemented for 7.3 -- you
appear to have been misinformed.
Is replication an answer to hot backup?
We already allow hot backups using pg_dump. If you mean point-in-time
recovery, we have a patch for that ready for 7.4.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Hi Sandeep. What you were calling Hot Backup is really called Point in
Time Recovery (PITR). Hot Backup means making a complete backup of the
database while it is running, something Postgresql has supported for a
very long time.
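To make the distinction concrete: a hot backup in this sense is just pg_dump run against the live server. A minimal sketch, where the database name mydb, the /backups directory, and the dump_name helper are all made-up for illustration:

```shell
#!/bin/sh
# Minimal sketch of a hot backup as described above: pg_dump runs against
# the live, running server, so no downtime is needed. "mydb", "/backups",
# and the dump_name helper are assumptions, not PostgreSQL facilities.
dump_name() {
    # Build a dated file name like /backups/mydb-20021009.sql
    echo "$2/$1-$(date +%Y%m%d).sql"
}
# Typical nightly use, e.g. from cron:
#   pg_dump mydb > "$(dump_name mydb /backups)"
```

Restoring such a dump recovers everything up to the moment the dump started, which is the limitation the rest of the thread (PITR) is about.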
On Mon, 7 Oct 2002, Sandeep Chadha wrote:
Hello to all the Doers of Postgres!!!
Last time I went through forums, people spoke highly about 7.3 and its
capability to do hot backups. My problem is if the database goes down
and I lose my main data store, then I will lose all transactions back
to the time I did the pg_dump.
Let's make it clear that this kind of failure is EXTREMELY rare on real
database servers since they almost ALL run their data sets on RAID arrays.
While it is possible to lose >1 drive at the same time and all your
database with it, it is probably more likely to have a bad memory chip corrupt
your data silently, or a bad query delete data it shouldn't.
That said, there IS work ongoing to provide this facility for Postgresql,
but I would much rather have work done on making large complex queries run
faster, or on fixing the little issues where foreign keys cause deadlocks.
Other databases (i e Oracle) solves this by retaining their archive
logs in some physically separate storage. So, when you lose your data,
you can restore the data from back-up, and then apply your archive log,
and avoid losing any committed transactions.
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other place.
Please let me know how is hot backup procedure implemented in current
7.3 beta(2) release.
Again, you'll get a better response to your questions if you call it "point
in time recovery" or pitr. Hot backup is the wrong word, and something
Postgresql DOES have.
It also supports WALs, which stands for Write ahead logs. These files
store what the database is about to do before it does it. Should the
database crash with transactions pending, the server will come back up and
process the pending transactions that are in the WAL files, ensuring the
integrity of your database.
Point in Time recovery is very nice, but it's the last step in many to
ensure a stable, coherent database, and will probably be in 7.4 or
somewhere around there. If you're running in a RAID array, then the loss
of your datastore should be a very remote possibility.
I'd have to agree with most of what you said. I still think most crashes occur due to data corruption, which can only be recovered from by using a good backup.
Anyway, my problem is that I have a 5 gig database. I run a cron job every hour which runs pg_dump, which takes over 30 minutes to run and degrades the db performance. I was hoping for something which can solve my problem so that I don't have to take a backup every hour. Is there a plan to implement an incremental backup technique for pg_dump, or is it going to be the same for the next one or two releases?
Thanks much for your time.
Sandeep.
-----Original Message-----
From: scott.marlowe [mailto:scott.marlowe@ihs.com]
Sent: Wednesday, October 09, 2002 12:19 PM
To: Sandeep Chadha
Cc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general
Subject: [GENERAL] Point in Time Recovery WAS: Hot Backup
Hi Sandeep. What you were calling Hot Backup is really called Point in
Time Recovery (PITR). Hot Backup means making a complete backup of the
database while it is running, something Postgresql has supported for a
very long time.
On Mon, 7 Oct 2002, Sandeep Chadha wrote:
Hello to all the Doers of Postgres!!!
Last time I went through forums, people spoke highly about 7.3 and its
capability to do hot backups. My problem is if the database goes down
and I lose my main data store, then I will lose all transactions back
to the time I did the pg_dump.
Let's make it clear that this kind of failure is EXTREMELY rare on real
database servers since they almost ALL run their data sets on RAID arrays.
While it is possible to lost >1 drive at the same time and all your
database, it is probably more likely to have a bad memory chip corrupt
your data silently, or a bad query delete data it shouldn't.
That said, there IS work ongoing to provide this facility for Postgresql,
but I would much rather have work done on making large complex queries run
faster, or fix the little issues with foreign keys cause deadlocks.
Other databases (i e Oracle) solves this by retaining their archive
logs in some physically separate storage. So, when you lose your data,
you can restore the data from back-up, and then apply your archive log,
and avoid losing any committed transactions.
Postgresql has been lacking this all along. I've installed postgres
7.3b2 and still don't see any archive's flushed to any other place.
Please let me know how is hot backup procedure implemented in current
7.3 beta(2) release.
Again, you'll get better response to your questions if you call it "point
in time recovery" or pitr. Hot backup is the wrong word, and something
Postgresql DOES have.
It also supports WALs, which stands for Write ahead logs. These files
store what the database is about to do before it does it. Should the
database crash with transactions pending, the server will come back up and
process the pending transactions that are in the WAL files, ensuring the
integrity of your database.
Point in Time recovery is very nice, but it's the last step in many to
ensure a stable, coherent database, and will probably be in 7.4 or
somewhere around there. If you're running in a RAID array, then the loss
of your datastore should be a very remote possibility.
On Wed, 2002-10-09 at 12:46, Sandeep Chadha wrote:
I'd have agree on most of what you said. I still think most crashes occur due to data corruption which can only be recovered by using a good backup.
Anyways my problem is I have a 5 gig database. I run a cron job every hour which runs pg_dump which takes over 30 minutes to run and degrades the db performance. I was hoping for something which can solve my problem and then I don't have to take backup every hour. Is there a plan on implementing incremental backup technique for pg_dump or Is it going to be same for next one or two releases.
Thanks much For you time
Oh, if that's your problem then use asynchronous replication instead. It
doesn't remove the slow time, but will distribute the slowness across
every transaction rather than all at once (via creation of replication
logs). Things won't degrade much during the snapshot transfer itself,
as there isn't very much work involved (differences only).
Now periodically backup the secondary box. Needs diskspace, but not
very much power otherwise.
#!/bin/sh
while true
do
    asynchreplicate.sh
    pg_dumpall > "`date`.bak"
done
--
Rod Taylor
Rod Taylor wrote:
<snip>
Oh, if that's your problem then use asynchronous replication instead.
For specific info, the contrib/rserv package does master->slave
asynchronous replication as Rod is suggesting. From memory it was
having troubles working with PostgreSQL 7.2.x, but someone recently
submitted patches that make it work.
There's a HOW-TO guide that a community member wrote on setting up rserv
with PostgreSQL 7.0.3, although it should be practically identical for
PostgreSQL 7.2.x (when rserv is patched to make it work).
http://techdocs.postgresql.org/techdocs/settinguprserv.php
That could be the basis for your async replication solution.
Hope that helps.
:-)
Regards and best wishes,
Justin Clift
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
On Wed, 2002-10-09 at 14:04, Justin Clift wrote:
Rod Taylor wrote:
<snip>
Oh, if thats your problem then use asynchronous replication instead.
For specific info, the contrib/rserv package does master->slave
Thanks. I was having a heck of a time remembering what it was called or
even where the DBA found it.
--
Rod Taylor
On Wed, Oct 09, 2002 at 09:42:56AM -0400, Sandeep Chadha wrote:
I'd say yes replication can solve lot of issues, but is there a way
to do replication in postgres(active-active or active-passive)
Yes. Check out the rserv code in contrib/, the (?) dbmirror code in
contrib/, or contact PostgreSQL, Inc for a commercial version of the
rserv code.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
On Tue, Oct 08, 2002 at 09:40:44AM -0400, Robert Treat wrote:
Is it me or do doomsdays scenarios sometimes seem a little silly? I'd
Not if your contract requires five-nines reliability and no more
than 180 minutes of downtime _ever_. Is five-nines realistic? For
most purposes, probably not, according to recent pronouncements (see,
e.g. <http://www.bcr.com/bcrmag/2002/05/p22.asp>). But it's in
lots of contracts anyway.
like to ask just where are you storing your "incremental backups" with
Oracle/m$ sql ?? If it's on the same drive, then when you drive craps
The more or less standard way of doing this is to stream the
PITR-required stuff to another device on another controller -- lots
of people stream to tape. People have been doing this for ages,
partly because disks used to be (a) expensive and (b) unreliable.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
I think you missed the part of the thread where the nuclear bomb hit the
data center. hmm... maybe it wasn't a nuclear bomb, but it was getting
there. :-)
BTW - I believe we'll have real PITR in 7.4, about 6 months away.
Robert Treat
On Tue, 2002-10-22 at 10:34, Andrew Sullivan wrote:
On Tue, Oct 08, 2002 at 09:40:44AM -0400, Robert Treat wrote:
Is it me or do doomsdays scenarios sometimes seem a little silly? I'd
Not if your contract requires five-nines reliablility and no more
than 180 minutes of downtime _ever_. Is five-nines realistic? For
most purposes, probably not, according to recent pronouncements (see,
e.g. <http://www.bcr.com/bcrmag/2002/05/p22.asp>). But it's in
lots of contracts anyway.
like to ask just where are you storing your "incremental backups" with
Oracle/m$ sql ?? If it's on the same drive, then when you drive craps
The more or less standard way of doing this is to stream the
PITR-required stuff to another device on another controller -- lots
of people stream to tape. People have been doing this for ages,
partly because disks used to be (a) expensive and (b) unreliable.
On Thu, Oct 24, 2002 at 10:42:00AM -0400, Robert Treat wrote:
I think you missed the part of the thread where the nuclear bomb hit the
data center. hmm... maybe it wasn't a nuclear bomb, but it was getting
there. :-)
No, I didn't miss it. Have a look at the Internet Society bid to run
.org -- it's available for public consumption on ICANN's site. One may
believe that, if people are launching nuclear attacks, suicide
bombings, and anthrax releases, the disposition of some set of data
one looks after is unlikely to be of tremendous importance. But
lawyers and insurers don't think that way, and if you really want
PostgreSQL to be taken seriously in the "enterprise market", you have
to please lawyers and insurers.
Having undertaken the exercise, I really can say that it is a little
strange to think about what would happen to data I am in charge of in
case a fairly large US centre were completely blown off the map. But
with a little careful planning, you actually _can_ think about that,
and provide strong assurances that things won't get lost. But it
doesn't pay to call such questions "silly", because they are
questions that people will demand answers to before they entrust you
with their millions of dollars of data.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M2P 2A8
+1 416 646 3304 x110