Best high availability solution ?
Hi list !
I have a small enterprise network (~15 workstations, 1 server),
all running windows OSes. Most of our work is done on a PostgreSQL
DB (on the windows server).
I am the only IT person here, and my boss asked me to find a way to
keep the database always online, without my intervention.
Last time I went on vacation, the server crashed and no one was
able to repair it.
Our application connects to PostgreSQL through ODBC, with a simple
TCP/IP connection.
I thought that I would first install a Slony-I cluster. That would
be fine for data replication, but still, if the main server
crashes, the database connections will not work anymore because
the name of the backup server will be different from the name of
the master, so all the ODBC connections would have to be changed to
use the new machine name.
Since I cannot ask anyone to do some DNS changes or things like
that, I am looking for a simple way to have my database always
online (note that I already have a UPS and RAID1 on the server to
prevent most failures).
After some searches, I found LifeKeeper, which looks very good but
is quite expensive !
Are there easier and/or better solutions than that ?
Thanks for your advice on this matter !
--
Arnaud
Hi Arnaud,
perhaps you can still use Slony-I for replication and have another tool
automatically handle connections (check out PgPool[1] or SQLRelay[2]).
Or go for a middleware replication solution. Check C-JDBC[3], perhaps
there is something similar for ODBC?
LifeKeeper seems to handle replication at a lower level (filesystem,
distributed memory, or such) and does not seem to be very well suited
for database replication. However, I've had just a quick glance at the
website.
Hope that helps
Markus
[1]: http://pgfoundry.org/projects/pgpool/
[2]: http://sqlrelay.sourceforge.net/
[3]: http://c-jdbc.objectweb.org/
Hi Markus !
Thanks for your suggestions, they are very helpful !
pgpool and SQLRelay looked really great, but there are no ports
for win32... :(
Maybe some kind of virtual-ip software would do the trick, but I
am still looking for it !
--
Arnaud
Since you're a Windows shop, you may already have the experience (and
even licenses perhaps?) to run Microsoft Cluster Service (part of 2003
Enterprise Edition or 2000 Advanced Server). PostgreSQL will work fine
with it. It works with shared disks using either Fibre Channel or iSCSI.
If you don't have the licenses for it already, it might turn out very
expensive. And if you don't already have Fibre Channel, that part is
definitely expensive - but iSCSI could help you.
If you're willing to move off Windows for the server platform, you could
look at one of the solutions like slony+pgpool, or maybe DRBD+linux/ha.
That'll be less expensive in both hardware and licenses, but if you
don't have the people to maintain a new platform for it that's likely to
be prohibitive.
//Magnus
Magnus Hagander wrote:
Since you're a Windows shop, you may already have the experience (and
even licenses perhaps?) to run Microsoft Cluster Service (part of 2003
Enterprise Edition or 2000 Advanced Server). PostgreSQL will work fine
with it. It works with shared disks using either Fibre Channel or iSCSI.
If you don't have the licenses for it already, it might turn out very
expensive. And if you don't already have Fibre Channel, that part is
definitely expensive - but iSCSI could help you.
I am running Windows 2000 Server (the "normal" edition, not the
"advanced" one), so I don't have the cluster service available.
If you're willing to move off Windows for the server platform, you could
look at one of the solutions like slony+pgpool, or maybe DRBD+linux/ha.
That'll be less expensive in both hardware and licenses, but if you
don't have the people to maintain a new platform for it that's likely to
be prohibitive.
I don't have the Linux knowledge to move from windows unfortunately.
Otherwise, is the clustering service really necessary ?
Aren't there simple "virtual IP addresses" solutions available ?
Could I have a single virtual IP for the 2 servers, with an
automatic failover if the master server is down ? The slave would
never be accessed by clients directly (only after a failover) but
would be synchronized with the master (Slony-I).
Does this kind of software exist in the windows world ?
--
Arnaud
Slony-I only exists for Windows in a not-even-beta state at the moment,
so even if you get things up and running using a virtual IP solution you
will be hand-holding Slony until it gets properly released.
The 'virtual IP' solution you are looking for is called Network Load
Balancing btw, and is included in the Standard Edition. I really don't see how you
would make it work safely though - it's designed more for load
distribution, and though you can weight rules towards one server,
there's no guarantee that it won't send a connection to the backup
server.
You would also need some way of automatically handling the Slony
failover which is definitely not recommended.
If I'm honest, I think your boss is going to be disappointed. You would
add a *lot* of complexity to the system to make it handle failures with
zero intervention, and that extra complexity is probably more likely to
go wrong than a single server. I'd spend your time and money on making
sure your RAID & UPS are good, that you are running on server-grade
hardware with ECC RAM, and that you have good out-of-band management
facilities so even if you are away from the office you can connect via
VPN/modem or whatever and fix things.
Regards, Dave.
Dave Page wrote:
Slony-I only exists for Windows in a not-even-beta state at the moment,
so even if you get things up and running using a virtual IP solution you
will be hand-holding Slony until it gets properly released.
OK, I thought it was in production stage.
If I'm honest, I think your boss is going to be disappointed. You would
add a *lot* of complexity to the system to make it handle failures with
zero intervention, and that extra complexity is probably more likely to
go wrong than a single server. I'd spend your time and money on making
sure your raid & ups are good, that you are running on server grade
hardware with ECC RAM, and that you have good out of band management
facilities so even if you are away from the office you can connect via
VPN/modem or whatever and fix things.
I think you're right.
Is there a simple replication solution for Windows then ? Or will
I have to stop the master PostgreSQL at night to pg_dump and
pg_restore on the backup server ?
If I have a synchronized backup server (even if it synchronizes
once or twice a day, this might be OK), I could just write a
simple script that would change the host file of the clients to
have the postgresql's name to point to the backup IP...
--
Arnaud
Arnaud Lesauvage wrote:
...
I think you're right.
Is there a simple replication solution for windows then ? Or will I have
to stop the master postgresql at night to pgdump and pgrestore on the
backup server ?
pg_dump does not require you to stop the master database anyway.
(in fact it cannot even dump a stopped database :-)
If I have a synchronized backup server (even if it synchronizes once or
twice a day, this might be OK), I could just write a simple script that
would change the host file of the clients to have the postgresql's name
to point to the backup IP...
I think you can use PITR for this use-case:
http://www.postgresql.org/docs/current/static/backup-online.html
Regards
Tino
Arnaud Lesauvage wrote:
OK, I thought it was in production stage.
The code is written, and is good as far as we are aware, but has not been through a beta/release cycle yet.
Is there a simple replication solution for Windows then ? Or will
I have to stop the master PostgreSQL at night to pg_dump and
pg_restore on the backup server ?
You don't need to stop the master server to pg_dump (in fact, it won't work if you do). On the slave, you can just drop the DBs and pg_restore them.
PITR is another solution you might consider.
If I have a synchronized backup server (even if it synchronizes
once or twice a day, this might be OK), I could just write a
simple script that would change the host file of the clients to
have the postgresql's name to point to the backup IP...
Might be better if the users know they are being switched over - that way they will know to check for lost transactions etc. If you're using an ODBC app, consider using a separate DSN for the live and backup servers - similar solutions would apply to other interfaces of course.
Regards, Dave
Arnaud Lesauvage wrote:
Tino Wildenhain wrote:
pg_dump does not require you to stop the master database anyway.
(in fact it cannot even dump a stopped database :-)
Hello Tino,
I think I might just use this pg_dump solution...
Seems to be quite simple to set up.
personally I think the WAL approach is by far easier
to set up and maintain - the pg_dump is in fact easy,
but the restore to another database can be tricky
if you want it unattended and bullet-proof at the
same time.
Dave Page wrote:
The code is written, and is good as far as we are aware, but has not been through a beta/release cycle yet.
OK, that's already pretty good then.
Are there binary releases available ?
Is there a simple replication solution for Windows then ? Or will
I have to stop the master PostgreSQL at night to pg_dump and
pg_restore on the backup server ?
You don't need to stop the master server to pg_dump (in fact, it won't work if you do). On the slave, you can just drop the DBs and pg_restore them.
PITR is another solution you might consider.
It looks quite a bit harder to script though... But I'll consider it
as a better solution.
Might be better if the users know they are being switched over - that way they will know to check for lost transactions etc. If you're using an ODBC app, consider using a separate DSN for the live and backup servers - similar solutions would apply to other interfaces of course.
There are no critical transactions here.
If the DB server fails and users are switched over to a backup
server with data of the day before, it is just fine.
Losing half a day of work is OK, but not working for 4 days is not.
So I might just write a script so that the person in charge when I
am away can run it and alert everyone.
--
Arnaud
Arnaud Lesauvage wrote:
OK, that's already pretty good then.
Are there binary releases available ?
http://developer.pgadmin.org/~hiroshi/Slony-I/
That's built against 8.1 iirc.
USE AT YOUR OWN RISK!! IT MAY EAT YOUR DATA AND SET YOUR SERVER ON FIRE :-)
Regards, Dave.
Tino Wildenhain wrote:
personally I think the WAL approach is by far easier
to set up and maintain - the pg_dump is in fact easy,
but the restore to another database can be tricky
if you want it unattended and bullet-proof at the
same time.
I'll have to study this more in-depth then.
If I got it right, the procedure would be :
- wal archiving enabled
- base backup once a day (pg_start_backup, copy the 'data'
directory, pg_stop_backup)
- create restore-script to be run on the second server, which would :
- copy the backup to the 'data' directory
- copy the wal files to the 'pg_xlog' directory
- create the recovery.conf in the data directory (should always
stay there maybe)
- start the postmaster
Then anyone could just run this script in case of a failure of the
master server to have an up-to-date database running.
Then with a script that would change my DNS so that
mypgserver.domain.tld (used in the ODBC connection string) points to
CNAME mybackupserver.domain.tld instead of CNAME
mymasterserver.domain.tld, getting back to production would be
quite easy...?
--
Arnaud
Dave Page wrote:
http://developer.pgadmin.org/~hiroshi/Slony-I/
That's built against 8.1 iirc.
Great !
USE AT YOUR OWN RISK!! IT MAY EAT YOUR DATA AND SET YOUR SERVER ON FIRE :-)
I'll keep an eye on it then ! ;-)
--
Arnaud
Arnaud Lesauvage wrote:
Tino Wildenhain wrote:
personally I think the WAL approach is by far easier
to set up and maintain - the pg_dump is in fact easy,
but the restore to another database can be tricky
if you want it unattended and bullet-proof at the same time.
I'll have to study this more in-depth then.
If I got it right, the procedure would be :
- wal archiving enabled
- base backup once a day (pg_start_backup, copy the 'data' directory,
pg_stop_backup)
I'd think you can skip that and just do it once at the very beginning.
But if you like to use the WAL files to recover a 3rd, new system, this
would be a good approach.
- create restore-script to be run on the second server, which would :
- copy the backup to the 'data' directory
- copy the wal files to the 'pg_xlog' directory
- create the recovery.conf in the data directory (should always stay
there maybe)
- start the postmaster
Then anyone could just run this script in case of a failure of the
master server to have an up-to-date database running.
Actually you would let the (master-) server run the script
and let it (trigger) copy and import of the WAL segment as
it gets ready. This way your backup server is very close
to the current state and you don't lose much if the first
machine completely dies suddenly.
Then with a script that would change my DNS so that
mypgserver.domain.tld (used in ODBC connection string) points to CNAME
mybackupserver.domain.tld instead of CNAME mymasterserver.domain.tld,
getting back to production ould be quite easy...?
I guess the users would start over anyway. So easiest if you
provide a copy of the app with that other connection and
signal them if the first server dies to just close the first
and start the backup-application.
Regards
Tino
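The continuous WAL shipping Tino describes hinges on two settings: an archive_command on the master that pushes each finished segment to the standby, and a restore_command on the standby that pulls them back during replay. A rough sketch in PostgreSQL 8.x-era syntax, where the server name and paths are placeholders, not values from this thread:

```
# postgresql.conf on the master: push each finished WAL segment
# to a share on the backup server (placeholder UNC path)
archive_command = 'copy "%p" "\\\\backupserver\\walarchive\\%f"'

# recovery.conf on the standby: pull segments back during replay
# (placeholder local path)
restore_command = 'copy "C:\\walarchive\\%f" "%p"'
```

With this in place the standby only lags the master by the last partially filled WAL segment.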
Arnaud Lesauvage wrote:
Then with a script that would change my DNS so that
mypgserver.domain.tld (used in the ODBC connection string) points to
CNAME mybackupserver.domain.tld instead of CNAME mymasterserver.domain.tld,
getting back to production would be quite easy...?
Yes - the DNS method would work, but you might run into caching issues
requiring the users to reboot or do a 'ipconfig /flushdns' before they
see the change.
Regards, Dave.
Dave Page wrote:
Yes - the DNS method would work, but you might run into caching issues
requiring the users to reboot or do a 'ipconfig /flushdns' before they
see the change.
Yes, but I am not a very nice administrator, and when there is any
problem with a database, my users HAVE to reboot ! ;-)
So that might just be fine !
--
Arnaud
Arnaud Lesauvage wrote:
Yes, but I am not a very nice administrator, and when there is any
problem with a database, my users HAVE to reboot ! ;-)
So that might just be fine !
Ah, but wasn't this intended for when you are not there standing over them with a bat?
:-)
Regards, Dave.
Dave Page wrote:
Ah, but wasn't this intended for when you are not there standing over them with a bat?
:-)
It was, but they know I will come back from vacation one day, and
then....
;-)
--
Arnaud
Tino Wildenhain wrote:
If I understand you right, you suggest that the wal files should be
automatically copied to the backup server, which should parse them as
soon as they arrive ?
Exactly.
Indeed, it seems to be the best solution !
After reading the backup-online page, I see that the recovery
process will end at the end of the WAL log. So with your
suggestion, I should script the copy of the WAL files to the
backup server (say, every 5 minutes), and also script on the backup
server a recovery process every 5 minutes ?
--
Arnaud