PostgreSQL replication
Hello,
Currently we have only one database, accessed by the headquarters and two branches, but the performance in the branches is very poor and I was asked to find a way to improve it.
One possible solution is to replicate the headquarters DB to the two branches.
I read about Slony-I, but then the replicated DBs will be read-only.
PGCluster is a sync solution and I think it is not suited for us because the replicated DBs will be located remotely and we have a lot of updates on the DBs.
I think I'm looking for a master-slave asynchronous solution. I know pgReplicator can do it, but I think the project is not active any more.
Are there other solutions?
Thanks in advance!
Reimer
__________________________________________________
Chat with your friends in real time with Yahoo! Messenger
http://br.download.yahoo.com/messenger/
On Wednesday, 24 August 2005 14:21, Carlos Henrique Reimer wrote:
One possible solution is to replicate the headquarters DB to the two
branches. I read about Slony-I, but then the replicated DBs will be read-only.
That's because it's master-slave replication. If you could sync the slave
back to the master it would be a master itself.
I think I'm looking for a master-slave asynchronous solution. I know
pgReplicator can do it, but I think the project is not active any more.
But Slony does master/slave replication.
Michael
--
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: meskes@jabber.org
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
i think carlos is confused about master-slave vs multi-master.
carlos,
master-slave async is much easier than multi-master async. yes, pgreplicator
can in theory do multi-master async in a restricted sort of way (the
generalized multi-master async problem is fairly intractable), but you're
right in that pgreplicator is not maintained anymore (and moreover it depends
on a no-longer-maintained dialect of tcl.) if i had infinite time and energy,
i'd be working on a reimplementation of pgreplicator in C, but i don't have
either and i don't see anyone offering to pay me to do it, so it'll stay on
the list of wanna-do projects for the time being.
richard
carlosreimer@yahoo.com.br (Carlos Henrique Reimer) writes:
Currently we have only one database, accessed by the headquarters and
two branches, but the performance in the branches is very poor and
I was asked to find a way to improve it.
One possible solution is to replicate the headquarters DB to the two branches.
I read about Slony-I, but then the replicated DBs will be read-only.
Correct.
PGCluster is a sync solution and I think it is not suited for us because
the replicated DBs will be located remotely and we have a lot of
updates on the DBs.
Unfortunately, pgcluster isn't much maintained anymore.
I think I'm looking for a master-slave asynchronous solution. I
know pgReplicator can do it, but I think the project is not active
any more.
Slony-I is a master/slave asynchronous replication system; if you
already considered it unsuitable, then I see little likelihood of
other systems with the same sorts of properties being suitable.
What could conceivably be of use to you would be a *multimaster*
asynchronous replication system. Unfortunately, multimaster
*anything* is a really tough nut to crack.
There is a Slony-II project ongoing that is trying to construct a
more-or-less synchronous multimaster replication system (where part of
the cleverness involves trying to get as much taking place in an
asynchronous fashion as possible) that would almost certainly be of no
use to your "use case."
The most successful "multimaster asynchronous" replication system that
I am aware of is the PalmComputing "PalmSync" system.
It would presumably be possible to use some of the components of
Slony-I to construct a multimaster async replication system. A
pre-requisite would be the creation of some form of "distributed
sequence" which would try to minimize the conflicts that arise out of
auto-generation of sequence numbers.
But beyond that lies the larger challenge of conflict resolution.
Slony-I, as a single-master system, does not need to address
conflicts, as changes must be made on the "master" and propagate
elsewhere.
Synchronous multimaster systems address conflicts by detecting them
when they occur and rejecting one or another of the conflicting
transactions.
Asynchronous multimaster systems require some sort of conflict
management/resolution system for situations where tuples are being
concurrently updated on multiple nodes. How that is managed is, well,
troublesome :-(. The PalmSync approach is that if it finds conflicts,
it duplicates records and leaves you, the user, to clean things up.
That may not be suitable for every kind of application...
--
(reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
http://cbbrowne.com/info/slony.html
((LAMBDA (X) (X X)) (LAMBDA (X) (X X)))
I read some documents about replication and realized that if you plan on using asynchronous replication, your application should be designed from the outset with that in mind, because asynchronous replication is not something that can be easily "added on" after the fact.
Am I right?
Reimer
Carlos Henrique Reimer wrote:
I read some documents about replication and realized
that if you plan on using asynchronous replication, your
application should be designed from the outset with that
in mind because asynchronous replication is not something
that can be easily "added on" after the fact.
Am I right?
certainly, if your goal is a pgreplicator style multi-master async, this
is correct, as you have to make decisions about the direction of
data flow, id generation, and conflict resolution up front.
if you want slony-I style single master/multi slave, you don't have to
do so much advance thinking as records are only being inserted into
the system on the single master.
richard
Chris Browne wrote:
Slony-I is a master/slave asynchronous replication system; if you
already considered it unsuitable, then I see little likelihood of
other systems with the same sorts of properties being suitable.
What could conceivably be of use to you would be a *multimaster*
asynchronous replication system. Unfortunately, multimaster
*anything* is a really tough nut to crack.
In general that's a difficult problem, but in practice there may be a
solution.
For instance, perhaps the following configuration would be helpful:
Make a database for each physical server, called db1 ... dbN. Let your
logical tables in each database be table1 ... tableM. Now, for each
logical tableX (where 1 <= X <= M), make N physical tables, tableX_1 ...
tableX_N. Now, make a view called tableX that is the UNION of tableX_1
... tableX_N (tableX is not a real table, it's just a logical table).
Now, use Slony-I. For each dbY (where 1 <= Y <= N), make dbY a master
for tableX_Y (for all X where 1 <= X <= M) and a slave for tableX_Z (for
all X,Z where 1 <= X <= M, Z != Y).
Now, use a rule that replaces all INSERTs to tableX (where 1 <= X <= M)
on dbY (where 1 <= Y <= N) with INSERTs to tableX_Y.
That was my attempt at being unambiguous. In general what I mean is that
each database is master of one piece of a table, and slave to all the
other pieces of that table, and then you have a view which is the union
of those pieces. That view is the logical table. Then have a RULE which
makes INSERTs go to the physical table for which that database is master.
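For concreteness, here is a minimal sketch of that layout in SQL for one logical table on db1 with N = 2 (all names are illustrative, not from the original posts):

```sql
-- db1 masters table1_1 and is a Slony-I slave for table1_2.
CREATE TABLE table1_1 (id int PRIMARY KEY, val text);
CREATE TABLE table1_2 (id int PRIMARY KEY, val text);

-- The logical table is the union of the physical pieces
-- (UNION ALL is enough here, since the pieces are disjoint).
CREATE VIEW table1 AS
    SELECT * FROM table1_1
    UNION ALL
    SELECT * FROM table1_2;

-- Redirect INSERTs on the logical table into the piece db1 masters.
CREATE RULE table1_ins AS ON INSERT TO table1
    DO INSTEAD INSERT INTO table1_1 VALUES (NEW.id, NEW.val);
```

db2 would define the same view but point its INSERT rule at table1_2.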
The advantages: if one machine goes down, the rest keep going, and
merely miss the updates from that one site to that table. If one machine
makes an insert to the table, it quickly propagates to the other
machines and transparently becomes a part of the logical tables on those
machines.
The disadvantages: UPDATEs are difficult, and might end up with a
complicated set of rules/procedures/triggers. You may have to program
the application defensively in case the database is unable to update a
remote database for various reasons (if the record to be updated is a
part of a table for which another database is master). Also, since the
solution is asynchronous, the databases may provide different results to
the same query.
In general, this solution does not account for all kinds of data
constraints. The conflict resolution is very simplified because it's
basically just the union of data. If that union could cause a constraint
violation itself, this solution might not be right for you. For
instance, let's say you're tracking video rentals, and store policy says
that you only rent one video per person. However, maybe they go to store
1 and rent a video, and run to store 2 and rent a video before store 1
sends the INSERT record over to store 2. Now, when they finally do
attempt to UNION the data for the view, you have an inconsistent state.
Many applications can get by just fine by UNIONing the data like that,
and if not, can perhaps work around it.
I hope this is helpful. Let me know if there's some reason my plan won't
work.
Regards,
Jeff Davis
Jeff Davis writes:
The disadvantages:
one more: if you actually have m tables and n servers, you have
m x n tables in reality, which is pretty miserable scaling behavior.
i should think that rules, triggers, and embedded procedures would
explode in complexity rather rapidly.
i know i wouldn't want to administer one of these if there were a lot
of sites.
I hope this is helpful. Let me know if there's some reason my plan won't
work.
look at the solution in pgreplicator. site ids are embedded in the
id columns in the tables, so there are only m tables, and a bit less insanity.
richard
(dropping out of this conversation, i'm unsubscribing while on vacation)
Welty, Richard wrote:
Jeff Davis writes:
The disadvantages:
one more: if you actually have m tables and n servers, you have
m x n tables in reality, which is pretty miserable scaling behavior.
i should think that rules, triggers, and embedded procedures would
explode in complexity rather rapidly.
i know i wouldn't want to administer one of these if there were a lot
of sites.
True, but in practice n will usually be fairly reasonable. In
particular, his setup sounded like it would be only a few.
Also, you're really talking about scalability of administration. I don't
think performance will be significantly impacted.
I hope this is helpful. Let me know if there's some reason my plan won't
work.
look at the solution in pgreplicator. site ids are embedded in the
id columns in the tables, so there are only m tables, and a bit less insanity.
That doesn't work with Slony-I unfortunately. I don't know much about
pgreplicator, but if it does something similar to what I'm talking
about, maybe it's a good thing to look into.
Regards,
Jeff Davis
Jeff Davis writes:
I hope this is helpful. Let me know if there's some reason my plan won't
work.
look at the solution in pgreplicator. site ids are embedded in the
id columns in the tables, so there are only m tables, and a bit less insanity.
That doesn't work with Slony-I unfortunately. I don't know much about
pgreplicator, but if it does something similar to what I'm talking
about, maybe it's a good thing to look into.
it'd be an excellent thing to look into if it were in any way supported or
maintained. it's a dead project (unfortunately.)
richard
Or, for something far easier, try
http://pgfoundry.org/projects/pgcluster/ which provides synchronous
multi-master clustering.
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com 512-569-9461
Carlos Henrique Reimer wrote:
I read some documents about replication and realized that if you plan on
using asynchronous replication, your application should be designed from
the outset with that in mind because asynchronous replication is not
something that can be easily "added on" after the fact.
Yes, it requires a lot of foresight to do multi-master replication --
especially across high-latency connections. I do that now for 2
different projects. We have servers across the country replicating data
every X minutes, with custom app logic that resolves conflicting data.
Allocation of unique IDs that don't collide across servers is a must.
For 1 project, instead of using numeric IDs, we use CHAR and
prepend a unique server code so record #1 on server A is A0000000001
versus ?x0000000001 on other servers. For the other project, we were too
far along in development to change all our numerics into chars so we
wrote custom sequence logic to divide our 10-billion ID space into
1-X billion for server 1, X-Y billion for server 2, etc.
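The first (server-code) scheme could be sketched like this; the sequence and function names here are made up for illustration:

```sql
-- Hypothetical sketch of server-prefixed IDs. Each server is deployed
-- with its own one-letter code, so generated IDs never collide.
CREATE SEQUENCE record_seq;

CREATE FUNCTION next_record_id(char) RETURNS text AS '
    SELECT $1 || lpad(nextval(''record_seq'')::text, 10, ''0'')
' LANGUAGE SQL;

-- On server A:
-- SELECT next_record_id('A');   -- A0000000001, then A0000000002, ...
```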
With this step taken, we then had to isolate (1) transactions that could
run on any server w/o issue (where we always take the newest record), (2)
transactions that required an amalgam of all actions and (3) transactions
that had to be limited to "home" servers. Record-keeping stuff where we
keep a running history of all changes fell into the first category. It
would have been no different than 2 users on the same server updating the
same object at different times during the day. Updating of summary data
fell into category #2 and required parsing the change history of individual
elements. Category #3 would be financial transactions requiring strict
locks, which were divided up by client/user space and restricted to the
user's home server. This case would not allow auto-failover. Instead, it
would require some prolonged threshold of downtime for a server before
full financials are allowed on backup servers.
William Yu <wyu@talisys.com> writes:
Allocation of unique IDs that don't collide across servers is a must. For 1
project, instead of using numeric IDs, we use CHAR and prepend a unique
server code so record #1 on server A is A0000000001 versus ?x0000000001 on other
servers. For the other project, we were too far along in development to change
all our numerics into chars so we wrote custom sequence logic to divide our
10-billion ID space into 1-X billion for server 1, X-Y billion for server 2, etc.
I would have thought setting the sequences to "INCREMENT BY 100" would let you
handle this simply by setting the sequences on each server to start at a
different value modulo 100.
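That would amount to something like the following (sequence name is illustrative):

```sql
-- Non-overlapping IDs via interleaved sequences: identical schema on
-- every server, only the START value differs (modulo 100).
-- On server 1:
CREATE SEQUENCE order_id_seq START 1 INCREMENT BY 100;  -- 1, 101, 201, ...
-- On server 2:
CREATE SEQUENCE order_id_seq START 2 INCREMENT BY 100;  -- 2, 102, 202, ...
```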
I wonder if it might be handy to be able to set default sequence parameters on
a per-database level so that you could set this up and then just do a normal
pg_restore of the same schema and get proper non-conflicting sequences on each
server.
I suppose it's the least of your problems though.
--
greg
I know I am wading into this discussion as a beginner compared to the rest who
have answered this thread, but doesn't something like pgpool provide relief for
pseudo-multimaster replication? And what about software like sqlrelay -- wouldn't
these suites help to some extent? Looking forward to being enlightened.
Cheers,
Aly.
--
Aly S.P Dharshi
aly.dharshi@telus.net
"A good speech is like a good dress
that's short enough to be interesting
and long enough to cover the subject"
Carlos Henrique Reimer wrote:
I read some documents about replication and realized that if you plan
on using asynchronous replication, your application should be designed
from the outset with that in mind because asynchronous replication is
not something that can be easily "added on" after the fact.
Am I right?
Depending on your needs, you may find pgpool and Slony to be a workable
combination. This is better when you have a lot of reads and only
occasional writes. This way writes get redirected back to the master,
and read-only transactions get run on the slaves.
Best Wishes,
Chris Travers
Metatron Technology Consulting
It provides pseudo relief if all your servers are in the same building.
Having a front-end pgpool connector pointing to servers across the world
is not workable -- performance ends up being completely decrepit due to
the high latency.
Which is the problem we face. Great, you've got multiple servers for
failover. Too bad it doesn't do much good if your building gets hit by
fire/earthquake/hurricane/etc.
I have a slightly off-topic question: is this an issue only for pgsql, or
are there other DB solutions which have good performance when doing
this kind of replication across the world?
Regards,
Bohdan
Bohdan Linda wrote:
I have a slightly off-topic question: is this an issue only for pgsql, or
are there other DB solutions which have good performance when doing
this kind of replication across the world?
it depends entirely on your application. There is no "one size
fits all".
For example, to have an online backup, WAL archiving to remote
sites is often sufficient.
However, you cannot have synchronous multimaster replication
over slow lines and high update performance at the same
time.
There is always a tradeoff, in any solution (even in high-cost
commercial ones), that you have to carefully consider.
This would remove the application using that data too, or not? ;)
As far as I know, nobody has a generic solution for multi-master
replication where servers are not in close proximity. Single master
replication? Doable. Application specific conflict resolution? Doable.
Off the shelf package that somehow knows financial transactions on a
server shouldn't be duplicated on another? Uhh...I'd be wary of trying
it out myself.
Another tidbit I'd like to add. What has helped a lot in implementing
high-latency master-master replication is writing our software with a
business process model in mind, where data is not posted directly to the
final tables. Instead, users are generally allowed to enter anything --
it could be incorrect, incomplete, or the user does not have rights -- the
data is still dumped into "pending" tables for people with rights to
fix/review/approve later. Only after that process is the data posted to
the final tables. (Good data entered on the first try still gets pended
-- the validation phase simply assumes the user who entered the data is
also the one who fixed/reviewed/approved.)
In terms of replication, this model allows for users to enter data on
any server. The pending records then get replicated to every server.
Each specific server then looks at its own dataset of pendings to post
to final tables. Final data is then replicated back to all the
participating servers.
There may be a delay for the user if he/she is working on a server that
doesn't have rights to post his data. However, the pending->post model
gets users used to the idea of (1) entering all data in one large swoop and
validating/posting it afterwards and (2) data can/will sit in pending
for a period of time until it is acted upon with somebody/some server
with the proper authority. Hence users aren't expecting results to pop
up on the screen the moment they press the submit button.
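A minimal sketch of that pending -> post flow, assuming hypothetical table and column names:

```sql
-- Entries land in a pending table on any server and replicate everywhere;
-- only the row's "home" server posts approved entries to the final table.
CREATE TABLE orders_pending (
    id       text    PRIMARY KEY,           -- globally unique (see above)
    home_srv char(1) NOT NULL,              -- server allowed to post this row
    approved boolean NOT NULL DEFAULT false,
    payload  text
);

CREATE TABLE orders (
    id      text PRIMARY KEY,
    payload text
);

-- Run periodically on each server; 'A' stands for this server's code:
INSERT INTO orders (id, payload)
    SELECT id, payload FROM orders_pending
    WHERE approved AND home_srv = 'A';
DELETE FROM orders_pending
    WHERE approved AND home_srv = 'A';
```

The posted rows in `orders` then replicate back out to the other servers as described above.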