Help with data transfer please
Hi All!
I don't know if this is the right place to post this, but if anyone knows databases, you all do!
In brief, I'm open to suggestions for the cheapest, most reliable way of either maintaining two separate Postgres servers or of using one server from an office 30 miles away.
The long version:
I have my Postgres database running smoothly in my rural office (i.e., only telephone connections to the office). I'm about to open a second office 30 miles away that does not currently have internet access, but could. I will be alternating offices on an every-other-day basis, and it is important that the information in the "database" goes with me so my staff can update it where I am. However, although I've now written the internet programs necessary to access the single database (currently in the rural office), AT&T will be more than happy to charge a small fortune to maintain a telephone connection (long distance) between the two offices. Since I'm initially expecting zero income from the new office, it is not feasible at this time to be calling the rural office via telephone due to telephone costs. Also, I might add, no one will be in the rural office during the time that I am in the new office. The database uses many sequences for record uniqueness, so simply transporting the new or modified records (via floppy) would leave the sequences corrupted (I think??).
I'm sure this has been solved many thousands of times before; can someone please suggest a simple (cheap) solution?
Thanks,
ajw
Hi!
This guy was really creative! Unfortunately, only one of the offices has a decent internet connection; the other has only a telephone line. My initial program construction was a Postgres-Apache-PHP setup on RH Linux using telephone connections, but AT&T would make this too costly at three to four hours of long-distance charges per user. I'm thinking of perhaps adding triggers to each of the tables, fired on update or insert, and writing a program executed from cron that would update the two machines via telephone on a nightly basis; it would be relatively quick.
What do you think?
Thanks,
ajw
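The trigger-plus-cron idea above could be sketched roughly as follows. The table name ("patients"), its integer key ("id"), and the database name ("mydb") are invented for illustration, and the PL/pgSQL is untested 7.2-era syntax:

```shell
# Sketch: log every insert/update into a journal table, then ship just the
# journal nightly. Names below are placeholders, not from the thread.
cat > /tmp/change_log.sql <<'EOF'
CREATE TABLE change_log (
    tbl name,                       -- which table changed
    op  text,                       -- INSERT or UPDATE
    pk  integer,                    -- primary key of the changed row
    at  timestamp DEFAULT now()
);

CREATE FUNCTION log_change() RETURNS opaque AS '
BEGIN
    INSERT INTO change_log (tbl, op, pk)
        VALUES (TG_RELNAME, TG_OP, NEW.id);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER patients_log AFTER INSERT OR UPDATE ON patients
    FOR EACH ROW EXECUTE PROCEDURE log_change();
EOF

# Echo mode unless RUN=yes, so nothing touches a real database by accident.
run() { if [ "$RUN" = yes ]; then "$@"; else echo "$@"; fi; }
run psql mydb -f /tmp/change_log.sql
```

A nightly cron job could then dump only the rows named in change_log and ship them to the other machine, though merging them back (and the sequence problem) would still have to be solved separately.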
--- "Peter A. Daly" <petedaly@ix.netcom.com> wrote:
This isn't really the answer you are looking for, but I have seen it done in the past.
Discouraged by the lack of cost-effective high-speed phone lines in northern New York State, a guy started up his own small ISP. He couldn't afford a leased line for his little venture, so he purchased a total of 12 phone lines: 4 in the area he wanted to have access from, 4 in Syracuse, where he could get cost-effective internet access, and 4 "dummy phone lines" halfway between. The 4 phone lines in the middle had regional calling access to both other ends, at a flat rate. He set each of the 4 phone lines in the middle to do call forwarding to the 4 phone lines in Syracuse. With 8 modems and a little fancy networking, he was able to set up a 134.4 baud "always on" connection from "the sticks" to "the big city" at a rock-bottom price.
You may only need one end-to-end line (3 lines total) in your situation.
Another solution may be a small VPN between your two locations. If you have "flat rate" internet service in both locations, set up a little 2-node virtual private network between them.
Sorry I don't have any answers at the Postgres level; I hope others may. I could come up with some, but none that I could see being reliable, other than maybe having the DB on a removable hard disk and taking it with you (which may not be such a bad solution after all.)
Hope this helps,
-Pete
Hi!
You're sure right about that! However, what do you think about using triggers on update and insert on the two different machines, and then at night using cron (RH Linux) to call the other machine and transfer/update only the records that have changed?
Although at this time the database only needs to be active where I am, there will be about five other people actively inputting/getting data from the server.
Initially, I was headed toward your second proposal, until a quick calculation of the telephone costs put it out of reach. Unfortunately, only one office has a decent internet connection other than by phone line.
Thanks!
ajw
--- terry@greatgulfhomes.com wrote:
Syncing the database will probably turn out to be a headache; you will always have to remember to do it upon arrival and before leaving, no matter how much of a rush you are in.
There are better solutions:
1) If you are the only person using the database, and no one is using it when you leave, why don't you just install it on a laptop and bring the laptop with you?
2) You could also set up the database on a server in one of the offices and make it accessible via the internet. (This solution requires careful attention to security: don't use "trust", use passwords, and enable TCP/IP access. It would also require a static IP on your server.)
Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com
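For option 2, the 7.2-era knobs would be roughly the following; the network address is an invented example, and the exact pg_hba.conf column layout should be double-checked against your version's documentation:

```
# postgresql.conf: accept TCP/IP connections
tcpip_socket = true

# pg_hba.conf (7.x format: host DATABASE IP-ADDRESS IP-MASK AUTHTYPE):
# require a password instead of "trust" for the remote office's address
host    all    203.0.113.10    255.255.255.255    password
```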
Alan, you may want to look into the contrib subdirectory. I have seen some gizmo there that promises to do replication; it seems to use triggers etc., as you suggest. Look either in the source tree or, if you don't have the source around but have installed the contrib stuff, all the READMEs are installed too, so you should find it.
good luck,
-Gunther
--
Gunther Schadow, M.D., Ph.D. gschadow@regenstrief.org
Medical Information Scientist Regenstrief Institute for Health Care
Adjunct Assistant Professor Indiana University School of Medicine
tel:1(317)630-7960 http://aurora.regenstrief.org
On Sun, Jun 09, 2002 at 12:52:04PM -0500, Gunther Schadow wrote:
Alan, you may want to look into the contrib subdirectory.
I have seen some gizmo there that promises to do replication.
It seems to be using triggers etc. as you suggest. See
either in the source tree or if you don't have it around
but have installed the contrib stuff, all the readmes are
installed too, so you should find it.
If I understand his need correctly, what is needed is a two-way sync: changes in office 1 need to be in office 2 for when he goes to office 2, and then changes made at office 2 need to get sent back to office 1. If I'm right, rserv won't work. Recent discussions of replication did not, IIRC, turn up any multi-master replication systems other than Postgres-R. (Maybe I don't RC, though.)
One answer might be to buy a laptop. Carry the database in the
laptop. It's another expense, of course, but it might work.
Alternatively, since the database can (presumably) be shut down every
night, why not shut down the postmaster at each end and use rsync to
synchronise the data directories? (I haven't tested that, and don't
know if it'll work, but I think it should.)
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M6K 3E3
+1 416 646 3304 x110
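A minimal sketch of the rsync-over-ssh idea above, assuming the postmaster can be stopped nightly. The data directory path and host name are invented examples, and (as the author says) the approach itself is untested:

```shell
#!/bin/sh
# Cold-sync the whole PostgreSQL data directory to the other office.
# PGDATA and REMOTE are illustrative guesses, not from the thread.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
REMOTE=${REMOTE:-postgres@other-office}

# Echo mode unless RUN=yes, so the commands can be inspected first.
run() { if [ "$RUN" = yes ]; then "$@"; else echo "$@"; fi; }

run pg_ctl stop -D "$PGDATA"                       # postmaster must be down
run rsync -az --delete -e ssh "$PGDATA/" "$REMOTE:$PGDATA/"
run pg_ctl start -D "$PGDATA"
```

Since rsync only transfers the parts of files that changed, the nightly phone call should be much shorter than shipping a full dump.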
On Mon, 2002-06-10 at 17:42, Andrew Sullivan wrote:
If I understand his need correctly, what is needed is a two-way sync: changes in office 1 need to be in office 2 for when he goes to office 2, and then changes made at office 2 need to get sent back to office 1. If I'm right, rserv won't work. Recent discussions of replication did not, IIRC, turn up any multi-master replication systems other than Postgres-R. (Maybe I don't RC, though.)
One answer might be to buy a laptop. Carry the database in the laptop. It's another expense, of course, but it might work.
Good idea
Alternatively, since the database can (presumably) be shut down every
night, why not shut down the postmaster at each end and use rsync to
synchronise the data directories? (I haven't tested that, and don't
know if it'll work, but I think it should.)
SSH is your friend!
Set up a cron job that dumps the database.
SCP the dump from one machine to the other.
Drop the database and recreate it.
Import the dump file.
This is the system I will be putting into place for a client in September. For another client, with non-sensitive data, the dump is e-mailed daily as a backup. Just got to love ADSL!!
Cheers
Tony Grant
--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html
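The four steps above might look like this as a cron-driven script; the database name, host, and paths are invented examples, not anything from the thread:

```shell
#!/bin/sh
# Nightly dump-and-ship, following the steps above. DB, REMOTE, and the
# dump path are placeholders for illustration.
DB=${DB:-officedb}
REMOTE=${REMOTE:-postgres@other-office}
DUMP="/tmp/${DB}.$(date +%Y%m%d).sql"

# Echo mode unless RUN=yes, so nothing destructive happens by accident.
run() { if [ "$RUN" = yes ]; then "$@"; else echo "$@"; fi; }

run pg_dump "$DB" -f "$DUMP"                     # 1. dump the database
run scp "$DUMP" "$REMOTE:$DUMP"                  # 2. scp it to the other machine
run ssh "$REMOTE" "dropdb $DB && createdb $DB && psql $DB -f $DUMP"
                                                 # 3-4. drop, recreate, import
```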
You could use removable media (Zip drive, things like that) or CDRW to
back up the whole data directory, then just overwrite it at the other end
before bringing up the database.
On Mon, Jun 10, 2002 at 08:40:36PM +0200, tony wrote:
Alternatively, since the database can (presumably) be shut down every
night, why not shut down the postmaster at each end and use rsync to
synchronise the data directories? (I haven't tested that, and don't
know if it'll work, but I think it should.)
Set up a cron job that dumps the database.
SCP the dump from one machine to the other.
Drop the database and recreate it.
Import the dump file.
But he said he was trying to reduce long-distance charges as much as
possible. Wouldn't rsync (maybe over ssh) keep those charges lower?
A full dump file on a database of any size can get pretty big, and if
only one or two data files have changed in the period, you'd only
need to copy the changes with rsync. (As I said, I don't know if
it would work, but it'd be worth testing.)
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M6K 3E3
+1 416 646 3304 x110
Hey, don't forget:
His original problem was sometimes updating the dbase from the home site,
sometimes updating it from the remote site.
Your cron job sounds like it always pushes the dbase to the remote site (or vice versa). You would need to make the script more complex so it decides whether the version received has newer data than the one it currently has, and only clobbers the dbase if newer.
Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com
It is my understanding that copying the files that reside in the data directory is not sufficient for a backup/restore (because, I believe, there are file pointers within the data files, pointers which are no longer valid on a different server because different blocks get assigned to different files). Therefore it follows that an rsync would not be sufficient to keep the database in sync.
Someone please correct me if my understanding is wrong.
Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com
On Mon, Jun 10, 2002 at 04:30:26PM -0400, terry@greatgulfhomes.com wrote:
It is my understanding that copying the files that reside in the data
directory are not sufficient for a backup/restore, (because there are file
pointers within the data files I believe, pointers which are no longer valid
on a different server because different blocks get assigned to different
files). Therefore it follows that an rsync would not be sufficient to keep
the database in sync.
Hmm. That sounds plausible. But you can tar up a data directory,
and restore it later on a different physical device, and it will
work. Or, at least, it has worked for me. This, only if the
postmaster is shut down. But maybe my case was just a fluke.
Someone who knows should weigh in.
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M6K 3E3
+1 416 646 3304 x110
Andrew Sullivan <andrew@libertyrms.info> writes:
On Mon, Jun 10, 2002 at 04:30:26PM -0400, terry@greatgulfhomes.com wrote:
It is my understanding that copying the files that reside in the data
directory are not sufficient for a backup/restore, (because there are file
pointers within the data files I believe, pointers which are no longer valid
on a different server because different blocks get assigned to different
files). Therefore it follows that an rsync would not be sufficient to keep
the database in sync.
Hmm. That sounds plausible. But you can tar up a data directory,
and restore it later on a different physical device, and it will
work. Or, at least, it has worked for me. This, only if the
postmaster is shut down. But maybe my case was just a fluke.
Andrew is correct: you can tar a complete $PGDATA tree and move it to
another machine (of the same architecture anyway --- differences in
endianness, datatype alignment requirements, or floating-point
representation can still mess you up).
What people keep trying to do that *does not* work is to copy only a
subset of the $PGDATA tree, such as $PGDATA/base/somedb/. This does not
work because only part of the system state is in the data files --- the
rest is in the transaction status log (pg_clog/ directory in 7.2, pg_log
table in older releases). You can't get away with copying a data file
into another installation that has different contents of pg_clog,
because the transaction numbers will be out of sync.
Ordinarily you need to copy the pg_xlog/ contents that go with the
data files too. For data transfer purposes (not disaster recovery)
you could omit that subdirectory in favor of running pg_resetxlog
at the destination; but I doubt it's worth the trouble.
And yes, you'd better have the postmaster shut down, because otherwise
tar is likely to collect pg_clog entries that are out of step with the
data files --- not to mention the prospect that there are unwritten
modifications still in memory.
As far as the original question goes --- I suppose you could shut down
the postmaster and run rsync to keep a remote backup copy up-to-date,
as long as the backup was just a cold-storage duplicate. Don't try to
start a postmaster in it till you want to make it your primary (else
transaction numbers will get out of sync).
regards, tom lane
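Following Tom's advice, a whole-tree copy might be sketched like this; paths and host name are invented placeholders, the postmaster must be stopped at both ends, and both machines must share the same architecture:

```shell
#!/bin/sh
# Tar up the complete $PGDATA tree (including pg_clog/ and pg_xlog/) and
# ship it. Paths and host name are illustrative only.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
ARCHIVE=/tmp/pgdata.tar.gz

# Echo mode unless RUN=yes, so the commands can be inspected first.
run() { if [ "$RUN" = yes ]; then "$@"; else echo "$@"; fi; }

run pg_ctl stop -D "$PGDATA"
run tar -czf "$ARCHIVE" -C "$PGDATA" .          # -C: archive the tree's contents
run scp "$ARCHIVE" other-office:"$ARCHIVE"
# At the destination: stop its postmaster, unpack with
#   tar -xzf /tmp/pgdata.tar.gz -C $PGDATA
# and start a postmaster there only when it becomes the primary.
```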
Hi!
Where do I find rsync? I'm quite new to the language; what is Postgres-R? I have no problem shutting down the postmaster at night to do whatever is necessary. I do have a laptop, which I do most of my programming on; however, I fail to see how I can keep different machines in sync with each machine having its own version of the sequence files used for unique record ids.
Without yet knowing rsync, my first thought was to write a program that would capture inserts and updates (after the fact), and a second program that would essentially add or modify the needed records at the second machine using user-specific data as candidate keys.
It would sure be great if there was already something out there to do this, or that could correctly keep all the sequences in sync! I have no desire to reinvent the wheel, but will do whatever is necessary.
Thanks!
ajw
----- Original Message -----
From: "Andrew Sullivan" <andrew@libertyrms.info>
To: "pgsql-general" <pgsql-general@postgresql.org>
Sent: Monday, June 10, 2002 10:42 AM
Subject: Re: [GENERAL] Help with data transfer please
Hi!
This sounds plausible and may very well be the solution. Sorry for my ignorance; I know next to nothing about unix/linux scripts. What are "SSH" and "SCP"?
Can pg_dump be used to do this?
What would you think about dumping the database to a read/write CD?
Thanks,
ajw
----- Original Message -----
From: "tony" <tony@animaproductions.com>
To: "Andrew Sullivan" <andrew@libertyrms.info>
Cc: "pgsql-general" <pgsql-general@postgresql.org>
Sent: Monday, June 10, 2002 1:40 PM
Subject: Re: [GENERAL] Help with data transfer please
Hi!
This sounds perfect for me!
Could you please tell me how, or what directories I need to back up to keep the system intact, and (perhaps) how to do it? My last effort at upgrading from 7.1 to 7.2 was somewhat of a disaster, necessitating my removal of more directories than necessary, I'm sure.
I'd sure appreciate it if you could outline for me what to do.
Thanks!
ajw
----- Original Message -----
From: "Scott Marlowe" <scott.marlowe@ihs.com>
To: "pgsql-general" <pgsql-general@postgresql.org>
Sent: Monday, June 10, 2002 1:47 PM
Subject: Re: [GENERAL] Help with data transfer please
Sorry for my stupidity, but can you tell me where the default base directory of $PGDATA is on RH Linux (most recent version)? And perhaps what tar options are necessary to follow the tree's branches?
Thanks for any help,
ajw
----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Andrew Sullivan" <andrew@libertyrms.info>
Cc: "'pgsql-general'" <pgsql-general@postgresql.org>
Sent: Monday, June 10, 2002 9:00 PM
Subject: Re: [GENERAL] Help with data transfer please
On Tue, 2002-06-11 at 04:37, Alan wrote:
This sounds plausible and may very well be the solution. Sorry for my
ignorance, I know next to nothing about unix/linux scripts. What is
"SSH" and "SCP" ?
Secure shell and secure copy. They are secure replacements for telnet and rcp.
Can pg_dump be used to do this?
What would you think about dumping the database to a read/write CD?
Yes this is a good idea. Mount your CD-RW and do a pg_dump to the
mounted disk.
Remember that you need to drop the database and do a restore from the
dump file each time.
I missed your first post. Why do you have high long distance charges? I
would have the database in one location and access it from the distant
site. Maybe through a web front end.
Cheers
Tony Grant
--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html
Hi!
I've been writing the interface via Postgres-Apache-PHP for use on the internet. However, the "older" office is very rural: no internet connection; telephone is about it. The application I've written would demand an almost constant telephone connection to the server for the front office to input/request data. Unfortunately, the offices are long distance from each other. (And I'd hate to think what the cost of a special line would be.)
Thanks,
ajw
----- Original Message -----
From: "tony" <tony@animaproductions.com>
To: "Alan" <AlanJWayne@yahoo.com>
Cc: "postgres list" <pgsql-general@postgresql.org>
Sent: Tuesday, June 11, 2002 1:54 AM
Subject: Re: [GENERAL] Help with data transfer please