Copy entire schema A to a different schema B
Hi,
To copy the source schema A to a target schema B in the same database in
PG 10.3, I use psql to dump schema A and manually remove anything specific
to the schema in the text dump file before importing it into schema B. How do
I achieve the same thing, exporting from schema A and importing into schema B,
using pg_dump with the -Fc option? Since the dump file generated is binary,
I could not make modifications to the file. Is the procedure the same in
version 11?
Thanks.
Tiff
On 2/11/19 8:00 AM, Tiffany Thang wrote:
Use the -f option to pg_restore:
https://www.postgresql.org/docs/10/app-pgrestore.html
So something like:
pg_restore -f text_file.sql the_binary_file.out
This will get you back to a text file you can manipulate.
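A minimal sketch of that round trip, using hypothetical names ("mydb", schemas "a" and "b"). The sed rewrite is a blunt instrument: it can also match table data that happens to contain the string "a.", so review the edited file by hand before loading it.

```shell
# Convert the binary custom-format dump to a plain SQL script:
#   pg_dump -Fc -n a -f a.dump mydb
#   pg_restore -f a.sql a.dump
# Then retarget the schema-qualified names from "a" to "b".  Demonstrated
# here on a stand-in for the dump output:
cat > a.sql <<'EOF'
CREATE TABLE a.t (id integer);
ALTER TABLE ONLY a.t ADD CONSTRAINT t_pkey PRIMARY KEY (id);
EOF
sed 's/\ba\./b./g' a.sql > b.sql
cat b.sql
# Finally, create the target schema and load the edited script:
#   psql -d mydb -c 'CREATE SCHEMA IF NOT EXISTS b'
#   psql -d mydb -f b.sql
```

As far as the format conversion goes, pg_restore -f works the same way on version 11.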
--
Adrian Klaver
adrian.klaver@aklaver.com
On 2/11/19 10:00 AM, Tiffany Thang wrote:
Why do you need to use "--format=custom" instead of "--format=plain"?
For example:
$ pg_dump --format=plain --schema-only --schema=A
--
Angular momentum makes the world go 'round.
Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to achieve
is to dump the schema quickly and be able to restore a single object or a
subset of objects from the dump. As far as I understand, the only way of
achieving that is to use the custom format and the -j option. Is that
correct? Are there any other alternatives?
Thanks.
Tiff
On 2/11/19 8:30 AM, Tiffany Thang wrote:
If you want to use -j then you need to use the -Fd output:
https://www.postgresql.org/docs/10/app-pgdump.html
"-j njobs
--jobs=njobs
Run the dump in parallel by dumping njobs tables simultaneously.
This option reduces the time of the dump but it also increases the load
on the database server. You can only use this option with the directory
output format because this is the only output format where multiple
processes can write their data at the same time."
If you need to grab just a subset of the schema, there are options to do
that, depending on the object. From the above link, as examples:
"-n schema
--schema=schema
Dump only schemas matching schema; this selects both the schema
itself, and all its contained objects. ..."
"-t table
--table=table
Dump only tables with names matching table. .."
--
Adrian Klaver
adrian.klaver@aklaver.com
Thanks Adrian!
Tiff
On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
Hi Adrian,
I managed to back up my table in parallel using -Fd, but I'm back to my
original issue where I could not restore the table to a different schema.
For example, I would like to back up testuser1.mytable and restore it to
testuser2.mytable:
pg_dump -U testuser1 -Fd -f c:\temp\testuser1 -j 8 -t mytable -h myserver testdb
where mytable is in the testuser1 schema.
The dump completed fine, but when I attempted to restore the table using
pg_restore to another database, it tried to create the table in the
testuser1 schema. The restore failed since the testuser1 schema does not
exist in the target database. When I created a testuser1 schema in the
target database, the restore worked fine. Since the dump TOC is in binary
format, I could not make the change to reflect the new target schema,
testuser2.
So, how should I go about restoring tables from one schema to a different
schema name?
Thanks.
Tiff
On 2/20/19 3:22 PM, Tiffany Thang wrote:
Can you not simply rename the schema after importing it? Perhaps renaming
an existing version first, if necessary: rename schema A to B; import
schema A; rename A to C; rename B to A.
Hi Ron,
How would that work if I'm only interested in importing/refreshing a single
table in a target schema that contains several other tables?
Thanks.
Tiff
On Wed, Feb 20, 2019 at 5:36 PM Rob Sargent <robjsargent@gmail.com> wrote:
On 2/20/19 4:21 PM, Tiffany Thang wrote:
If nothing is in the way for the restore, maybe all you need is:
ALTER TABLE [ IF EXISTS ] name SET SCHEMA new_schema
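That suggestion could be combined with the directory-format restore along these lines (hypothetical names; ALTER TABLE ... SET SCHEMA moves just the one table and leaves the rest of the target schema alone):

```shell
# Restore into a scratch testuser1 schema, then move the table across.
psql -d otherdb -c 'CREATE SCHEMA IF NOT EXISTS testuser1'
pg_restore -d otherdb /tmp/testuser1
psql -d otherdb -c 'ALTER TABLE testuser1.mytable SET SCHEMA testuser2'
psql -d otherdb -c 'DROP SCHEMA testuser1'   # drop the now-empty scratch schema
```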
On 2/20/19 2:22 PM, Tiffany Thang wrote:
Is table mytable big enough to actually need 8 parallel jobs?
pg_restore -f is your friend.
If you do something like:
pg_restore -f mytable.sql c:\temp\testuser1
it will restore the pg_dump output to the file mytable.sql in text form,
where you can change things as you need.
--
Adrian Klaver
adrian.klaver@aklaver.com
On 2/20/19 6:16 PM, Adrian Klaver wrote:
A single table will only use a single thread, no matter how big it is, no?
(That is my experience with v9.6, at least.)
--
Angular momentum makes the world go 'round.
Hi Tiffany, isn't it possible for you to do the following?
1) temporarily rename schema testuser1 to testuser2
2) dump that schema in binary format
3) rename schema testuser2 back to testuser1
4) restore the backup into the testuser2 schema on the other DB
Jura.
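As commands, that rename approach might look like this (hypothetical names; note that the renames are visible to every session on the source database while the dump runs, which can matter on a busy system):

```shell
psql -d proddb -c 'ALTER SCHEMA testuser1 RENAME TO testuser2'
pg_dump -Fc -n testuser2 -f t2.dump proddb
psql -d proddb -c 'ALTER SCHEMA testuser2 RENAME TO testuser1'
pg_restore -d otherdb t2.dump
```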
Thanks everyone. Unfortunately the schema rename would not work, since the
source database will be our production system. We have not gone live yet,
but the system is expected to be in constant use.
I have multiple tables that I need to export, ranging from 20GB to 60GB
each. The parallelism will not help for a single table, but it would be
beneficial when I have multiple tables to dump.
I'm thinking of maybe using what Adrian has suggested with the -f option
and then modifying the file, or maybe using a common public schema
everywhere on the source and target databases. I would have to restrict
who has access to the public schema.
Thanks.
Tiff
On Wed, Feb 20, 2019 at 11:14 PM Jiří Fejfar <jurafejfar@gmail.com> wrote:
On 2/21/19 11:52 AM, Tiffany Thang wrote:
You can further break this down by using the -s and -a switches to work
with only the table definitions and the table data, respectively. This can
also be done on the pg_dump end.
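For instance (hypothetical names), the definitions can go to a small plain-text file that is easy to hand-edit, while the bulk data is dumped separately in parallel:

```shell
# Table definitions only -- small, editable text.
pg_dump -s -n testuser1 -f defs.sql testdb
# Data only, directory format, in parallel.
pg_dump -a -Fd -j 4 -n testuser1 -f /tmp/testuser1_data testdb
```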
--
Adrian Klaver
adrian.klaver@aklaver.com
Tiffany, have you tried the clone_schema function? It seems to me it does
exactly what you need, with no dumping or restoring. There is even an
option to copy the data or not; the default is not to.
On Thu, Feb 21, 2019 at 3:23 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
--
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
Hi Melvin,
Unfortunately, I was not able to use it because I could not access
pg_authid in RDS.
Thanks.
Tiff
On Thu, Feb 21, 2019 at 6:09 PM Melvin Davidson <melvin6925@gmail.com> wrote: