Moving large table between servers: logical replication or postgres_fdw
Greetings Folks,
I have a relatively large table (100m rows) that I want to move to a
new box with more resources. The table isn't doing anything, i.e. it's
not being updated or read from. Which approach would be faster to move
the data over:
a) Use postgres_fdw and do "CREATE TABLE local_table AS SELECT * FROM foreign_table".
b) Set up logical replication between the two servers.
Regards,
Rhys
Peace & Love|Live Long & Prosper
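For reference, the two options could look roughly like the following, run on the new box. The host, database, credential, and table names here are placeholders, a sketch rather than a drop-in script:

```sql
-- Option (a): pull the table across with postgres_fdw.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER old_box FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'old-host', dbname 'mydb', fetch_size '10000');

CREATE USER MAPPING FOR CURRENT_USER SERVER old_box
    OPTIONS (user 'myuser', password 'secret');

CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public LIMIT TO (big_table)
    FROM SERVER old_box INTO remote;

-- Single-threaded pull of all 100m rows.
CREATE TABLE big_table AS SELECT * FROM remote.big_table;

-- Option (b) needs wal_level = logical on the old server, plus,
-- on the old server:
--   CREATE PUBLICATION big_table_pub FOR TABLE big_table;
-- and on the new server (after creating an empty big_table
-- with the same schema):
--   CREATE SUBSCRIPTION big_table_sub
--       CONNECTION 'host=old-host dbname=mydb user=myuser'
--       PUBLICATION big_table_pub;
```

For a table that is neither read nor written, logical replication's main selling point (catching up with ongoing changes) buys nothing; its initial table sync is a single COPY under the hood anyway.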
On Dec 4, 2018, at 8:14 PM, Rhys A.D. Stewart <rhys.stewart@gmail.com> wrote:
Can’t help but think an unread, unwritten table goes too quietly to /dev/null :)
On 12/05/2018 12:05 AM, Rob Sargent wrote:
Can’t help but think an unread, unwritten table goes too quietly to /dev/null :)
Sigh, no.
The legal and audit teams, and the customer, would be most upset if the data
you're legally obligated to keep for 7 years suddenly disappeared.
--
Angular momentum makes the world go 'round.
I tend to believe that a backup (pg_dump) in custom format (-F c) using
multiple jobs (parallel), followed by a restore (pg_restore) also with
multiple concurrent jobs, would be better.
Am Di., 4. Dez. 2018 um 21:14 Uhr schrieb Rhys A.D. Stewart <
rhys.stewart@gmail.com>:
--
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/
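Concretely, that might look like the commands below, with hostnames and a database name invented for illustration. One wrinkle: pg_dump only accepts -j with the directory format (-F d); with custom format (-F c) the dump itself is single-threaded, though pg_restore -j still works on the resulting archive:

```shell
# Dump only the one table; directory format so -j is accepted.
# (With a single table, the parallel dump gains little, since
# pg_dump parallelizes across tables; -j on the restore side
# mostly helps by building that table's indexes concurrently.)
pg_dump -h old-host -d mydb -t big_table -F d -j 4 -f /tmp/big_table.dump

# Restore on the new box with parallel jobs.
pg_restore -h new-host -d mydb -j 4 /tmp/big_table.dump
```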
On 12/04/2018 09:14 PM, Rhys A.D. Stewart wrote:
Or KISS and use COPY WITH (FORMAT binary).
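In that spirit, the two COPYs can even be piped directly between servers with no intermediate file. A sketch, assuming psql can reach both hosts and an empty target table already exists with an identical column layout (binary COPY is type-sensitive, so the column types must match exactly on both sides):

```shell
# Stream the table from the old box straight into the new one.
psql "host=old-host dbname=mydb" \
     -c "\copy big_table TO STDOUT WITH (FORMAT binary)" \
  | psql "host=new-host dbname=mydb" \
     -c "\copy big_table FROM STDIN WITH (FORMAT binary)"
```

If the types don't line up, the default text format is more forgiving and usually not much slower.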
pg_dump/pg_restore will work, but parallelism is pointless on a single
(unpartitioned) table.
On 12/05/2018 12:13 AM, Rene Romero Benavides wrote:
On Dec 4, 2018, at 11:13 PM, Rene Romero Benavides <rene.romero.b@gmail.com> wrote:
Let’s compromise. Copy out as described. Tell the auditors where the file is. Skip the copy in.
If you truly don’t need the data online going forward, this might actually pass muster.
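Taking that compromise at face value, the "copy out" half plus something to hand the auditors might look like this sketch (file paths invented for illustration):

```shell
# Copy the table out once, compress it, and record a checksum
# the audit team can verify for the next 7 years.
psql "host=old-host dbname=mydb" \
     -c "\copy big_table TO STDOUT WITH (FORMAT binary)" \
  | gzip > /archive/big_table.bin.gz
sha256sum /archive/big_table.bin.gz > /archive/big_table.bin.gz.sha256
```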