pg_dump streaming fails - PostgreSQL 9.2

Started by drum.lucas@gmail.com · about 10 years ago · 4 messages · general
#1 drum.lucas@gmail.com
drum.lucas@gmail.com

Hi there!

I'm running this command: *(Trying to copy a full DB (2 TB) from a
hot-standby server to a master (test) server)*

ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
--exclude-table-data='junk.*' --format=custom master_db" |
/usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp --exit-on-error
--verbose

Then, after 3 GB I got this error:

pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: canceling statement due to
conflict with recovery
DETAIL: User was holding a relation lock for too long.
pg_dump: The command was: COPY dm.invoices (invoice_id,
format_version, ts_issue, ts_ack, customer_id, code, tag,
account_data, customer_data, invoice_data, invoice_items_data) TO
stdout;

I've also tried the pg_dump command with the "--no-unlogged-table-data"
option, but it was unsuccessful (
/messages/by-id/E1TutDO-0001Hb-2v@wrigleys.postgresql.org
)

DB size: 2 TB

How can I solve the problem? What's going on? Thanks

#2 Joshua D. Drake
jd@commandprompt.com
In reply to: drum.lucas@gmail.com (#1)
Re: pg_dump streaming fails - PostgreSQL 9.2

On 01/28/2016 05:23 PM, drum.lucas@gmail.com wrote:


How can I solve the problem? What's going on? Thanks

Back up everything from the master.

JD

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

#3 Andreas Joseph Krogh
andreas@visena.com
In reply to: Joshua D. Drake (#2)
Sv: Re: [GENERAL] pg_dump streaming fails - PostgreSQL 9.2

On Friday, 29 January 2016 at 02:30:59, Joshua D. Drake <jd@commandprompt.com> wrote:

Back up everything from the master.

JD
 
Hi.

I'm still getting this on a standby running 10.1 (both primary and standby are on 10.1).

Isn't this supposed to work? Isn't part of the point of having a hot standby like this to be able to back up from it?

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andreas@visena.com
www.visena.com

#4 Jeff Janes
jeff.janes@gmail.com
In reply to: Andreas Joseph Krogh (#3)
Re: Sv: Re: [GENERAL] pg_dump streaming fails - PostgreSQL 9.2

On Jan 9, 2018 03:49, "Andreas Joseph Krogh" <andreas@visena.com> wrote:


Hi.

I'm still getting this on standby running 10.1 (both on primary and standby)

Surely you are not getting recovery conflicts on the master.

Isn't this supposed to work?

You can avoid recovery conflicts by setting

max_standby_streaming_delay = -1

on the hot standby. This might render the hot standby less useful for
other purposes, in which case you might need different hot standbys for
different purposes.
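A minimal sketch of applying that setting on the standby, assuming PostgreSQL 9.4+ for ALTER SYSTEM (on 9.2, the line would instead be added to postgresql.conf by hand and the server reloaded):

```shell
# Run on the hot standby. A value of -1 tells the standby to wait
# indefinitely for conflicting queries (such as pg_dump's COPY) to
# finish instead of canceling them; the trade-off is that WAL replay
# can fall arbitrarily far behind while the dump runs.
psql -U postgres -c "ALTER SYSTEM SET max_standby_streaming_delay = -1;"

# The parameter is reloadable; no restart is needed.
psql -U postgres -c "SELECT pg_reload_conf();"
```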

Isn't some of the point of having a hot-standby like this to be able to backup from it?

Not for me. I might use one to dump one or a few tables, but if I wanted a
full backup I'd use pg_basebackup, not pg_dump.
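As a sketch, a physical base backup taken from the standby might look like this; the hostname, replication user, and target directory are placeholders, and taking a base backup from a standby requires streaming replication (or WAL archiving) to already be configured:

```shell
# Stream a full physical copy of the cluster from the standby into a
# plain-format data directory, including the WAL needed to make the
# copy consistent (-X stream). -P shows progress for a large cluster.
pg_basebackup -h myslaveserver -U replicator \
    -D /var/backups/pg_base -Fp -P -X stream
```

Unlike pg_dump, this copies the whole cluster at the file level, so it is not subject to per-table relation locks and does not run long COPY statements that can be canceled by recovery conflicts.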

Cheers,

Jeff