PostgreSQL crashing during pg_dump
Hello,
I have a huge table with 141456059 records on a PostgreSQL 10.18 database.
When I try to do a pg_dump on that table, PostgreSQL gives a segfault,
displaying this message:
2021-12-22 14:08:03.437 UTC [15267] LOG: server process (PID 25854) was
terminated by signal 11: Segmentation fault
2021-12-22 14:08:03.437 UTC [15267] DETAIL: Failed process was running:
COPY ********** TO stdout;
2021-12-22 14:08:03.437 UTC [15267] LOG: terminating any other active
server processes
2021-12-22 14:08:03.438 UTC [15267] LOG: archiver process (PID 16034)
exited with exit code 2
2021-12-22 14:08:04.196 UTC [15267] LOG: all server processes terminated;
reinitializing
2021-12-22 14:08:05.785 UTC [25867] LOG: database system was interrupted
while in recovery at log time 2021-12-22 14:02:29 UTC
2021-12-22 14:08:05.785 UTC [25867] HINT: If this has occurred more than
once some data might be corrupted and you might need to choose an earlier
recovery target.
In the Linux log I only see this:
Dec 22 14:08:03 kernel: postmaster[25854]: segfault at 14be000 ip
00007f828fabb5f9 sp 00007fffe43538b8 error 6 in libc-2.17.so
[7f828f96d000+1c2000]
I'm guessing I'm hitting some (memory?) limit. Is there anything I can do
to prevent this? Shouldn't PostgreSQL handle this differently instead of
crashing the server?
--
Paulo Silva <paulojjs@gmail.com>
Paulo Silva <paulojjs@gmail.com> writes:
> I have a huge table with 141456059 records on a PostgreSQL 10.18 database.
> When I try to do a pg_dump on that table, postgresql gives a segfault,
> displaying this message:
> 2021-12-22 14:08:03.437 UTC [15267] LOG: server process (PID 25854) was
> terminated by signal 11: Segmentation fault
What this sounds like is corrupt data somewhere in that table.
There's some advice about dealing with such cases here:
https://wiki.postgresql.org/wiki/Corruption
If this is extremely valuable data, you might prefer to hire somebody
who specializes in data recovery, rather than trying to handle it
yourself. I'd still follow the wiki page's "first response" advice,
ie take a physical backup ASAP.
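For reference, that first-response step amounts to a file-level copy of the
cluster taken while the server is stopped. A minimal sketch (the data
directory path and destination are assumptions; check `SHOW data_directory;`
for the real location):

```shell
# Stop the cluster first so the copy is consistent (path is an assumption;
# run "SHOW data_directory;" in psql to find the real data directory).
pg_ctl -D /var/lib/pgsql/10/data stop -m fast

# File-level copy of the entire data directory, including WAL.
cp -a /var/lib/pgsql/10/data /safe/location/data-backup-$(date +%Y%m%d)

# Only start the server again once the copy has finished.
pg_ctl -D /var/lib/pgsql/10/data start
```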
regards, tom lane
On 12/22/21 8:40 AM, Tom Lane wrote:
> Paulo Silva <paulojjs@gmail.com> writes:
>> I have a huge table with 141456059 records on a PostgreSQL 10.18 database.
>> When I try to do a pg_dump on that table, postgresql gives a segfault,
>> displaying this message:
>> 2021-12-22 14:08:03.437 UTC [15267] LOG: server process (PID 25854) was
>> terminated by signal 11: Segmentation fault
> What this sounds like is corrupt data somewhere in that table.
> There's some advice about dealing with such cases here:
> https://wiki.postgresql.org/wiki/Corruption
> If this is extremely valuable data, you might prefer to hire somebody
> who specializes in data recovery, rather than trying to handle it
> yourself. I'd still follow the wiki page's "first response" advice,
> ie take a physical backup ASAP.
COPY the table in PK ranges to narrow down the offending record?
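A sketch of that idea, assuming an integer primary key column (the database,
table, and column names here are only illustrative): dump the table in fixed
slices, note which slice crashes the backend, then repeat with smaller
ranges inside the bad slice until you're down to a single row.

```shell
# Dump the table in PK slices; any slice that crashes the backend contains
# a bad row, so bisect that slice with a smaller STEP. All names are examples.
STEP=1000000
for ((lo=0; lo<=141456059; lo+=STEP)); do
  hi=$((lo + STEP - 1))
  psql -d mydb -c "\copy (SELECT * FROM bigtable WHERE id BETWEEN $lo AND $hi) TO 'slice_${lo}.csv' CSV" \
    || echo "range $lo-$hi failed" >> bad_ranges.txt
done
```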
--
Angular momentum makes the world go 'round.