Disaster!

Started by Christopher Kings-Lynne about 22 years ago · 44 messages · hackers
#1 Christopher Kings-Lynne
chriskl@familyhealth.com.au

We ran out of disk space on our main server, and now I've freed up
space, we cannot start postgres!

Jan 23 12:18:50 canaveral postgres[563]: [2-1] LOG: checkpoint record
is at 2/96500B94
Jan 23 12:18:50 canaveral postgres[563]: [3-1] LOG: redo record is at
2/964BD23C; undo record is at 0/0; shutdown FALSE
Jan 23 12:18:50 canaveral postgres[563]: [4-1] LOG: next transaction
ID: 14216463; next OID: 4732327
Jan 23 12:18:50 canaveral postgres[563]: [5-1] LOG: database system was
not properly shut down; automatic recovery in progress
Jan 23 12:18:50 canaveral postgres[563]: [6-1] LOG: redo starts at
2/964BD23C
Jan 23 12:18:51 canaveral postgres[563]: [7-1] PANIC: could not access
status of transaction 14286850
Jan 23 12:18:51 canaveral postgres[563]: [7-2] DETAIL: could not read
from file "/usr/local/pgsql/data/pg_clog/000D" at offset 163840:
Undefined error: 0
Jan 23 12:18:51 canaveral postgres[567]: [1-1] FATAL: the database
system is starting up
Jan 23 12:18:52 canaveral postgres[558]: [1-1] LOG: startup process
(PID 563) was terminated by signal 6
Jan 23 12:18:52 canaveral postgres[558]: [2-1] LOG: aborting startup
due to startup process failure

What can I do?

Chris

#2 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Christopher Kings-Lynne (#1)
Re: Disaster!

pg_clog information:

# cd pg_clog
# ls -al
total 3602
drwx------ 2 pgsql pgsql 512 Jan 23 03:49 .
drwx------ 6 pgsql pgsql 512 Jan 23 12:30 ..
-rw------- 1 pgsql pgsql 262144 Jan 18 19:43 0000
-rw------- 1 pgsql pgsql 262144 Jan 18 19:43 0001
-rw------- 1 pgsql pgsql 262144 Jan 18 19:43 0002
-rw------- 1 pgsql pgsql 262144 Jan 18 19:43 0003
-rw------- 1 pgsql pgsql 262144 Jan 19 08:35 0004
-rw------- 1 pgsql pgsql 262144 Jan 21 10:35 0005
-rw------- 1 pgsql pgsql 262144 Jan 20 10:07 0006
-rw------- 1 pgsql pgsql 262144 Jan 21 19:30 0007
-rw------- 1 pgsql pgsql 262144 Jan 21 17:24 0008
-rw------- 1 pgsql pgsql 262144 Jan 22 06:50 0009
-rw------- 1 pgsql pgsql 262144 Jan 22 13:01 000A
-rw------- 1 pgsql pgsql 262144 Jan 23 06:45 000B
-rw------- 1 pgsql pgsql 262144 Jan 23 09:37 000C
-rw------- 1 pgsql pgsql 163840 Jan 23 12:30 000D

I don't have debug symbols, so backtrace isn't that helpful:

(gdb) bt
#0 0x2840a848 in kill () from /usr/lib/libc.so.4
#1 0x2844ee90 in abort () from /usr/lib/libc.so.4
#2 0x81b33ba in ?? ()
#3 0x8092a0c in ?? ()
#4 0x8092450 in ?? ()
#5 0x808a9d7 in ?? ()
#6 0x808adf4 in ?? ()
#7 0x808ae8c in ?? ()
#8 0x808bffa in ?? ()
#9 0x809082f in ?? ()
#10 0x8095204 in ?? ()
#11 0x8135c6b in ?? ()
#12 0x813309c in ?? ()
#13 0x8108dc3 in ?? ()
#14 0x806d49b in ?? ()

I have a backup of the data dir for future analysis, but this is kind of
an urgent fix right now! (Especially since it's 4am my time :( )

Thanks,

Chris

#3 Dann Corbit
DCorbit@connx.com
In reply to: Christopher Kings-Lynne (#2)
Re: Disaster!

-----Original Message-----
From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]
Sent: Friday, January 23, 2004 12:29 PM
To: PostgreSQL-development
Cc: Tom Lane
Subject: [HACKERS] Disaster!

We ran out of disk space on our main server, and now I've freed up
space, we cannot start postgres!

What can I do?

I would restore from backup. Just start an empty database on another
machine.

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christopher Kings-Lynne (#1)
Re: Disaster!

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

We ran out of disk space on our main server, and now I've freed up
space, we cannot start postgres!

Jan 23 12:18:51 canaveral postgres[563]: [7-1] PANIC: could not access
status of transaction 14286850
Jan 23 12:18:51 canaveral postgres[563]: [7-2] DETAIL: could not read
from file "/usr/local/pgsql/data/pg_clog/000D" at offset 163840:
Undefined error: 0

I'd suggest extending that file with 8K of zeroes (might need more than
that, but probably not).

regards, tom lane

#5 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Christopher Kings-Lynne (#1)
Re: Disaster!

Message quoted from Christopher Kings-Lynne <chriskl@familyhealth.com.au>:

I'd suggest extending that file with 8K of zeroes (might need more than
that, but probably not).

How do I do that? Sorry - I'm not sure of the quickest way, and I'm
reading man pages as we speak!

# dd if=/dev/zero of=somefile bs=8k count=1
# cat file1 somefile > newfile
# mv newfile file1

file1 is "/usr/local/pgsql/data/pg_clog/000D"

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
-------------------------------------------------------
Martín Marqués | Programmer, DBA
Centro de Telemática | Administrator
Universidad Nacional del Litoral
-------------------------------------------------------

#6 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#4)
Re: Disaster!

I'd suggest extending that file with 8K of zeroes (might need more than
that, but probably not).

How do I do that? Sorry - I'm not sure of the quickest way, and I'm
reading man pages as we speak!

Thanks Tom,

Chris

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christopher Kings-Lynne (#6)
Re: Disaster!

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

I'd suggest extending that file with 8K of zeroes (might need more than
that, but probably not).

How do I do that? Sorry - I'm not sure of the quickest way, and I'm
reading man pages as we speak!

Something like "dd if=/dev/zero bs=8k count=1 >>clogfile", but check the
dd man page (and make sure you have a /dev/zero).

regards, tom lane

#8 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#4)
Re: Disaster!

I'd suggest extending that file with 8K of zeroes (might need more than
that, but probably not).

OK, I've done

dd if=/dev/zero of=zeros count=16

Then cat zeros >> 000D

Now I can start it up! Thanks!

What should I do now?

Chris

#9 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Christopher Kings-Lynne (#1)
Re: Disaster!

Message quoted from Tom Lane <tgl@sss.pgh.pa.us>:

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Now I can start it up! Thanks!

What should I do now?

Go home and get some sleep ;-). If the WAL replay succeeded, you're up
and running, nothing else to do.

Tom, could you give a small insight on what occurred here, why those 8k of zeros
fixed it, and what is a "WAL replay"?

I am very curious about it.

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
-------------------------------------------------------
Martín Marqués | Programmer, DBA
Centro de Telemática | Administrator
Universidad Nacional del Litoral
-------------------------------------------------------

#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christopher Kings-Lynne (#8)
Re: Disaster!

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Now I can start it up! Thanks!

What should I do now?

Go home and get some sleep ;-). If the WAL replay succeeded, you're up
and running, nothing else to do.

regards, tom lane

#11 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#10)
Re: Disaster!

What should I do now?

Go home and get some sleep ;-). If the WAL replay succeeded, you're up
and running, nothing else to do.

Cool, thanks heaps Tom.

Are you interested in real backtraces, any of the old data directory,
etc. to debug the problem?

Obviously it ran out of disk space, but surely postgres should be able
to start up somehow?

Chris

#12 Dann Corbit
DCorbit@connx.com
In reply to: Christopher Kings-Lynne (#11)
Re: Disaster!

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Friday, January 23, 2004 1:01 PM
To: Christopher Kings-Lynne
Cc: PostgreSQL-development
Subject: Re: [HACKERS] Disaster!

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Now I can start it up! Thanks!

What should I do now?

Go home and get some sleep ;-). If the WAL replay succeeded,
you're up and running, nothing else to do.

This seems a very serious problem, if a database can be broken [into a
non-startable condition] by running out of space.

Is it certain that no data was lost?

If it is totally safe to extend the WAL file with zeros and restart, why
not build it into PostgreSQL to do so automatically?

Can I get a 15 sentence speech on what happened, what the repair did,
and why we know that the result can be trusted?
I think it would reassure more than just myself.

#13 Rod Taylor
rbt@rbt.ca
In reply to: Tom Lane (#10)
Re: Disaster!

On Fri, 2004-01-23 at 16:00, Tom Lane wrote:

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Now I can start it up! Thanks!

What should I do now?

Go home and get some sleep ;-). If the WAL replay succeeded, you're up
and running, nothing else to do.

Granted, running out of diskspace is a bad idea, but can (has?)
something be put into place to prevent manual intervention from being
required in restarting the database?

#14 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christopher Kings-Lynne (#11)
Re: Disaster!

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Are you interested in real backtraces, any of the old data directory,
etc. to debug the problem?

If you could recompile with debug support and get a backtrace from the
panic, it would be helpful. I suspect what we need to do is make the
clog code more willing to interpret a zero-length read as 8K of zeroes
instead of an error, at least during recovery. But I kinda thought
there was such an escape hatch already, so I want to see exactly how
it got to the point of the failure.

Also, which PG version are you running exactly?

Obviously it ran out of disk space, but surely postgres should be able
to start up somehow?

See the response I'm about to write to Martín.

regards, tom lane

#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Martín Marqués (#9)
Re: Disaster!

Martín Marqués <martin@bugs.unl.edu.ar> writes:

Tom, could you give a small insight on what occurred here, why those
8k of zeros fixed it, and what is a "WAL replay"?

I think what happened is that there was insufficient space to write out
a new page of the clog (transaction commit) file. This would result in
a database panic, which is fine --- you're not gonna get much done
anyway if you are down to zero free disk space. However, after Chris
freed up space, the system needed to replay the WAL from the last
checkpoint to ensure consistency. The WAL entries evidently included
references to transactions whose commit bits were in the unwritten page.
Now there would also be WAL entries recording those commits, so once the
replay was complete everything would be cool. But the clog access code
evidently got confused by being asked to read a page that didn't exist
in the file. I'm not sure yet how that sequence of events occurred,
which is why I asked Chris for a stack trace.

Adding a page of zeroes fixed it by eliminating the read error
condition. It was okay to do so because zeroes is the correct initial
state for a clog page (all transactions in it "still in progress").
After WAL replay, any completed transactions would be updated in the page.

regards, tom lane
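Tom's explanation above can be made concrete with a little arithmetic. The sketch below is illustrative Python, not PostgreSQL source; the constants (2 status bits per transaction, 8 KB pages, 256 KB segment files) come from the layout discussed in this thread, and the function name is a label of convenience:

```python
# Illustrative sketch (not PostgreSQL source): map a transaction ID to
# its pg_clog segment file and byte offset, using the 7.4-era layout:
# 2 status bits per transaction, 8 KB pages, 256 KB (32-page) segments.
XACTS_PER_BYTE = 4                               # 8 bits / 2 bits per xact
PAGE_SIZE = 8192
XACTS_PER_PAGE = PAGE_SIZE * XACTS_PER_BYTE      # 32768
XACTS_PER_SEGMENT = XACTS_PER_PAGE * 32          # 1048576 (32 pages/segment)

def clog_location(xid):
    """Return (segment file name, byte offset of the page holding xid)."""
    segment = xid // XACTS_PER_SEGMENT
    page_in_segment = (xid % XACTS_PER_SEGMENT) // XACTS_PER_PAGE
    return f"{segment:04X}", page_in_segment * PAGE_SIZE

# Transaction 14286850 lands in segment 000D at offset 163840 -- exactly
# the file and offset named in the PANIC. The 000D file was 163840 bytes
# long, so the page it needed started one byte past EOF.
print(clog_location(14286850))  # ('000D', 163840)
```

This matches the ls output earlier in the thread: 000D was 163840 bytes, so the commit status of transaction 14286850 lived on the very page whose write was lost when the disk filled.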

#16 Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Martín Marqués (#9)
Re: Disaster!

On Fri, Jan 23, 2004 at 05:58:33PM -0300, Martín Marqués wrote:

Tom, could you give a small insight on what occurred here, why those 8k of zeros
fixed it, and what is a "WAL replay"?

If I may ...

- the disk filled up
- Postgres registered something in WAL that required some commit status
(WAL log space is preallocated on disk, so this didn't fail)
- the clog code tried to store information about the commit bits, but
noticed that it needed to extend the clog file.
- the extension failed because the disk was full
- the server went down and a WAL replay was in order, but ...
- the WAL replay could not be done because the code tried to read a
commit status in pg_clog that wasn't there

(some time later)

- Chris freed up some space and extended the clog
- WAL replay completed, reading an "uncommitted" status from the clog.

Here, "clog" is the "commit log", an area which indicates for each
transaction whether it committed or aborted. A WAL replay is the
operation of bringing the data files (tables, indexes, etc) up to date
by reading the "Write-ahead log" (WAL).

Tom's answer will undoubtedly be better ...

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"What difference does it make to the dead, the orphans, and those who have
lost their homes, whether the mad destruction is wrought under the name
of totalitarianism or in the holy name of liberty and democracy?" (Gandhi)
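The extension step in Alvaro's sequence is small enough to sketch. This hypothetical helper (the name `extend_clog_segment` is mine, not PostgreSQL's) is the Python equivalent of the dd/cat commands used elsewhere in the thread:

```python
# Hypothetical helper: append zeroed 8 KB pages to a clog segment file,
# the equivalent of the dd/cat repair used in this thread. Zero bytes are
# the correct initial state for a clog page: every transaction in it reads
# as "still in progress" until WAL replay rewrites the real commit bits.
import os

PAGE_SIZE = 8192

def extend_clog_segment(path, pages=1):
    """Append `pages` zeroed pages to the file; return its new size."""
    with open(path, "ab") as f:       # append mode: never overwrites data
        f.write(b"\x00" * (PAGE_SIZE * pages))
    return os.path.getsize(path)
```

As with the manual fix, you would copy the file (or the whole data directory, as Chris did) before touching anything.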

#17 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Rod Taylor (#13)
Re: Disaster!

Rod Taylor <pg@rbt.ca> writes:

Granted, running out of diskspace is a bad idea, but can (has?)
something be put into place to prevent manual intervention from being
required in restarting the database?

See subsequent discussion. I do want to modify the code to avoid this
problem in future, but we do not need to make Chris work on it at 5 AM
his time. It can wait till tomorrow.

regards, tom lane

#18 Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Tom Lane (#15)
Re: Disaster!

On Fri, Jan 23, 2004 at 04:21:04PM -0500, Tom Lane wrote:

But the clog access code evidently got confused by being asked to read
a page that didn't exist in the file. I'm not sure yet how that
sequence of events occurred, which is why I asked Chris for a stack
trace.

There was a very similar episode some time ago, and Elein summed it up
nicely in General Bits:

http://www.varlena.com/GeneralBits/45.php

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"The meaning of things does not come from the things themselves, but from
the intelligences that apply them to their daily problems in pursuit of
progress." (Ernesto Hernández-Novich)

#19 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#14)
Re: Disaster!

Tom Lane wrote:

Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:

Are you interested in real backtraces, any of the old data directory,
etc. to debug the problem?

If you could recompile with debug support and get a backtrace from the
panic, it would be helpful. I suspect what we need to do is make the
clog code more willing to interpret a zero-length read as 8K of zeroes
instead of an error, at least during recovery. But I kinda thought
there was such an escape hatch already, so I want to see exactly how
it got to the point of the failure.

Also, which PG version are you running exactly?

I asked him, 7.4.1.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

#20 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#16)
Re: Disaster!

Alvaro Herrera <alvherre@dcc.uchile.cl> writes:

Tom's answer will undoubtedly be better ...

Nope, I think you got all the relevant points.

The only thing I'd add after having had more time to think about it is
that this seems very much like the problem we noticed recently with
recovery-from-WAL being broken by the new code in bufmgr.c that tries to
validate the header fields of any page it reads in. We had to add an
escape hatch to disable that check while InRecovery, and I expect what
we will end up with here is a few lines added to slru.c to make it treat
read-past-EOF as a non-error condition when InRecovery. Now the clog
code has always had all that paranoid error checking, but because it
deals in such tiny volumes of data (only 2 bits per transaction), it's
unlikely to suffer an out-of-disk-space condition. That's why we hadn't
seen this failure mode before.

regards, tom lane
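The escape hatch Tom describes might look roughly like this. It is an illustrative Python sketch, not the actual slru.c change; the function name and shape are assumptions:

```python
# Illustrative sketch only -- not the actual slru.c code. During WAL
# recovery, a read past the end of a clog segment is answered with a
# zeroed page ("all transactions still in progress") instead of an
# error; replay of the commit records then fills in the real bits.
PAGE_SIZE = 8192

def read_clog_page(f, page_no, in_recovery):
    f.seek(page_no * PAGE_SIZE)
    data = f.read(PAGE_SIZE)
    if len(data) == PAGE_SIZE:
        return data                   # normal full-page read
    if in_recovery:
        # Short or empty read past EOF: pad with zeroes.
        return data + b"\x00" * (PAGE_SIZE - len(data))
    raise IOError(f"could not read page {page_no}: short read")
```

Outside recovery the paranoid error checking stays in force, which is the balance the later discussion settles on.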

#21 Gavin Sherry
swm@linuxworld.com.au
In reply to: Tom Lane (#20)
#22 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gavin Sherry (#21)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#22)
#24 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#23)
#25 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christopher Kings-Lynne (#24)
#26 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#25)
#27 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#25)
#28 Randolf Richardson
In reply to: Christopher Kings-Lynne (#1)
#29 Manfred Spraul
manfred@colorfullife.com
In reply to: Bruce Momjian (#26)
#30 Bruce Momjian
bruce@momjian.us
In reply to: Manfred Spraul (#29)
#31 Manfred Spraul
manfred@colorfullife.com
In reply to: Bruce Momjian (#30)
#32 Bruce Momjian
bruce@momjian.us
In reply to: Manfred Spraul (#31)
#33 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#32)
#34 Michael Glaesemann
grzm@seespotcode.net
In reply to: Tom Lane (#33)
#35 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Tom Lane (#33)
#36 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#23)
#37 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#33)
#38 Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Michael Glaesemann (#34)
#39 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Alvaro Herrera (#38)
#40 Gaetano Mendola
mendola@bigfoot.com
In reply to: Tom Lane (#33)
#41 Christoph Haller
ch@rodos.fzk.de
In reply to: Bruce Momjian (#36)
#42 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christoph Haller (#41)
#43
In reply to: Tom Lane (#25)
#44 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Randolf Richardson (#28)