vacuumdb failed

Started by George Robinson II over 25 years ago · 4 messages · general
#1 George Robinson II
george.robinson@eurekabroadband.com

Last night, while my Perl script was doing a huge insert operation, I
got this error...

DBD::Pg::st execute failed: ERROR: copy: line 4857, pg_atoi: error
reading "2244904358": Result too large

Now, I'm not sure if this is related, but while trying to do vacuumdb
<dbname>, I got...

NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
referenced (private 0, global 1)
FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
connection to server was lost
vacuumdb: vacuum failed

Any ideas? I'm trying a couple of other things right now. By the way,
this database has one table that is HUGE. What is the limit on table
size in PostgreSQL 7? The FAQ says unlimited. If that's true, how do
you get around the 2GB file size limit that (at least) I have on
Solaris 2.6?

Thank you.

-g2

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: George Robinson II (#1)
Re: vacuumdb failed

George Robinson II <george.robinson@eurekabroadband.com> writes:
> Last night, while my Perl script was doing a huge insert operation, I
> got this error...
>
> DBD::Pg::st execute failed: ERROR: copy: line 4857, pg_atoi: error
> reading "2244904358": Result too large
>
> Now, I'm not sure if this is related, but while trying to do vacuumdb
> <dbname>, I got...
>
> NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
> referenced (private 0, global 1)
> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2

Probably not related. We've seen sporadic reports of this error in 7.0,
but it's been tough to get enough info to figure out the cause. If you
can find a reproducible way to create the block-is-referenced condition
we'd sure like to know about it!

As a quick-hack recovery, you should find that stopping and restarting the
postmaster will eliminate the VACUUM failure. The block-is-referenced
condition is not all that dangerous in itself; VACUUM is just being
paranoid about the possibility that someone is using the table that it
thinks it has an exclusive lock on.

> Any ideas? I'm trying a couple of other things right now. By the way,
> this database has one table that is HUGE. What is the limit on table
> size in PostgreSQL 7? The FAQ says unlimited. If that's true, how do
> you get around the 2GB file size limit that (at least) I have on
> Solaris 2.6?

We break tables into multiple physical files of 1Gb apiece.
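
(A minimal sketch of that arithmetic, assuming the 7.0-era defaults of
8192-byte blocks and 131072 blocks per segment file, with segments
named "name", "name.1", "name.2", and so on under the database
directory; illustrative only, not the storage-manager source:)

    #include <stdio.h>

    #define BLCKSZ      8192UL     /* default block size, in bytes */
    #define RELSEG_SIZE 131072UL   /* blocks per segment file: 1Gb each */

    int main(void)
    {
        unsigned long block = 171439;   /* block cited in the NOTICE above */

        /* 131072 * 8192 = 1073741824 bytes, so each segment file stays
         * safely under a 2Gb OS file-size limit like Solaris 2.6's. */
        printf("segment file size: %lu bytes\n", RELSEG_SIZE * BLCKSZ);
        printf("block %lu -> segment #%lu, offset %lu bytes\n",
               block, block / RELSEG_SIZE, (block % RELSEG_SIZE) * BLCKSZ);
        return 0;
    }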

regards, tom lane

#3 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Tom Lane (#2)
RE: vacuumdb failed

-----Original Message-----
From: Tom Lane

>> Now, I'm not sure if this is related, but while trying to do vacuumdb
>> <dbname>, I got...
>>
>> NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
>> referenced (private 0, global 1)
>> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
>
> Probably not related. We've seen sporadic reports of this error in 7.0,
> but it's been tough to get enough info to figure out the cause. If you
> can find a reproducible way to create the block-is-referenced condition
> we'd sure like to know about it!

The following is an example that doesn't release the reference count:
neither CommitTransaction() nor AbortTransaction() is called after '\q'.

begin;
declare myc cursor for select * from ..;
-- the cursor keeps its current row's buffer pinned (reference count held)
fetch in myc;
\q

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hiroshi Inoue (#3)
Re: vacuumdb failed

"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:

The following is an example which doesn't release the reference count.
Neither CommitTransaction() nor AbortTransaction() is called after '\q'

begin;
declare myc cursor for select * from ..;
fetch in myc;
\q

Hmm, you are right. Looks like normal exit from PostgresMain() should,
but does not, call AbortCurrentTransaction(). Any objections?
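
(A toy C model of the bug and the proposed fix -- every name below is
an illustrative stand-in, not backend source. It just shows why
aborting the still-open transaction on normal exit releases the pin
that a later VACUUM's FlushRelationBuffers would otherwise complain
about:)

    #include <stdio.h>

    static int buffer_refcount = 0;   /* stands in for a shared-buffer pin count */
    static int in_transaction  = 0;

    static void StartTransaction(void) { in_transaction = 1; }
    static void FetchFromCursor(void)  { buffer_refcount++; }  /* pins a buffer */

    static void AbortCurrentTransaction(void)
    {
        if (in_transaction) {
            buffer_refcount = 0;   /* dropping pins is part of transaction abort */
            in_transaction  = 0;
        }
    }

    int main(void)
    {
        StartTransaction();     /* begin; declare myc cursor for ... */
        FetchFromCursor();      /* fetch in myc; */

        /* Client sends \q.  Without the fix the backend exits here with
         * the pin still held, and a later VACUUM sees "block ... is
         * referenced". */
        AbortCurrentTransaction();   /* the missing call on normal exit */

        printf("refcount at backend exit: %d\n", buffer_refcount);  /* 0 */
        return 0;
    }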

regards, tom lane