"Tuple too big" when the tuple is not that big...

Started by Paulo Jan about 25 years ago · 2 messages · general
#1 Paulo Jan
admin@digital.ddnet.es

Hi all:

I have a problem here, using Postgres 6.5.3 on Red Hat Linux 6.0. I
have a table where, each time I do a "vacuum analyze", the database
complains with "ERROR: Tuple is too big: size 10460"... and the
problem is that, as far as I know, there isn't any record that goes
beyond the 8K limit.
Some background: the table in question was initially created with a
"text" field, and it gave us endless problems (crashes, core dumps,
etc.). After searching the archives and finding a number of people
warning against using the "text" type (especially in the 6.x series), I
dumped the table contents (with COPY) and recreated it using
"varchar(8088)" instead. When importing the data back, Postgres didn't
say anything, and I assume that if any field had been bigger than
8K it would have complained. BUT... right after importing the data into
the brand new table, I try a "vacuum analyze" again and it does the same
thing.
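For reference, the dump-and-recreate workaround described above might look roughly like this (the table and column names here are hypothetical; the actual schema isn't shown in the thread):

```sql
-- Hypothetical sketch of the workaround: dump, recreate with
-- varchar(8088) instead of "text", and reload.
COPY mytable TO '/tmp/mytable.copy';   -- dump the existing rows

DROP TABLE mytable;

CREATE TABLE mytable (
    id    int4,
    body  varchar(8088)                -- instead of the troublesome "text"
);

COPY mytable FROM '/tmp/mytable.copy'; -- reload; 6.5.3 accepts this silently

VACUUM ANALYZE mytable;                -- still fails on 6.5.3 per the report
```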
Some other facts:

-"Vacuum" works fine. It's only "vacuum analyze" that gives problems.
-The table doesn't have any indices.
-Every time I try to do a "\d (table)", Postgres dumps core with a
"backend closed the channel unexpectedly" error.

Any ideas? (Aside from upgrading to 7.x; we can't do that for now.) Do
you need any other information?

Paulo Jan.
DDnet.

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Paulo Jan (#1)
Re: "Tuple too big" when the tuple is not that big...

Paulo Jan <admin@mail.ddnet.es> writes:

> I have a problem here, using Postgres 6.5.3 on a Red Hat Linux 6.0.

Upgrade to 7.0.3.

> Any ideas? (Aside of upgrading to 7.x; we can't do that for now).

If you insist on living with 6.5's bugs, this is one of 'em you'll have
to live with...

regards, tom lane