Re: [HACKERS] "CANNOT EXTEND" -

Started by Tim Perdue, 18 messages
#1 Tim Perdue
perdue@raccoon.com

I still cannot get pg_dump to work since I fixed the system yesterday. This
is a real mess and I need to make sure I have a current backup.

-I upgraded from 6.4 -> 6.4.2 and applied the 2GB patch
-I did "initdb" from the postgres user account
-I cannot get pg_dump to work:

-------
[tim@db /]$ pg_dump db_domain > /fireball/pg_dumps/db_domain.dump
pg_dump error in finding the template1 database
-------

At this point, the postmaster dies and restarts.

I think I'm getting to where I need some real help getting this thing to
dump again.

The database is up and running just fine - I just cannot dump.

Any tips or advice is GREATLY needed and appreciated at this point. All I
need is it to take a dump ;-)

Tim

tim@perdue.net

-----Original Message-----
From: Bruce Momjian <maillist@candle.pha.pa.us>
To: tim@perdue.net <tim@perdue.net>
Date: Monday, March 15, 1999 11:57 AM
Subject: Re: [SQL] Re: [HACKERS] URGENT -


From 6.4 -> 6.4.2

The production database is working well, but pg_dump doesn't work. Now
I'm worried that my database will corrupt again and I won't have it
backed up.

6.4 to 6.4.2 should work just fine, and the patch should not change
that. Are you saying the application of the patch caused the system to
be un-dumpable? Or was it perhaps the stopping of the postmaster? I
can work with you to get it dumpable again if needed.

--
Bruce Momjian                        |  http://www.op.net/~candle
maillist@candle.pha.pa.us            |  (610) 853-3000
+  If your life is a hard drive,     |  830 Blythe Avenue
+  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#2 Tim Perdue
perdue@raccoon.com
In reply to: Tim Perdue (#1)

Thanks Bruce.

Do you have any idea what happens with pg_dump when it hits 2GB??? Is it set
up to segment the files on linux? If not, I may have hit a brick wall here,
and have no way to back this baby up.

Tim

-----Original Message-----
From: Bruce Momjian <maillist@candle.pha.pa.us>
To: perdue@raccoon.com <perdue@raccoon.com>
Cc: pgsql-sql@hub.org <pgsql-sql@hub.org>
Date: Monday, March 15, 1999 7:53 PM
Subject: Re: [SQL] Re: [HACKERS] URGENT -

Call me at my signature phone number.

#3 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Tim Perdue (#2)


Thanks Bruce.

Do you have any idea what happens with pg_dump when it hits 2GB??? Is it set
up to segment the files on linux? If not, I may have hit a brick wall here,
and have no way to back this baby up.

pg_dump only dumps a flat unix file. That can be any size your OS
supports. It does not segment. However, a 2gig table will dump to a
much smaller version than 2gig because of the overhead for every record.
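For what it's worth, a quick way to see how big the flat dump would actually be, without writing any file at all, is to pipe it into wc; a minimal sketch, assuming the db_domain database from the first message:

# count the bytes pg_dump would write, without creating a file on disk
pg_dump db_domain | wc -c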

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#4 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Bruce Momjian (#3)

Bruce, it is failing on the first select:

-->findLastBuiltinOid(void) {
--> SELECT oid from pg_database where datname = 'template1'
}

template1=> SELECT oid from pg_database where datname = 'template1';
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally before or while processing the request.
We have lost the connection to the backend, so further processing is impossible.
Terminating.

That's where it bombs out.

Tim

Can someone suggest why this would be happening? The SELECT looks
pretty simple.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#5 D'Arcy J.M. Cain
darcy@druid.net
In reply to: Bruce Momjian (#4)

Thus spake Bruce Momjian

template1=> SELECT oid from pg_database where datname = 'template1';
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally before

Can someone suggest why this would be happening? The SELECT looks
pretty simple.

It works for me but I have the same thing happening on a database
where I do the following.

SELECT * INTO TABLE tempset FROM trash WHERE "Shape" ~* 'Prow' ;

The tempset table doesn't exist and trash is just a table that the
user is trying to grab a subselect from. I don't know if there is
a connection here, but it brought to mind a question I have been meaning
to ask: how do I read the pg_log file? I can't seem to find anything
that reads it. Also, what sort of stuff gets put into that log?

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.
#6 Peter T Mount
peter@retep.org.uk
In reply to: Bruce Momjian (#3)

On Mon, 15 Mar 1999, Bruce Momjian wrote:


Thanks Bruce.

Do you have any idea what happens with pg_dump when it hits 2GB??? Is it set
up to segment the files on linux? If not, I may have hit a brick wall here,
and have no way to back this baby up.

pg_dump only dumps a flat unix file. That can be any size your OS
supports. It does not segment. However, a 2gig table will dump to a
much smaller version than 2gig because of the overhead for every record.

Hmmm, I think that, as some people are now using >2Gig tables, we should
think of adding segmentation to pg_dump as an option, otherwise this is
going to become a real issue at some point.

Also, I think we could do with having some standard way of dumping and
restoring large objects.

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf

#7 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Peter T Mount (#6)

pg_dump only dumps a flat unix file. That can be any size your OS
supports. It does not segment. However, a 2gig table will dump to a
much smaller version than 2gig because of the overhead for every record.

Hmmm, I think that, as some people are now using >2Gig tables, we should
think of adding segmentation to pg_dump as an option, otherwise this is
going to become a real issue at some point.

So the OS never sees a table file over 2 gigs. Does anyone have a table
that dumps to a flat file over 2 gigs, on an OS that can't support files over 2
gigs? I've never heard of a complaint.

Also, I think we could do with having some standard way of dumping and
restoring large objects.

I need to add a separate large object type.
-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#8 Tim Perdue
perdue@raccoon.com
In reply to: Bruce Momjian (#7)

It won't be long for me until that happens. Not long at all. Considering
I've amassed 2.2 GB in just 3-4 weeks....

I'm really surprised to see that Linux has such a lame file limitation. I
think even the Macintosh can handle single files in the terabyte range now.

Tim

#9 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Tim Perdue (#8)


It won't be long for me until that happens. Not long at all. Considering
I've amassed 2.2 GB in just 3-4 weeks....

How large are your flat files? Also, the PostgreSQL problems are with
files that are exactly 2 gig. It is possible files over that will be OK.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#10 Tim Perdue
perdue@raccoon.com
In reply to: Bruce Momjian (#9)

The flat files are sitting at 1.9 GB right now (the primary email table).

So I'm approaching the point where I can't backup using the COPY command.

I'm probably using the wrong OS for this. But the performance of postgres is
still dazzling me (some people complain about the performance, but when I
see it pick 10,000 rows out of 1.4 million, sort them, and return them in a
second, I'm blown away).

Tim

#11 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Tim Perdue (#10)


The flat files are sitting at 1.9 GB right now (the primary email table).

So I'm approaching the point where I can't backup using the COPY command.

I'm probably using the wrong OS for this. But the performance of postgres is
still dazzling me (some people complain about the performance, but when I
see it pick 10,000 rows out of 1.4 million, sort them, and return them in a
second, I'm blown away).

Please see if you can create files over 2 gig. I believe it is only OS
bugs in dealing with files of exactly 2 gigs that are the problem.
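One way to test that is sketched below with dd; the 2100 MB size and the /fireball path are illustrative assumptions, not anything prescribed in the thread:

# try to write a file a little over 2 GB; on an OS or filesystem with a
# 2 GB limit, dd will stop with an error near the 2147483647-byte mark
dd if=/dev/zero of=/fireball/bigfile.test bs=1024k count=2100
ls -l /fireball/bigfile.test
rm -f /fireball/bigfile.test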

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#12 Hannu Krosing
hannu@trust.ee
In reply to: Bruce Momjian (#11)

Bruce Momjian wrote:

Please see if you can create files over 2 gig. I believe it is only OS
bugs in dealing with files of exactly 2 gigs that are the problem.

Also note that pg_dump can write to a pipe, so you can use it thus:

pg_dump megabase | split -b 500000k - megabase.dump.

createdb new_megabase

cat megabase.dump.* | psql new_megabase

to achieve space-saving, you can also pipe the thing through g(un)zip
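A sketch of that gzip variant, reusing the megabase example names above (the .gz. suffix is just an illustrative naming choice):

# dump, compress, and split into pieces small enough for the OS
pg_dump megabase | gzip | split -b 500000k - megabase.dump.gz.

# restore by concatenating, decompressing, and feeding psql
createdb new_megabase
cat megabase.dump.gz.* | gunzip | psql new_megabase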

---------------------
Hannu

#13 Zeugswetter Andreas IZ5
Andreas.Zeugswetter@telecom.at
In reply to: Hannu Krosing (#12)
AW: [HACKERS] "CANNOT EXTEND" -

So the OS never sees a table file over 2 gigs. Does anyone have a table
that dumps to a flat file over 2 gigs, on an OS that can't support files over 2
gigs? I've never heard of a complaint.

Probably because people dump to tape or to pipes that compress the dump
with gzip and split it with split -b? That is what I would do with my
backups:

pg_dump | gzip --fast | split -b512m - backup.monday.gz.

Andreas

#14 Zeugswetter Andreas IZ5
Andreas.Zeugswetter@telecom.at
In reply to: Zeugswetter Andreas IZ5 (#13)
AW: [HACKERS] "CANNOT EXTEND" -

copy tbl_name to 'file';

Have any ideas for splitting that?

I was going to answer with the following:

#!/bin/sh
mkfifo tapepipe.$$
( gzip --fast -c < tapepipe.$$ | split -b512m - tbl_name.unl. ) &
psql -c "copy tbl_name to 'tapepipe.$$'" regression
rm tapepipe.$$

but it does not work, since psql does not like the named pipe. So use:

psql -qc 'copy onek to stdout' regression | gzip --fast -c | split -b512m - onek.unl.

Andreas

#15 Zeugswetter Andreas IZ5
Andreas.Zeugswetter@telecom.at
In reply to: Zeugswetter Andreas IZ5 (#14)
AW: [HACKERS] "CANNOT EXTEND" -

psql -qc 'copy onek to stdout' regression | gzip --fast -c | split -b512m - onek.unl.

Is there any danger when you split these files? I'm worried about
corruption, etc.

On the contrary, gzip will notice file corruption when decompressing,
since it checks the CRC; you wouldn't notice it otherwise. We do our Informix
backups this way, and they are about 1-15 GB (compression factor ~ 1:4).
We haven't had problems since we started using this method (3 years now).

So what is the command to pull it back from the segments?

You get files with the suffixes aa, ab, ac, and so on.
If you make sure no other files that match the following mask are lurking around,
you simply restore with:

cat onek.unl.* | gzip -cd | psql -qc 'copy onek from stdin' regression
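Since gzip's CRC is what catches corruption here, the segments can also be verified before attempting a restore; a small sketch, assuming the same onek.unl.* naming:

# check the concatenated archive's CRC without decompressing to disk
cat onek.unl.* | gzip -t && echo "segments look intact"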

Andreas

#16 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Zeugswetter Andreas IZ5 (#15)

Bruce, for the most part, I have worked around the problems with
pg_dump.

But I am getting this error now, and it is worrisome:

NOTICE: Can't truncate multi-segments relation tbl_mail_archive
ERROR: Tuple is too big: size 10024

Can someone comment on this? It is coming from
./backend/storage/smgr/md.c in mdtruncate(). Is this table very large?

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#17 Peter Mount
petermount@it.maidstone.gov.uk
In reply to: Bruce Momjian (#16)
RE: [HACKERS] "CANNOT EXTEND" -

Is this from vacuum or pg_dump?

I'm just wondering if vacuum cannot cope with a table that is >1Gb
(hence in two segments), and when cleaned down, is <1Gb (one segment).

Saying that, why is it saying the tuple is too big? What is creating
that tuple?

#18 Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Peter Mount (#17)

Is this from vacuum or pg_dump?

I'm not sure about the original message. In my case, I got

NOTICE: Can't truncate multi-segments relation tbl_mail_archive

while doing vacuum on a multi-segment relation.

However I didn't get:

ERROR: Tuple is too big: size 10024

I'm not sure if I was just lucky.

I'm just wondering if vacuum cannot cope with a table that is >1Gb
(hence in two segments), and when cleaned down, is <1Gb (one segment).

I don't think vacuum is currently usable for a segmented relation.

Saying that, why is it saying the tuple is too big? What is creating
that tuple?

Seems that comes from RelationPutHeapTupleAtEnd(). It's saying the
requested tuple length is too big to fit into a page (10024 is
apparently bigger than 8192). Something very weird is going on...
---
Tatsuo Ishii