Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Say guys,
I just noticed that RELSEG_SIZE still hasn't been reduced per the
discussion from early February. Let's make sure that doesn't slip
through the cracks, OK?
I think Peter Mount was supposed to be off testing this issue.
Peter, did you learn anything further?
We should probably apply the patch to REL6_4 as well...
regards, tom lane
I reposted the patch from home yesterday, as Bruce pointed it out in
another thread.
Peter
--
Peter T Mount, IT Section
petermount@it.maidstone.gov.uk
Anything I write here is my own view and cannot be taken as the
official words of Maidstone Borough Council
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Sunday, March 14, 1999 5:52 PM
To: pgsql-hackers@postgreSQL.org
Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Say guys,
I just noticed that RELSEG_SIZE still hasn't been reduced per the
discussion from early February. Let's make sure that doesn't slip
through the cracks, OK?
I think Peter Mount was supposed to be off testing this issue.
Peter, did you learn anything further?
We should probably apply the patch to REL6_4 as well...
regards, tom lane
Just a question. Does your patch let vacuum handle segmented tables?
--
Tatsuo Ishii
It simply reduces the size of each segment from 2GB to 1GB. The problem
was that some OSes (Linux in my case) don't like files exactly 2GB in
size. I don't know how vacuum interacts with the storage manager, but in
theory it should be transparent.
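For illustration, the block-to-segment mapping that a segmented storage manager has to do can be sketched like this. This is a rough sketch in Python, not the actual PostgreSQL storage-manager code; RELSEG_SIZE is counted in 8K blocks, so 131072 blocks per segment gives the 1GB files Peter describes:

```python
BLCKSZ = 8192            # bytes per block (the PostgreSQL page size)
RELSEG_SIZE = 131072     # blocks per segment: 131072 * 8192 bytes = 1GB

def segment_for_block(blocknum):
    """Return (segment number, byte offset within that segment's file)."""
    segno = blocknum // RELSEG_SIZE
    offset = (blocknum % RELSEG_SIZE) * BLCKSZ
    return segno, offset
```

Segment 0 is the base relation file and segment n is the file with suffix ".n", so the patch's effect is just that no single file ever reaches the 2GB boundary the kernel chokes on.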
--
Peter T Mount, IT Section
petermount@it.maidstone.gov.uk
Anything I write here is my own view and cannot be taken as the
official words of Maidstone Borough Council
-----Original Message-----
From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp]
Sent: Tuesday, March 16, 1999 1:41 AM
To: Peter Mount
Cc: Tom Lane; pgsql-hackers@postgreSQL.org
Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Just a question. Does your patch let vacuum handle segmented tables?
--
Tatsuo Ishii
Just a question. Does your patch let vacuum handle segmented tables?
--
Tatsuo Ishii
It simply reduces the size of each segment from 2GB to 1GB. The problem
was that some OSes (Linux in my case) don't like files exactly 2GB in
size. I don't know how vacuum interacts with the storage manager, but in
theory it should be transparent.
OK. So we still have the following problem:
test=> vacuum smallcat;
NOTICE: Can't truncate multi-segments relation smallcat
VACUUM
Maybe this should be added to TODO if it's not already there.
--
Tatsuo Ishii
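The NOTICE above comes from vacuum's shrink step only knowing how to truncate a single file: for a segmented relation it would instead have to truncate the last live segment in place and unlink every segment past it. A hypothetical sketch of that bookkeeping (the function name and signature are invented for illustration, not taken from the vacuum code):

```python
def truncate_plan(total_segments, nblocks, relseg_size=131072):
    """Plan how to shrink a segmented relation down to nblocks blocks.

    Returns (keep, unlink): keep is a list of (segment, blocks retained),
    unlink is the list of segment numbers to remove entirely.
    """
    last = nblocks // relseg_size     # segment holding the new last block
    keep, unlink = [], []
    for seg in range(total_segments):
        if seg < last:
            keep.append((seg, relseg_size))                   # stays full
        elif seg == last:
            keep.append((seg, nblocks - seg * relseg_size))   # truncate in place
        else:
            unlink.append(seg)            # past the new end: remove the file
    return keep, unlink
```

So shrinking a three-segment relation to just over one segment's worth of blocks keeps segments 0 and 1 and unlinks segment 2, which is exactly the multi-file case the current notice punts on.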
I am still not able to pg_dump my data after recovering from this disaster.
My files are now segmented at 1GB, vacuuming is fine, but pg_dump has a
problem "locating the template1 database".
I sure hope someone can help me come up with a way to safely back up this
data. Someone sent me a patch for the Linux kernel that will allow it to
handle files > 2GB, but that won't help me with my backup problems.
Thanks,
Tim Perdue
geocrawler.com
-----Original Message-----
From: Tatsuo Ishii <t-ishii@sra.co.jp>
To: Peter Mount <petermount@it.maidstone.gov.uk>
Cc: t-ishii@sra.co.jp <t-ishii@sra.co.jp>; Tom Lane <tgl@sss.pgh.pa.us>;
pgsql-hackers@postgreSQL.org <pgsql-hackers@postgreSQL.org>
Date: Tuesday, March 16, 1999 10:45 PM
Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Thus spake Tim Perdue
I am still not able to pg_dump my data after recovering from this disaster.
My files are now segmented at 1GB, vacuuming is fine, but pg_dump has a
problem "locating the template1 database".
I recall once creating a shell script that dumped a table. I can't
remember why I didn't use pg_dump but the shell script was pretty simple.
If you can read the data, you should be able to make this work. Worst
case you may have to handcraft each table dump but beyond that it should
work pretty good.
Here's how I would do it in Python. Requires the PostgreSQL module
found at http://www.druid.net/pygresql/.
#! /usr/bin/env python
from pg import DB

db = DB()  # opens the default database on the local machine - adjust as needed
for t in db.query("SELECT * FROM mytable").dictresult():
    print """INSERT INTO mytable (id, desc, val)
    VALUES (%(id)s, '%(desc)s', %(val)s);""" % t
Then feed that back into a new database. I assume you have the schema
backed up somewhere. You may have to get fancy with formatting special
characters and stuff. If you can still read the schema you may be able
to automate this more. It would depend on the number of tables in your
database and the complexity. See the get_tables() and get_attnames()
methods in PyGreSQL.
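The "formatting special characters" step can be handled with a small quoting helper rather than bare % formatting. This is a naive sketch (the function name is invented, and it deliberately ignores backslash and encoding issues):

```python
def sql_literal(value):
    """Render a Python value as a (naively) quoted SQL literal."""
    if value is None:
        return "NULL"
    if isinstance(value, str):
        # double any embedded single quotes so the INSERT stays parseable
        return "'" + value.replace("'", "''") + "'"
    return str(value)
```

With something like this, each INSERT's value list could be built with ", ".join(sql_literal(t[col]) for col in cols), which survives rows containing quotes or NULLs.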
--
D'Arcy J.M. Cain <darcy@{druid|vex}.net>  | Democracy is three wolves
http://www.druid.net/darcy/               | and a sheep voting on
+1 416 424 2871 (DoD#0082) (eNTP)         | what's for dinner.