a "huge" table with small rows and columns
hi,
I have been a Postgres user for several months, but recently I found a "mysterious" thing
in my /usr/local/pgsql/data/base/The_Database_Name/The_table_name.
In a particular table named "fred" there are only 50 rows and 13 columns. The column types are int,
float, and varchar, where the varchar's size is restricted to 50. But the size of
/usr/local/pgsql/data/base/The_Database_Name/fred is more than 20M, and it keeps growing while the
row count is still 50.
I am using Linux (Slackware), with Postgres as the database, JOnAS as the EJB server, and Tomcat and Apache
as the web server. The "fred" table is constantly looked up (not updated) by users.
Thanks in advance for any suggestions.
Cheers
Feng
Do you do a lot of updates and/or deletes? If so, you need to VACUUM your
database. When rows are modified, they are not actually deleted; they are
just flagged to be ignored.
VACUUM cleans up those rows.
Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
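What Adam describes can be sketched from the shell; this assumes the database and table names from the original post, and that psql can reach the running server:

```shell
# Check the on-disk size of the table file before cleaning up:
ls -lh /usr/local/pgsql/data/base/The_Database_Name/fred

# Reclaim the space held by dead (updated/deleted) row versions:
psql The_Database_Name -c "VACUUM VERBOSE fred;"

# The file should now be considerably smaller:
ls -lh /usr/local/pgsql/data/base/The_Database_Name/fred
```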
----- Original Message -----
From: "Feng Xue" <feng@axe.net.au>
To: <pgsql-general@postgresql.org>
Sent: Monday, December 18, 2000 4:59 PM
Subject: [GENERAL] a "huge" table with small rows and columns
On Tue, 19 Dec 2000, Feng Xue wrote:
In a particular table named "fred" there are only 50 rows and 13 columns. The column types are int,
float, and varchar, where the varchar's size is restricted to 50. But the size of
/usr/local/pgsql/data/base/The_Database_Name/fred is more than 20M, and it keeps growing while the
row count is still 50.
You need to run VACUUM on the table to 'compress' it -- when data is
deleted from a table, it is actually only marked as deleted and no longer
used, with a new row version being used instead. Running the vacuum will
remove the data marked as deleted and shrink the table file size down. It
will also improve performance.
-- Brett
http://www.chapelperilous.net/~bmccoy/
---------------------------------------------------------------------------
((lambda (foo) (bar foo)) (baz))
hi, Adam and all,
Thanks for your reply, but the table is not being updated or deleted from constantly; it is only looked
up frequently by other programs.
Actually, this table is accessed via JDBC from a JOnAS EJB entity bean, and it is constantly looked
up. After I use "ls -al" I can see that the timestamp of
/usr/local/pgsql/data/base/The_data_base_name/The_huge_table is updated as users access that table.
I have no idea why the timestamp keeps being updated and the size keeps growing even though there
are no updates and there are only 50 rows.
Thanks for the suggestion.
Cheers
feng
feng@axe.net.au
Axe Online Pty. Ltd.
T (02) 9437 0920
F (02) 9437 0261
----- Original Message -----
From: "Adam Lang" <aalang@rutgersinsurance.com>
To: <pgsql-general@postgresql.org>
Sent: Tuesday, December 19, 2000 9:06 AM
Subject: Re: a "huge" table with small rows and columns
"Feng Xue" <feng@axe.net.au> writes:
Actually, this table is accessed via JDBC from a JOnAS EJB entity bean, and it is constantly looked
up. After I use "ls -al" I can see that the timestamp of
/usr/local/pgsql/data/base/The_data_base_name/The_huge_table is updated as users access that table.
I have no idea why the timestamp keeps being updated and the size keeps growing even though there
are no updates and there are only 50 rows.
Clearly, the table *is* being updated, whether you think it is or not.
You might want to check to see just what queries that bean is really
issuing. One way to do that is to restart the postmaster with -d2
and look to see what queries get logged. (Don't forget to redirect
the postmaster's stdout/stderr to some appropriate log file, and do
*not* use -S in the postmaster switches.)
Another possible source of unexpected updates is forgotten rules or
triggers.
regards, tom lane
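Tom's procedure can be sketched roughly as follows, assuming a stock /usr/local/pgsql layout and the 7.0-era pg_ctl/postmaster tools; the log path is a placeholder:

```shell
# Stop the running postmaster, then restart it with query debugging (-d 2)
# and stdout/stderr redirected to a log file. Do NOT pass -S (silent mode),
# or the debug output is discarded.
pg_ctl stop
postmaster -d 2 -D /usr/local/pgsql/data > /tmp/pgsql.log 2>&1 &

# After exercising the application, look for unexpected UPDATE queries:
grep -i update /tmp/pgsql.log
```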
hi, Brett and all,
after I used the VACUUM command to clean up the database, the table size shrank from 25M to 0.5M, which
is a BIG IMPROVEMENT!
I am now trying to find out why the size of this table keeps growing while there are no inserts,
deletes, or changes in the table (using "select * from table" I cannot find any changes). Other
people's suggestion may be right: this table is being updated, but I cannot see it.
Thanks a lot for the advice.
Merry Christmas and Happy New Year.
feng
feng@axe.net.au
Axe Online Pty. Ltd.
T (02) 9437 0920
F (02) 9437 0261
----- Original Message -----
From: "Brett W. McCoy" <bmccoy@chapelperilous.net>
To: "Feng Xue" <feng@axe.net.au>
Cc: <pgsql-general@postgresql.org>
Sent: Tuesday, December 19, 2000 9:32 AM
Subject: Re: [GENERAL] a "huge" table with small rows and columns
On Tue, 19 Dec 2000, Feng Xue wrote:
after I used the VACUUM command to clean up the database, the table size
shrank from 25M to 0.5M, which is a BIG IMPROVEMENT!
If this is a production database that is being used by a lot of people,
you should run the vacuum on a regular basis, like maybe overnight as a
cron job.
-- Brett
http://www.chapelperilous.net/~bmccoy/
---------------------------------------------------------------------------
History is curious stuff
You'd think by now we had enough
Yet the fact remains I fear
They make more of it every year.
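A minimal crontab sketch of Brett's suggestion, assuming the standard vacuumdb wrapper script and a hypothetical database name "mydb"; add the line with "crontab -e" as the postgres user:

```shell
# Run every night at 03:00, appending the output to a log for inspection:
0 3 * * * vacuumdb mydb >> /var/log/pgsql-vacuum.log 2>&1
```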
On Tuesday 19 December 2000 17:31, you wrote:
On Tue, 19 Dec 2000, Feng Xue wrote:
after I used the VACUUM command to clean up the database, the table size
shrank from 25M to 0.5M, which is a BIG IMPROVEMENT!
If this is a production database that is being used by a lot of people,
you should run the vacuum on a regular basis, like maybe overnight as a
cron job.
That's a really bad idea if you have 7.0.x. Deadlocks are possible. :-(((
--
Sincerely Yours,
Denis Perchine
----------------------------------
E-Mail: dyp@perchine.com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------
Denis Perchine wrote (19 Dec 00,):
On Tuesday 19 December 2000 17:31, you wrote:
If this is a production database that is being used by a lot of people,
you should run the vacuum on a regular basis, like maybe overnight as a
cron job.
That's a really bad idea if you have 7.0.x. Deadlocks are possible. :-(((
Hey, I thought this was the standard method?! What do you suggest
instead?
Esa Pikkarainen
On Tue, 19 Dec 2000, Denis Perchine wrote:
If this is a production database that is being used by a lot of people,
you should run the vacuum on a regular basis, like maybe overnight as a
cron job.
That's a really bad idea if you have 7.0.x. Deadlocks are possible. :-(((
What do you suggest instead?
-- Brett
http://www.chapelperilous.net/~bmccoy/
---------------------------------------------------------------------------
Bahdges? We don't need no stinkin' bahdges!
-- "The Treasure of Sierra Madre"
If this is a production database that is being used by a lot of people,
you should run the vacuum on a regular basis, like maybe overnight as a
cron job.
That's a really bad idea if you have 7.0.x. Deadlocks are possible. :-(((
Hey, I thought this was the standard method?! What do you suggest
instead?
Yes, this is the standard method. But there is a really bad deadlock inside the
7.0.x code: if you have a high load on the database, it is possible that vacuum
will get stuck waiting for a lock while some other backend needs a lock that
vacuum already holds. This was fixed in current CVS a month ago, but there is
no patch for 7.0.x, and I am not sure enough of my knowledge of PostgreSQL
internals to fix it myself. If you are not worried, you can run 7.1.x (I would
not recommend this).
All of the above means there is no real solution. If your database is not
heavily loaded you will never see such problems. I saw them every day until I
removed vacuum from crontab. I do it manually now.
--
Sincerely Yours,
Denis Perchine
----------------------------------
E-Mail: dyp@perchine.com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------
On Tue, 19 Dec 2000, Denis Perchine wrote:
Yes, this is the standard method. But there is a really bad deadlock inside the
7.0.x code: if you have a high load on the database, it is possible that vacuum
will get stuck waiting for a lock while some other backend needs a lock that
vacuum already holds. This was fixed in current CVS a month ago, but there is
no patch for 7.0.x, and I am not sure enough of my knowledge of PostgreSQL
internals to fix it myself. If you are not worried, you can run 7.1.x (I would
not recommend this).
All of the above means there is no real solution. If your database is not
heavily loaded you will never see such problems. I saw them every day until I
removed vacuum from crontab. I do it manually now.
This is why I suggested doing it overnight, when there may be little or no
load on the database. If it's the backend to a website that could possibly
be used 24/7, obviously this can be a problem, unless you schedule and
announce a short downtime once a week for maintenance, or however often you
need to vacuum.
Probably a good idea to run a backup during the same downtime, before the
vacuum is run. :-)
-- Brett
http://www.chapelperilous.net/~bmccoy/
---------------------------------------------------------------------------
Each of us bears his own Hell.
-- Publius Vergilius Maro (Virgil)
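Brett's downtime routine (back up first, then vacuum) can be sketched as a single weekly crontab entry; "mydb" and the backup path are placeholders, and note that % must be escaped inside crontab lines:

```shell
# Sunday 03:00: dump the database, and vacuum only if the dump succeeded.
0 3 * * 0 pg_dump mydb > /backup/mydb-$(date +\%Y\%m\%d).sql && vacuumdb mydb
```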
Hi,
AFAIK, when an entity EJB is instantiated, it is read from the DB.
Similarly, when it is destroyed, it is serialized back to the DB again.
That is an update, which causes your table to grow.
Maarten
----
Maarten Boekhold, maarten.boekhold@reuters.com
Reuters Consulting
Entrada 308
1096 ED Amsterdam
The Netherlands
tel: +31 (0)20-6601000 (switchboard)
+31 (0)20-6601066 (direct)
+31 (0)20-6601005 (fax)
+31 (0)651585137 (mobile)
Feng Xue <feng@axe.net.au>
18/12/00 23:55
To: pgsql-general@postgresql.org
cc: (bcc: Maarten Boekhold/PAL/US/Reuters)
Subject: [GENERAL] Re: a "huge" table with small rows and culumns