Trying to Tune the DB

Started by Cristina Surroca · about 23 years ago · 3 messages · general
#1Cristina Surroca
cris@dmcid.net

Hi,

I'm doing a project and I have some problems.

My table has (ID, atr1, ..., atrN), but there isn't any primary key because of the specifications. It's very similar to a data warehouse, where every event is stored. There are no deletes, only inserts (more than 10^7) and many updates.

The thing is that it takes too much time to execute. (In my case, before each update I have to do a select to find which row I have to modify, and I suppose it does a table scan. To improve it, I've thought of using a B-tree index on the ID column.)
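The select-before-update pattern described above can use an index instead of a table scan with something like the following (the table name `events` is only illustrative; the real schema is just (ID, atr1, ..., atrN)):

```sql
-- Hypothetical table name; build a B-tree index (the default type) on the lookup column:
CREATE INDEX events_id_idx ON events (id);

-- Check whether the planner now uses an index scan instead of a sequential scan:
EXPLAIN SELECT * FROM events WHERE id = 42;
```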

I use JDBC. To make it work better, I use:

a. Prepared statements,
b. setAutoCommit(false) (in my case I don't need transactions),
c. I've been reading the PostgreSQL mailing list archives to try to discover other kinds of tuning, or how to improve it,
d. I've also read the FAQ and found "3.6) How do I tune the database engine for better performance".
But in my case, would you think vacuum and analyze are good options? Can I do anything else? Can I also disable the catalog?

Thanks a lot

yours,

Cris..

#2Manfred Koizar
mkoi-pg@aon.at
In reply to: Cristina Surroca (#1)
Re: Trying to Tune the DB

On Sat, 5 Apr 2003 16:15:37 +0200, "Cristina Surroca" <cris@dmcid.net>
wrote:

There are no deletes, only inserts (more than 10^7) and many updates.

10^7 inserts per day? If it has updates, it implicitly has deletes.
How *many* updates?

The thing is that it takes too much time to execute.

We can help you better if you show us some actual SQL.

b. setAutoCommit(false) (in my case I don't need transactions).

There is no I_dont_need_transactions mode in Postgres. You always
have transactions: either you control them, or Postgres performs each
statement in its own transaction. As far as I understand, autocommit
off means that you don't have to say BEGIN to start a transaction, and
the transaction stays active until you COMMIT. Your application or
JDBC might do this automatically.
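What autocommit off amounts to can be written out in plain SQL like this (a batching sketch with hypothetical statements, not Cristina's actual SQL):

```sql
BEGIN;                                        -- one explicit transaction
UPDATE events SET atr1 = 'x' WHERE id = 1;    -- instead of one implicit
UPDATE events SET atr1 = 'y' WHERE id = 2;    -- transaction per statement
-- ... many more inserts/updates ...
COMMIT;                                       -- all the work above becomes durable at once
```

Grouping many statements into one transaction avoids paying the per-transaction commit cost on every row.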

But in my case, would you think vacuum and analyze are good options?

Definitely!

Can I also disable the catalog?

I don't know what you mean, but I am pretty sure you can't.

Servus
Manfred

#3Nikolaus Dilger
nikolaus@dilger.cc
In reply to: Manfred Koizar (#2)
Re: Trying to Tune the DB

Cris,

In order to improve your performance, try:
1) VACUUM after inserting a significant amount of data
into your table.
2) Build an index on one or more of the columns used in
the WHERE clause when selecting which row to update.
B-tree works best if the column has a high cardinality,
i.e. many different values.
Indexes have a performance penalty during inserts but
help during the select.
3) If you have enough RAM, increase the number of shared
buffers.
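A minimal sketch of points 1) through 3), again with a hypothetical table name (and an example value for shared buffers; the right setting depends on your RAM):

```sql
-- 1) After a bulk load, reclaim dead row versions and refresh planner statistics:
VACUUM ANALYZE events;

-- 2) Index the column(s) used in the update's WHERE clause:
CREATE INDEX events_id_idx ON events (id);

-- 3) shared_buffers is set in postgresql.conf, e.g. "shared_buffers = 4096"
--    (counted in 8 kB buffers in 7.x), and takes effect after a restart.
```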

Regards,
Nikolaus Dilger
