Speed problems

Started by Warren Bell over 20 years ago (9 messages, general)
#1 Warren Bell
warren@clarksnutrition.com

I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Thanks,

Warren Bell

#2 Ben
bench@silentmedia.com
In reply to: Warren Bell (#1)
Re: Speed problems

When you say VACUUM do you really mean VACUUM ANALYZE? Have you tried a
simple ANALYZE?

I'm assuming, of course, that you have indexes that you want to use...
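For clarity, since these commands often get conflated (the table name is just a placeholder):

```sql
VACUUM mytable;          -- reclaims space held by dead row versions
ANALYZE mytable;         -- only refreshes the planner's statistics
VACUUM ANALYZE mytable;  -- does both in one pass
```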

Warren Bell wrote:


I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Thanks,

Warren Bell

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

http://archives.postgresql.org

#3 Scott Marlowe
smarlowe@g2switchworks.com
In reply to: Warren Bell (#1)
Re: Speed problems

On Tue, 2005-09-13 at 13:20, Warren Bell wrote:

I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

This sounds like classic table / index bloat.

Are you updating all 10,000 rows every 5 seconds? Good lord, that's a
lot of updates. If so, then do a vacuum immediately after the update
(or a delete), or change the system so it doesn't update every row every
time.

Next time, try a vacuum full instead of a drop and recreate and see if
that helps.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Vacuum this table more often. You might want to look at using the
autovacuum daemon to do this for you.

You might want to post a little more info on what, exactly, you're doing
to see if we can spot any obvious problems.
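Concretely, that might look like this (a sketch only; `your_table` stands in for the real table name):

```sql
-- One-time repair of an already-bloated table (takes an exclusive
-- lock, so do it during a quiet moment):
VACUUM FULL your_table;

-- Routine upkeep after each batch of updates/deletes, so freed space
-- gets reused instead of the table growing:
VACUUM ANALYZE your_table;
```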

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Warren Bell (#1)
Re: Speed problems

"Warren Bell" <warren@clarksnutrition.com> writes:

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

It sounds to me like the table needs to be vacuumed vastly more often
than you are doing. (You could confirm this by using VACUUM VERBOSE
and noting how big it says the table is physically --- what you need to
do is vacuum often enough to keep the table size in check.)
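For example (the output shown is illustrative and varies by version; the page count is the number to watch):

```sql
VACUUM VERBOSE mytable;
-- INFO:  vacuuming "public.mytable"
-- INFO:  removed 123456 tuples in 25000 pages
-- A 10,000-row table spread over tens of thousands of pages is badly
-- bloated; kept in check, it should need only a few hundred pages.
```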

You might consider setting up pg_autovacuum.

It's also worth asking whether you have indexes set up to handle your
common queries --- normally, only sequential-scan queries are really
sensitive to the physical table size.

regards, tom lane

#5 Warren Bell
warren@clarksnutrition.com
In reply to: Scott Marlowe (#3)
Re: Speed problems

On Tue, 2005-09-13 at 13:20, Warren Bell wrote:

I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

This sounds like classic table / index bloat.

Are you updating all 10,000 rows every 5 seconds? Good lord, that's a
lot of updates. If so, then do a vacuum immediately after the update
(or a delete), or change the system so it doesn't update every row every
time.

Next time, try a vacuum full instead of a drop and recreate and see if
that helps.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Vacuum this table more often. You might want to look at using the
autovacuum daemon to do this for you.

You might want to post a little more info on what, exactly, you're doing
to see if we can spot any obvious problems.

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend

I have three indexes on this table. One index is a 1 column, one index is a
5 column multi and one is a 2 column multi. I have run EXPLAIN ANALYZE on
all of my queries and they seem to be taking advantage of these indexes.

Would three indexes of this sort be considered "index bloat"?

I am updating no more than 200 records at a time. Here are some examples of
my queries:

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false

UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah' AND
int4_col_1 = 11

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
boolean_col_3 = false AND boolean_col_4 = false AND boolean_col_5 = false
AND boolean_col_6 = false

Is my use of indexes correct?

In the meantime, I will start using autovacuum or VACUUM.

Thanks for your help,

#6 Scott Marlowe
smarlowe@g2switchworks.com
In reply to: Warren Bell (#5)
Re: Speed problems

On Tue, 2005-09-13 at 21:01, Warren Bell wrote:

On Tue, 2005-09-13 at 13:20, Warren Bell wrote:

I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

This sounds like classic table / index bloat.

Are you updating all 10,000 rows every 5 seconds? Good lord, that's a
lot of updates. If so, then do a vacuum immediately after the update
(or a delete), or change the system so it doesn't update every row every
time.

Next time, try a vacuum full instead of a drop and recreate and see if
that helps.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Vacuum this table more often. You might want to look at using the
autovacuum daemon to do this for you.

You might want to post a little more info on what, exactly, you're doing
to see if we can spot any obvious problems.

I have three indexes on this table. One index is a 1 column, one index is a
5 column multi and one is a 2 column multi. I have run EXPLAIN ANALYZE on
all of my queries and they seem to be taking advantage of these indexes.

Would three indexes of this sort be considered "index bloat"?

No, index bloat is a different problem. In the days of yore, PostgreSQL
had a tendency to grow its indexes over time without reclaiming lost
space in them, which led to bloated indexes. (Back in the day, I once
had a 100k table with an 80 MB index after a while... now that is
bloat.)

Today, index bloat is generally not a problem, as vacuum can reclaim
much more space in an index than it once could. I'm guessing you're
suffering from a bloating of tables and indexes caused by not vacuuming
enough. Use a vacuum full once to clear up the bloated tables and
indexes, and then regularly scheduled plain vacuums to keep them at a
reasonable size.

I am updating no more than 200 records at a time. Here are some examples of
my queries:

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false

UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah' AND
int4_col_1 = 11

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
boolean_col_3 = false AND boolean_col_4 = false AND boolean_col_5 = false
AND boolean_col_6 = false

OK. But how many are you updating between regular vacuums? That's the
real issue. If your regular vacuums aren't often enough, postgresql
starts lengthening the tables instead of reusing the space in them that
was freed by the last updates / deletes.

Keep in mind, that in postgresql, all updates are really insert / delete
pairs, as far as storage is concerned. So, updates create dead tuples
just like deletes would.
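You can watch this happen via each row's physical address (ctid) on a throwaway table:

```sql
CREATE TABLE demo (id int, flag boolean);
INSERT INTO demo VALUES (1, false);
SELECT ctid FROM demo;                     -- e.g. (0,1)
UPDATE demo SET flag = true WHERE id = 1;
SELECT ctid FROM demo;                     -- e.g. (0,2): a new row version
                                           -- was written; the old one is
                                           -- now a dead tuple
VACUUM demo;                               -- makes that space reusable
DROP TABLE demo;
```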

Is my use of indexes correct?

Seems good to me.
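One thing you might experiment with, though (hypothetical index name, placeholder table name, using the column names you posted): since your WHERE clauses test boolean flags, a partial index can stay small no matter how big the table gets:

```sql
-- Indexes only the rows still waiting to be processed; rows with
-- boolean_col_2 = true take up no space in it at all.
CREATE INDEX pending_idx ON your_table (boolean_col_2)
    WHERE boolean_col_2 = false;
```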

#7 Warren Bell
warren@clarksnutrition.com
In reply to: Scott Marlowe (#6)
Re: Speed problems

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org]On Behalf Of Scott Marlowe
Sent: Wednesday, September 14, 2005 8:24 AM
To: Warren Bell
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Speed problems

On Tue, 2005-09-13 at 21:01, Warren Bell wrote:

On Tue, 2005-09-13 at 13:20, Warren Bell wrote:

I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

This sounds like classic table / index bloat.

Are you updating all 10,000 rows every 5 seconds? Good lord, that's a
lot of updates. If so, then do a vacuum immediately after the update
(or a delete), or change the system so it doesn't update every row every
time.

Next time, try a vacuum full instead of a drop and recreate and see if
that helps.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Vacuum this table more often. You might want to look at using the
autovacuum daemon to do this for you.

You might want to post a little more info on what, exactly, you're doing
to see if we can spot any obvious problems.

I have three indexes on this table. One index is a 1 column, one index is a
5 column multi and one is a 2 column multi. I have run EXPLAIN ANALYZE on
all of my queries and they seem to be taking advantage of these indexes.

Would three indexes of this sort be considered "index bloat"?

No, index bloat is a different problem. In the days of yore, PostgreSQL
had a tendency to grow its indexes over time without reclaiming lost
space in them, which led to bloated indexes. (Back in the day, I once
had a 100k table with an 80 MB index after a while... now that is
bloat.)

Today, index bloat is generally not a problem, as vacuum can reclaim
much more space in an index than it once could. I'm guessing you're
suffering from a bloating of tables and indexes caused by not vacuuming
enough. Use a vacuum full once to clear up the bloated tables and
indexes, and then regularly scheduled plain vacuums to keep them at a
reasonable size.

I am updating no more than 200 records at a time. Here are some examples of
my queries:

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false

UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah' AND
int4_col_1 = 11

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
boolean_col_3 = false AND boolean_col_4 = false AND boolean_col_5 = false
AND boolean_col_6 = false

OK. But how many are you updating between regular vacuums? That's the
real issue. If your regular vacuums aren't often enough, postgresql
starts lengthening the tables instead of reusing the space in them that
was freed by the last updates / deletes.

Keep in mind, that in postgresql, all updates are really insert / delete
pairs, as far as storage is concerned. So, updates create dead tuples
just like deletes would.

Is my use of indexes correct?

Seems good to me.


I have installed pg_autovacuum and also did a VACUUM FULL on the tables.
Speed has improved quite a bit.

Are there any set rules on what the pg_autovacuum -v and -V arguments should
be set to?

I went with the defaults.
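For reference, my reading of the contrib README (someone please correct me if I have this wrong) is that the two arguments feed a per-table threshold:

```shell
# pg_autovacuum decides to VACUUM a table once the number of changed
# rows exceeds roughly:
#     -v  +  -V * (rows in the table)
# So for small, frequently-updated tables, lowering both should trigger
# vacuums much sooner than the defaults. Illustrative values only:
#     pg_autovacuum -v 200 -V 0.5
```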

Thanks for your help,

#8 johnf
jfabiani@yolo.com
In reply to: Scott Marlowe (#6)
Re: Speed problems

On Wednesday 14 September 2005 08:23, Scott Marlowe wrote:

OK. But how many are you updating between regular vacuums? That's the
real issue. If your regular vacuums aren't often enough, postgresql
starts lengthening the tables instead of reusing the space in them that
was freed by the last updates / deletes.

Keep in mind, that in postgresql, all updates are really insert / delete
pairs, as far as storage is concerned. So, updates create dead tuples
just like deletes would.

Is my use of indexes correct?

Seems good to me.

OK, but this does not seem like a lot of records. Even if the user updated
500 times a day, 500 * 200 would only add 100,000 dead rows. I would not
expect performance to suffer from adding 100,000 per day for at least a
week. Even if the number were double (in case I misread the user's prior
emails), 200,000 per day, or 1,000,000 at the end of the week, would not
account for the slowdown. Or am I misreading?
John

#9 Scott Marlowe
smarlowe@g2switchworks.com
In reply to: johnf (#8)
Re: Speed problems

On Wed, 2005-09-14 at 21:06, John Fabiani wrote:

On Wednesday 14 September 2005 08:23, Scott Marlowe wrote:

OK. But how many are you updating between regular vacuums? That's the
real issue. If your regular vacuums aren't often enough, postgresql
starts lengthening the tables instead of reusing the space in them that
was freed by the last updates / deletes.

Keep in mind, that in postgresql, all updates are really insert / delete
pairs, as far as storage is concerned. So, updates create dead tuples
just like deletes would.

Is my use of indexes correct?

Seems good to me.

OK, but this does not seem like a lot of records. Even if the user updated
500 times a day, 500 * 200 would only add 100,000 dead rows. I would not
expect performance to suffer from adding 100,000 per day for at least a
week. Even if the number were double (in case I misread the user's prior
emails), 200,000 per day, or 1,000,000 at the end of the week, would not
account for the slowdown. Or am I misreading?

I think he was saying he updated 200 at a go, but he was doing a LOT of
updates each day. Not sure, I don't have the OP in my email client
anymore.