Performance degrades until dump/restore
Hello,
I'm using PG 9.1. Data is streaming into one particularly large table (at
11 million rows currently) on a constant basis. It is pretty much all
inserts, very little updates or deletes (if any).
After a week or so, query performance on this table turns abysmal. If I
dump the db, then restore, performance turns great for a while before it
starts to bog down again in the following days.
So far, the only answer I can find by searching is to turn autovacuum on.
But it should already be on by default. The only setting that I have
modified was to increase shared_buffers.
I'm not really sure where to go from here. Anybody have any suggestions on
what the problem(s) might be and how to rectify it?
Thanks!
Is autovacuum running?
Are tables being analyzed from time to time?
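You can check with something like this (assuming you can connect with psql; the table name below is made up):

```sql
-- Last (auto)vacuum and (auto)analyze times for the big table.
-- NULLs in the last_auto* columns mean autovacuum hasn't touched it.
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'your_big_table';  -- substitute your table name
```

A large n_dead_tup relative to n_live_tup would suggest vacuum is falling behind.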
Best Wishes,
Chris Travers
what performance, insert or select?
On Thu, Apr 19, 2012 at 8:35 AM, Chris <bajasands@gmail.com> wrote:
Hello,
I'm using PG 9.1. Data is streaming into one particularly large table (at
11 million rows currently) on a constant basis. It is pretty much all
inserts, very little updates or deletes (if any).
After a week or so, query performance on this table turns abysmal. If I
dump the db, then restore, performance turns great for a while before it
starts to bog down again in the following days.
So far, the only answer I can find by searching is to turn autovacuum
on. But it should already be on by default. The only setting that I have
modified was to increase shared_buffers.
I'm not really sure where to go from here. Anybody have any suggestions on
what the problem(s) might be and how to rectify it?
Thanks!
--
"Patriotism is the conviction that your country is superior to all others
because you were born in it." -- George Bernard Shaw
Hi Chris,
It is postgres 9.1, with default settings. The autovacuum settings are all
commented out; I have not changed them. My understanding is that analyze is
also run automatically by default.
So, I believe the answer to both questions is 'Yes'.
On Thu, Apr 19, 2012 at 12:47 AM, Chris Travers <chris.travers@gmail.com> wrote:
Is autovacuum running?
Are tables being analyzed from time to time?
Best Wishes,
Chris Travers
On 4/19/2012 1:35 AM, Chris wrote:
Hello,
I'm using PG 9.1. Data is streaming into one particularly large table
(at 11 million rows currently) on a constant basis. It is pretty much
all inserts, very little updates or deletes (if any).
After a week or so, query performance on this table turns abysmal. If I
dump the db, then restore, performance turns great for a while before it
starts to bog down again in the following days.
Can you post an "explain analyze" when it's fast, and then again when it's
slow?
Are you leaving transactions open? (Does "select * from
pg_stat_activity" show any "idle in transaction"?)
Does memory usage increase during the week so it starts using swap?
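For example, something along these lines should show long-open transactions (a sketch; on 9.1 they appear as '<IDLE> in transaction' in the current_query column):

```sql
-- Find backends sitting idle inside an open transaction,
-- which block vacuum from reclaiming dead rows.
SELECT procpid, usename, backend_start, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```

An old xact_start here would explain why dead tuples keep accumulating even with autovacuum running.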
-Andy
On 19 April 2012 16:09, Chris <bajasands@gmail.com> wrote:
Hi Chris,
It is postgres 9.1, with default settings. The autovacuum settings are all
commented out; I have not changed them. My understanding is that analyze is
also run automatically by default.
So, I believe the answer to both questions is 'Yes'.
Most likely autovacuum is falling behind with default settings. If
that's the case, increasing the frequency with which it checks that
particular table should help.
Or you can explicitly run VACUUM ANALYZE after a batch of inserts finished.
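For instance, per-table autovacuum settings can be tightened like this (a sketch; the table name is made up and the thresholds are just a starting point, defaults being 0.2 and 0.1):

```sql
-- Make autovacuum visit this one table more often by lowering
-- the fraction of changed rows needed to trigger it.
ALTER TABLE your_big_table SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);

-- Or run it by hand after a large batch of inserts:
VACUUM ANALYZE your_big_table;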
--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
Read these two wiki pages and then include more info.
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Hi Andy,
Thanks for your reply. I don't know the answers to your questions off the
top of my head, so you have given me some things to look into (open
transactions, memory swap, etc).
Now I am suspecting that memory is a partial culprit as I just discovered
that rebooting (hadn't had the opportunity to do that) also seems to boost
query performance to a lesser degree. I'm going to take a hard look at that.
It will take me several days to come up with an "explain analyze" when it's
fast and when it's slow, so I'll come back with that when I have it ready
if I have not resolved the issue by then. Good idea.
Can you post an "explain analyze" when it's fast, and then again when it's
slow?
Are you leaving transactions open? (Does "select * from pg_stat_activity"
show any "idle in transaction"?)
Does memory usage increase during the week so it starts using swap?
-Andy
That's a great idea Alban. I think that between your suggestion and
looking into memory & swap, we may be on to something. I'll post back with
more details (per the wiki suggestions) if I'm not able to get this
resolved.
Thanks!
On Thu, Apr 19, 2012 at 9:40 AM, Alban Hertroys <haramrae@gmail.com> wrote:
On 19 April 2012 16:09, Chris <bajasands@gmail.com> wrote:
Hi Chris,
It is postgres 9.1, with default settings. The autovacuum settings are all
commented out; I have not changed them. My understanding is that analyze is
also run automatically by default.
So, I believe the answer to both questions is 'Yes'.
Most likely autovacuum is falling behind with default settings. If
that's the case, increasing the frequency with which it checks that
particular table should help.
Or you can explicitly run VACUUM ANALYZE after a batch of inserts finished.
--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.