postgres getting slow under heavy load though autovacuum is enabled

Started by tamanna madaan · almost 17 years ago · 5 messages · pgsql-general
#1 tamanna madaan
tamanna.madan@globallogic.com

Hi

I am using postgres 8.1.2 with slony 1.1.5 for replication between two nodes. A very high number of db operations (2.8 million inserts, 1.4 million updates and 450,000 deletes) is done on the db in one transaction, and this is repeated 5-6 times a day at an interval of, say, 2 hours. This process runs for 5 consecutive days. I have observed that the db gets very slow over time. The number of dead tuples in the pg_listener, sl_log_1 and sl_seqlog tables keeps increasing, even though I have autovacuum enabled and the slony-related tables (sl_log_1, sl_seqlog etc.) are configured not to be processed by autovacuum. Please let me know what could be the reason for the increasing dead tuples in these tables and for postgres getting slow.

Thanks..
Tamanna
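For reference, one way to confirm and clear the bloat by hand on 8.1 is a manual VACUUM of the affected tables; the VERBOSE output reports how many dead tuples each table carried. This is only a sketch: the `_my_cluster` schema name is an assumption (Slony names its schema after the cluster), so substitute your own cluster name.

```sql
-- Reclaim dead rows and report counts; on 8.1, VACUUM VERBOSE is the
-- simplest way to see per-table dead-tuple totals.
VACUUM VERBOSE ANALYZE pg_catalog.pg_listener;
VACUUM VERBOSE ANALYZE _my_cluster.sl_log_1;   -- schema name is an assumption
VACUUM VERBOSE ANALYZE _my_cluster.sl_seqlog;
```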

#2 Grzegorz Jaśkiewicz
gryzman@gmail.com
In reply to: tamanna madaan (#1)
Re: postgres getting slow under heavy load though autovacuum is enabled

On Tue, Jun 9, 2009 at 9:37 AM, tamanna madaan <tamanna.madan@globallogic.com> wrote:

I am using postgres 8.1.2 with slony 1.1.5 used for replication between two nodes. [...] Please let me know what could be the reason of increasing dead tuples in these tables and postgres getting slow.

Increase the free space map (FSM) size in postgresql.conf.
Upgrade to the newest release of the 8.1.x branch.
The slony tables perhaps should be vacuumed too.
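On 8.1 the free space map is sized by two postgresql.conf settings, which is presumably what is meant here. A minimal sketch; the values are illustrative assumptions and should be sized from VACUUM VERBOSE output, not copied as-is:

```
# postgresql.conf (8.1.x) -- values are illustrative, not recommendations
max_fsm_pages     = 2000000   # must cover pages with free space across all tables
max_fsm_relations = 2000      # number of tables and indexes tracked in the FSM
```

A restart is required for these settings to take effect on 8.1.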

--
GJ

#3 Bill Moran
wmoran@potentialtech.com
In reply to: tamanna madaan (#1)
Re: postgres getting slow under heavy load though autovacuum is enabled

In response to "tamanna madaan" <tamanna.madan@globallogic.com>:

I am using postgres 8.1.2 with slony 1.1.5 used for replication between two nodes. [...] Please let me know what could be the reason of increasing dead tuples in these tables and postgres getting slow.

The slony docs state, and I quote:
"Long running transactions are Evil"
http://slony.info/documentation/slonyadmin.html#BESTPRACTICES

I'm going to guess that a single transaction with multiple millions of
tuple changes is about as evil as it gets.

This is a known shortcoming of Slony. You're going to need carefully
tuned vacuum, a well-designed schema, and adequate hardware to keep a
system like that healthy. That is, if you can't figure out a way to
avoid the huge transactions.
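As an illustration of breaking the work up, a delete of millions of rows can be committed in slices, which gives Slony a bounded sync to replicate each time and lets vacuum reclaim space between batches. The table and column names below are hypothetical; the pattern uses only features available on 8.1:

```sql
-- Repeat until DELETE reports 0 rows, committing between iterations.
-- Avoids CTEs and ctid tricks, which 8.1 does not have.
BEGIN;
DELETE FROM big_table
 WHERE id IN (SELECT id
                FROM big_table
               WHERE expired = true
               LIMIT 10000);
COMMIT;
```

The same slicing idea applies to the bulk inserts and updates: several moderate transactions replicate far more gracefully than one enormous one.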

Considering you mention nothing about your hardware, it's possible that
you simply don't have enough. We do some huge transactions over Slony
(although not into the millions per transaction) but we have enough
free RAM, free disk space, and free CPU cycles to clean up after it so
it's not hurting us.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

#4 Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Bill Moran (#3)
Re: postgres getting slow under heavy load though autovacuum is enabled

Bill Moran <wmoran@potentialtech.com> writes:

In response to "tamanna madaan" <tamanna.madan@globallogic.com>:

I am using postgres 8.1.2 with slony 1.1.5 used for replication between two nodes. [...] Please let me know what could be the reason of increasing dead tuples in these tables and postgres getting slow.

The slony docs state, and I quote:
"Long running transactions are Evil"
http://slony.info/documentation/slonyadmin.html#BESTPRACTICES

I'm going to guess that a single transaction with multi millions of
tuple changes is about as evil as it gets.

This is a known shortcoming of Slony. You're going to need carefully
tuned vacuum, well designed schema, adequate hardware, and clever
schema design to keep a system like that healthy. That is, if you can't
figure out a way to avoid the huge transactions.

Ok, if we're going to ask about the environment leading to the problem,
I have to point to this:
http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising

With Skytools (Londiste), you can set pgq_lazy_fetch so that the
replicas use cursors to consume big batches of events, and it runs
smoothly.
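For completeness, that setting lives in the Londiste consumer's ini file. A minimal sketch; the section layout follows the standard Skytools configuration, and the job name and batch size are illustrative assumptions:

```
# londiste.ini -- values are illustrative
[londiste]
job_name       = replica_worker
pgq_lazy_fetch = 500   # fetch events through a cursor in batches of 500
```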

HTH, Regards,
--
dim

#5 Vick Khera
vivek@khera.org
In reply to: Grzegorz Jaśkiewicz (#2)
Re: postgres getting slow under heavy load though autovacuum is enabled

2009/6/9 Grzegorz Jaśkiewicz <gryzman@gmail.com>:

The slony tables perhaps should be vacuumed too.

Slony vacuums its own tables as necessary.