time penalties on triggers?
Does anyone know, or can anyone point to information about, how much
triggers penalize inserts in PG tables? I'm getting a report that the
penalty is substantial, and I'd like a baseline before I investigate
further. The triggers in question look like:
CREATE OR REPLACE FUNCTION touch_lasttouched() RETURNS TRIGGER AS
$touch_lasttouched$
BEGIN
    NEW.lasttouched = NOW();
    RETURN NEW;
END;
$touch_lasttouched$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trigger_touch_ams_faults_data
    ON ams_faults_and_failures_data;

CREATE TRIGGER trigger_touch_ams_faults_data
    BEFORE INSERT OR UPDATE
    ON ams_faults_and_failures_data
    FOR EACH ROW
    EXECUTE PROCEDURE touch_lasttouched();
Here the "lasttouched" column of the table is declared as:

    lasttouched TIMESTAMP(6)

and is intentionally left nullable.
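As a quick sanity check of what the trigger does (table and column
names as above; the "fault_code" column and its value are made up for
illustration):

```sql
-- lasttouched can be omitted (or set to NULL) on insert; the BEFORE
-- trigger fills it in with now() before the row is stored.
INSERT INTO ams_faults_and_failures_data (fault_code) VALUES ('F001');
SELECT lasttouched FROM ams_faults_and_failures_data;
```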
On 10/4/07, Jan Theodore Galkowski <bayesianlogic@acm.org> wrote:
Does anyone know, or can anyone point to information about, how much
triggers penalize inserts in PG tables? I'm getting a report that the
penalty is substantial, and I'd like a baseline before I investigate further.
Using your DDL, with slight variations, I came up with this:

create table ttest2 (i serial, lasttouched timestamp);
\timing

-- First test, with no primary key and no now():
Time: 414.226 ms
-- Now with a primary key:
Time: 1180.121 ms
-- No pk, with now():
Time: 792.013 ms
-- pk and now():
Time: 1240.343 ms
-- Now with a trigger, primary key, and of course now() from the trigger:
Time: 4823.051 ms
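For reference, the driver behind timings like these was not shown, so
this is an assumption: a bulk insert via generate_series is a common
way to produce them under \timing.

```sql
-- Hypothetical reconstruction of the test driver (the actual insert
-- statement used for the timings above was not posted).
create table ttest2 (i serial, lasttouched timestamp);

\timing
insert into ttest2 (lasttouched)
select null from generate_series(1, 10000);
```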
The basic rule of thumb is that if you need fast triggers, you write them in C.
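For comparison, a C version of the same trigger might look roughly like
the sketch below. This is an untested outline: it must be built against
the PostgreSQL server headers (it will not compile standalone), the
function name touch_lasttouched_c is made up, and details such as header
locations and the exact heap_modify_tuple spelling vary across server
versions.

```c
#include "postgres.h"
#include "commands/trigger.h"   /* TriggerData, CALLED_AS_TRIGGER */
#include "executor/spi.h"       /* SPI_fnumber */
#include "utils/timestamp.h"    /* GetCurrentTimestamp */

PG_MODULE_MAGIC;
PG_FUNCTION_INFO_V1(touch_lasttouched_c);

Datum
touch_lasttouched_c(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    TupleDesc    tupdesc;
    HeapTuple    tuple;
    int          attnum;
    int          natts;
    Datum       *values;
    bool        *isnull;
    bool        *replace;

    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "touch_lasttouched_c: not called by trigger manager");

    /* For BEFORE UPDATE the row to hand back is tg_newtuple; for
     * BEFORE INSERT it is tg_trigtuple. */
    if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
        tuple = trigdata->tg_newtuple;
    else
        tuple = trigdata->tg_trigtuple;

    tupdesc = trigdata->tg_relation->rd_att;
    attnum = SPI_fnumber(tupdesc, "lasttouched");
    if (attnum <= 0)
        elog(ERROR, "touch_lasttouched_c: no lasttouched column");

    /* Build replacement arrays, marking only lasttouched for change. */
    natts   = tupdesc->natts;
    values  = (Datum *) palloc0(natts * sizeof(Datum));
    isnull  = (bool *)  palloc0(natts * sizeof(bool));
    replace = (bool *)  palloc0(natts * sizeof(bool));

    values[attnum - 1] = TimestampGetDatum(GetCurrentTimestamp());
    replace[attnum - 1] = true;

    /* Copy of the tuple with lasttouched replaced by the current time. */
    tuple = heap_modify_tuple(tuple, tupdesc, values, isnull, replace);

    return PointerGetDatum(tuple);
}
```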
Jan Theodore Galkowski wrote:
Does anyone know, or can anyone point to information about, how much
triggers penalize inserts in PG tables? I'm getting a report that the
penalty is substantial, and I'd like a baseline before I investigate
further. The triggers in question look like:
I use triggers everywhere. About a year ago I ran some very simple
tests: thousands of inserts into tables, then the same thousands of
inserts with all triggers disabled.
What I found was that the largest factor in performance was the size of
the row being inserted. That makes sense, since Postgres uses MVCC and
every insert writes out a complete new row version.
I also found it very hard to pin down the penalty of the trigger, but
came up with rough figures of 30-50% overhead. The complexity of the
trigger did not matter.
This all made sense given that writing to disk takes vastly more time
than executing trigger code, so the penalty appears to be mostly the
fixed overhead of firing the trigger rather than the work done inside it.
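That with/without comparison can be reproduced with ALTER TABLE ...
DISABLE TRIGGER, using the table and trigger names from the original
post (the repeated-insert driver itself is elided here):

```sql
-- Time a batch of inserts with the trigger firing, then disable the
-- trigger and repeat the same batch:
ALTER TABLE ams_faults_and_failures_data
    DISABLE TRIGGER trigger_touch_ams_faults_data;

-- ... rerun the same inserts under \timing and compare ...

-- Re-enable the trigger when done:
ALTER TABLE ams_faults_and_failures_data
    ENABLE TRIGGER trigger_touch_ams_faults_data;
```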
--
Kenneth Downs
Secure Data Software, Inc.
www.secdat.com www.andromeda-project.org
631-689-7200 Fax: 631-689-0527
cell: 631-379-0010