Question on trigger

Started by veem v · almost 2 years ago · 6 messages · general
#1veem v
veema0000@gmail.com

Hi, we used to use an Oracle database in which audit triggers (something
like the one below) were mandated for all tables by the control team. Now
we are going to use PostgreSQL 15.4 for one of our applications, so I
wanted to understand whether there is any downside to such an audit
trigger setup on all the tables. Will it impact bulk
insert/update/delete, or slow down any of the DML operations
significantly (and thus be advisable only for selected tables rather than
for all of them)?

CREATE OR REPLACE TRIGGER tab_aud_trg
  BEFORE DELETE OR INSERT OR UPDATE
  ON tab
  FOR EACH ROW
BEGIN
  IF inserting THEN
    :NEW.create_timestamp := systimestamp;
    :NEW.create_userid    := sys_context('USERENV','SESSION_USER');
    :NEW.update_timestamp := systimestamp;
    :NEW.update_userid    := sys_context('USERENV','SESSION_USER');
  ELSIF updating THEN
    IF updating('create_userid') OR updating('create_timestamp') THEN
      :NEW.create_userid    := :OLD.create_userid;
      :NEW.create_timestamp := :OLD.create_timestamp;
    END IF;
    :NEW.update_timestamp := systimestamp;
    :NEW.update_userid    := sys_context('USERENV','SESSION_USER');
  END IF;
END;
/

Regards
Veem

#2Adrian Klaver
adrian.klaver@aklaver.com
In reply to: veem v (#1)
Re: Question on trigger

On 4/11/24 07:31, veem v wrote:

Hi, we used to use an Oracle database in which audit triggers (something
like the one below) were mandated for all tables by the control team.
Now we are going to use PostgreSQL 15.4 for one of our applications, so
I wanted to understand whether there is any downside to such an audit
trigger setup on all the tables. Will it impact bulk
insert/update/delete, or slow down any of the DML operations
significantly (and thus be advisable only for selected tables rather
than for all of them)?

Triggers are overhead in Postgres as they were in Oracle. If they
didn't cause an issue in Oracle, I would suspect that would also be the
case in Postgres. To confirm, you would need to create a test setup,
run some common operations, and see what the overhead is.
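A minimal test harness along those lines might look like the following (run in psql against a scratch database; table names, row counts, and the audit columns are illustrative, and the trigger you attach to tab_trig would be whichever audit trigger you intend to deploy):

```sql
-- Compare bulk-insert timing with and without an audit trigger attached.
\timing on

CREATE TABLE tab_plain (id int, val text);
CREATE TABLE tab_trig  (id  int,
                        val text,
                        create_timestamp timestamptz,
                        create_userid    text,
                        update_timestamp timestamptz,
                        update_userid    text);

-- (attach the audit trigger under test to tab_trig here)

-- Bulk-load a million rows into each table and compare the reported times.
INSERT INTO tab_plain
    SELECT g, 'x' FROM generate_series(1, 1000000) AS g;
INSERT INTO tab_trig (id, val)
    SELECT g, 'x' FROM generate_series(1, 1000000) AS g;
```

Running each INSERT a few times (and testing UPDATE/DELETE the same way) gives a rough per-row trigger overhead for your hardware.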

Some potential performance improvements:

https://www.postgresql.org/docs/current/sql-createtrigger.html

"...a trigger that is marked FOR EACH STATEMENT only executes once for
any given operation, regardless of how many rows it modifies (in
particular, an operation that modifies zero rows will still result in
the execution of any applicable FOR EACH STATEMENT triggers)."

<...>

"The REFERENCING option enables collection of transition relations,
which are row sets that include all of the rows inserted, deleted, or
modified by the current SQL statement. This feature lets the trigger see
a global view of what the statement did, not just one row at a time.
This option is only allowed for an AFTER trigger that is not a
constraint trigger; also, if the trigger is an UPDATE trigger, it must
not specify a column_name list. OLD TABLE may only be specified once,
and only for a trigger that can fire on UPDATE or DELETE; it creates a
transition relation containing the before-images of all rows updated or
deleted by the statement. Similarly, NEW TABLE may only be specified
once, and only for a trigger that can fire on UPDATE or INSERT; it
creates a transition relation containing the after-images of all rows
updated or inserted by the statement."

As an example:

https://www.postgresql.org/docs/current/plpgsql-trigger.html

Example 43.7. Auditing with Transition Tables
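For reference, here is a rough sketch of the Oracle trigger from the original post translated to PL/pgSQL; it assumes each audited table carries the same four audit columns. The DELETE case is dropped because the Oracle body does nothing for it, and the create_* columns are copied from OLD unconditionally on UPDATE, which has the same effect as Oracle's updating('...') checks:

```sql
-- Row-level audit trigger, PL/pgSQL sketch (column names assumed).
CREATE OR REPLACE FUNCTION tab_aud_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        NEW.create_timestamp := current_timestamp;
        NEW.create_userid    := session_user;
        NEW.update_timestamp := current_timestamp;
        NEW.update_userid    := session_user;
    ELSIF TG_OP = 'UPDATE' THEN
        -- Preserve the original creation audit values even if the
        -- statement tried to overwrite them.
        NEW.create_timestamp := OLD.create_timestamp;
        NEW.create_userid    := OLD.create_userid;
        NEW.update_timestamp := current_timestamp;
        NEW.update_userid    := session_user;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tab_aud_trg
    BEFORE INSERT OR UPDATE ON tab
    FOR EACH ROW EXECUTE FUNCTION tab_aud_fn();
```

One function can serve many tables, since a trigger function only references columns by name through NEW/OLD.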


--
Adrian Klaver
adrian.klaver@aklaver.com

#3veem v
veema0000@gmail.com
In reply to: Adrian Klaver (#2)
Re: Question on trigger

Thank you Adrian.

So it seems tables with heavy DML will see an impact if triggers
(mainly for-each-row triggers) are created on them.

Also, for bulk/array-based DML (which inserts multiple rows in one shot,
or one batch), it seems the trigger will prevent that, since a row-based
trigger forces the operation to happen row by row. Will test anyway
though.


#4Adrian Klaver
adrian.klaver@aklaver.com
In reply to: veem v (#3)
Re: Question on trigger

On 4/13/24 00:03, veem v wrote:

Thank you Adrian.

So it seems tables with heavy DML will see an impact if triggers
(mainly for-each-row triggers) are created on them.

Also, for bulk/array-based DML (which inserts multiple rows in one
shot, or one batch), it seems the trigger will prevent that, since a
row-based trigger forces the operation to happen row by row. Will test
anyway though.

You said you have triggers in the Oracle database, and I assumed they
worked and were not a show-stopping issue there. What makes you think
that would be different in Postgres?

What type of triggers were there in Oracle: per row, per statement, or a
mix?


--
Adrian Klaver
adrian.klaver@aklaver.com

#5veem v
veema0000@gmail.com
In reply to: Adrian Klaver (#4)
Re: Question on trigger

On Sat, 13 Apr 2024 at 21:44, Adrian Klaver <adrian.klaver@aklaver.com>
wrote:


You said you have triggers in the Oracle database, and I assumed they
worked and were not a show-stopping issue there. What makes you think
that would be different in Postgres?

What type of triggers were there in Oracle: per row, per statement, or a
mix?

Actually, we have row-level triggers in Oracle that run for
smaller-volume DML, and their presence makes direct-path inserts fall
back to conventional row-by-row inserts. So I was wondering whether in
Postgres we would encounter a similar issue and batch inserts might be
converted back to row-by-row automatically. And here we are going to
process higher-volume DML in the PostgreSQL database.

#6Adrian Klaver
adrian.klaver@aklaver.com
In reply to: veem v (#5)
Re: Question on trigger

On 4/16/24 12:39, veem v wrote:


Actually, we have row-level triggers in Oracle that run for
smaller-volume DML, and their presence makes direct-path inserts fall
back to conventional row-by-row inserts. So I was wondering

Not sure what the above means; you will need to provide a more detailed
description. Though any DML you do on a table that has any sort of
constraint, index, trigger, foreign key, default value, etc. is going to
have more overhead than DML into an unencumbered table. FYI, some of the
preceding are system triggers, for example foreign keys.

whether in Postgres we would encounter a similar issue and batch inserts
might be converted back to row-by-row automatically. And here we are
going to process higher-volume DML in the PostgreSQL database.

Hard to say with the information provided. The easiest way to find out
is to create a test setup and run the code. Though I guess, not having
actually tried this, you could have a per-row trigger and a
per-statement trigger for the same action, then disable the per-row
trigger and enable the per-statement trigger for batch operations. Once
the batch operation is done, reverse the process. Again, something to
test and verify.
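That switch could be sketched as below (trigger names are hypothetical, and the two triggers are assumed to already exist on the table; note that ALTER TABLE ... DISABLE TRIGGER takes a strong table lock, so the locking impact on concurrent sessions is something to test as well):

```sql
-- Swap a per-row audit trigger for a per-statement one around a batch job.
BEGIN;
ALTER TABLE tab DISABLE TRIGGER tab_aud_trg;        -- per-row trigger off
ALTER TABLE tab ENABLE  TRIGGER tab_aud_stmt_trg;   -- per-statement trigger on

-- ... run the batch operation here ...

ALTER TABLE tab ENABLE  TRIGGER tab_aud_trg;        -- restore normal state
ALTER TABLE tab DISABLE TRIGGER tab_aud_stmt_trg;
COMMIT;
```

Doing it inside one transaction keeps other sessions from seeing the table with no audit trigger active, at the cost of holding the lock for the duration of the batch.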

--
Adrian Klaver
adrian.klaver@aklaver.com