Triggers, again.. ;-)

Started by Net Virtual Mailing Lists · about 21 years ago · 8 messages · general
#1 Net Virtual Mailing Lists
mailinglists@net-virtual.com

Hello,

I have asked about this before, but I just haven't been able to get
anywhere with it yet.. I'm hoping someone can help me?

Here is my original function and trigger:

CREATE OR REPLACE VIEW items_category AS
SELECT count(*) AS count,
       b.category,
       nlevel(b.category) AS level,
       subpath(b.category, 0, nlevel(b.category) - 1) AS parent,
       b.head_title, b.cat_title, b.subcat_title
FROM items a, category b
WHERE b.category @> a.category
  AND a.status = 'open'
GROUP BY b.category, b.head_title, b.cat_title, b.subcat_title
ORDER BY b.category;
-----------
CREATE OR REPLACE FUNCTION category_mv_refresh_row(ltree[]) RETURNS VOID
SECURITY DEFINER
LANGUAGE 'plpgsql' AS '
BEGIN
    DELETE FROM category_mv WHERE category @> $1;
    INSERT INTO category_mv
        SELECT * FROM items_category WHERE category @> $1;
    RETURN;
END
';
-----------
CREATE OR REPLACE FUNCTION update_ut() RETURNS TRIGGER
SECURITY DEFINER LANGUAGE 'plpgsql' AS '
BEGIN
    IF OLD.category = NEW.category THEN
        PERFORM category_mv_refresh_row(NEW.category);
    ELSE
        PERFORM category_mv_refresh_row(OLD.category);
        PERFORM category_mv_refresh_row(NEW.category);
    END IF;
    RETURN NULL;
END
';
-----------
CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
FOR EACH ROW EXECUTE PROCEDURE update_ut();

Now what I need is a way to make the call to category_mv_refresh_row
optional during the execution of update_ut.  I thought about changing
update_ut to something like:

CREATE OR REPLACE FUNCTION update_ut() RETURNS TRIGGER
SECURITY DEFINER LANGUAGE 'plpgsql' AS '
BEGIN
    IF skip_update IS NULL THEN
        IF OLD.category = NEW.category THEN
            PERFORM category_mv_refresh_row(NEW.category);
        ELSE
            PERFORM category_mv_refresh_row(OLD.category);
            PERFORM category_mv_refresh_row(NEW.category);
        END IF;
    END IF;
    RETURN NULL;
END
';

.. then somehow setting "skip_update" during my transaction.. Something like:
BEGIN
skip_update boolean := 't';
... insert rows
... update materialized view table
COMMIT;

But, I can't set skip_update like that I guess. Does anyone have any
idea how I might go about doing this?.. The reason is, I have a function
that updates the *entire* materialized view that takes about 15 seconds
to execute, but calling category_mv_refresh_row takes about 2 seconds.
When I am inserting thousands of rows, this results in an enormous
additional load on the database server (and takes far longer to execute).
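For what it's worth, newer PostgreSQL releases let you fake a per-transaction flag with a custom configuration setting. This is only a sketch: the name "myapp.skip_update" is made up, and the two-argument form of current_setting() (which returns NULL instead of erroring when the setting is unset) needs a reasonably recent server:

```sql
-- Sketch, assuming a server new enough for ad-hoc custom settings.
BEGIN;
SELECT set_config('myapp.skip_update', 'on', true);  -- third arg: local to this transaction
-- ... insert rows
-- ... update materialized view table
COMMIT;  -- the setting resets automatically here

-- And inside update_ut(), before doing anything else:
--   IF current_setting('myapp.skip_update', true) = 'on' THEN
--       RETURN NULL;   -- skip the per-row refresh
--   END IF;
```

Because the third argument to set_config() makes the value transaction-local, there is no flag to clean up if the bulk load aborts.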

Out of curiosity, is "DROP TRIGGER" transaction safe?... I mean, could I do:

BEGIN
DROP TRIGGER category_mv_ut ON items;
... insert rows
... update materialized view table
CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
FOR EACH ROW EXECUTE PROCEDURE update_ut();
COMMIT;

.. without other sessions being affected?

I hope I've explained this well enough!

Thanks as always!

- Greg

#2 Phil Endecott
spam_from_postgresql_general@chezphil.org
In reply to: Net Virtual Mailing Lists (#1)
Re: Triggers, again.. ;-)

Greg wrote:

> is "DROP TRIGGER" transaction safe?... I mean, could I do:
>
> BEGIN
> DROP TRIGGER category_mv_ut ON items;
> ... insert rows
> ... update materialized view table
> CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
> FOR EACH ROW EXECUTE PROCEDURE update_ut();
> COMMIT;
>
> .. without other sessions being affected?

This is exactly what I do to avoid matview bulk-update performance issues.

--Phil.

#3 Net Virtual Mailing Lists
mailinglists@net-virtual.com
In reply to: Phil Endecott (#2)
Re: Triggers, again.. ;-)

> Greg wrote:
>
>> is "DROP TRIGGER" transaction safe?... I mean, could I do:
>>
>> BEGIN
>> DROP TRIGGER category_mv_ut ON items;
>> ... insert rows
>> ... update materialized view table
>> CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
>> FOR EACH ROW EXECUTE PROCEDURE update_ut();
>> COMMIT;
>>
>> .. without other sessions being affected?
>
> This is exactly what I do to avoid matview bulk-update performance issues.
>
> --Phil.

Hello,

Are there any issues I should know about with this method?.... Will
other transactions be able to take place during this?... Any blocking
issues?..... I've never attempted anything like this and it seems sort-of
scary to me (at the very least, applying an awfully big hammer to the
problem).

Thanks for your help!

- Greg

#4 Phil Endecott
spam_from_postgresql_general@chezphil.org
In reply to: Net Virtual Mailing Lists (#3)
Re: Triggers, again.. ;-)

Greg asked:

> is "DROP TRIGGER" transaction safe?... I mean, could I do:
>
> BEGIN
> DROP TRIGGER category_mv_ut ON items;
> ... insert rows
> ... update materialized view table
> CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
> FOR EACH ROW EXECUTE PROCEDURE update_ut();
> COMMIT;
>
> .. without other sessions being affected?

I replied:

> This is exactly what I do to avoid matview bulk-update
> performance issues.

Greg then asked:

> Are there any issues I should know about with this method?.... Will
> other transactions be able to take place during this?... Any blocking
> issues?..... I've never attempted anything like this and it seems
> sort-of scary to me (at the very least, applying an awfully big hammer
> to the problem).

I am not an expert, but I use this technique. Maybe other users will
have some observations. But as I perceive it, the triggers currently in
force are recorded in a (system) table somewhere and that table has the
same well-behaved transactional semantics as other tables. So, as far
as other transactions are concerned, the triggers are unchanged and this
is entirely safe. My experience suggests that it is not inefficient.
As for locking, my guess is that another transaction that was also
trying to create or drop triggers could block, especially if it was
trying to change the same triggers, but other operations will be fine.

It seems less scary when you think of metadata as just being the content
of more tables, rather than something special.

Hopefully someone will correct me if it is worse than this!

--Phil.

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Phil Endecott (#4)
Re: Triggers, again.. ;-)

Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:

> It seems less scary when you think of metadata as just being the content
> of more tables, rather than something special.

PG does just fine with handling metadata changes transactionally.
However, most operations that affect a table's schema at all will take
an exclusive lock on the table, thereby blocking out other operations
on the table until the schema-altering operation commits. This could be
pretty annoying if you have lots of concurrent activity that needs to
keep going --- in particular the proposed approach would lock out access
to the underlying table for as long as it takes to update the
materialized view, since the DROP TRIGGER would take that exclusive lock
and it'd be held till end of transaction. If that's OK then there's
nothing wrong with doing it that way.

regards, tom lane
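(A side note for anyone reading this thread later: PostgreSQL 8.1 added ALTER TABLE ... DISABLE TRIGGER, which avoids dropping and recreating the trigger definition. A sketch of that variant of Phil's approach; note it still takes a strong lock on the table until commit, so Tom's caveat applies here too:)

```sql
-- Sketch: disable just this trigger instead of dropping it.
BEGIN;
ALTER TABLE items DISABLE TRIGGER category_mv_ut;
-- ... insert rows
-- ... update materialized view table
ALTER TABLE items ENABLE TRIGGER category_mv_ut;
COMMIT;
```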

#6 Phil Endecott
spam_from_postgresql_general@chezphil.org
In reply to: Tom Lane (#5)
Re: Triggers, again.. ;-)

Tom Lane wrote:

> Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:
>
>> It seems less scary when you think of metadata as just being the content
>> of more tables, rather than something special.
>
> PG does just fine with handling metadata changes transactionally.
> However, most operations that affect a table's schema at all will take
> an exclusive lock on the table, thereby blocking out other operations
> on the table until the schema-altering operation commits.  This could be
> pretty annoying if you have lots of concurrent activity that needs to
> keep going --- in particular the proposed approach would lock out access
> to the underlying table for as long as it takes to update the
> materialized view, since the DROP TRIGGER would take that exclusive lock
> and it'd be held till end of transaction.  If that's OK then there's
> nothing wrong with doing it that way.

Hi Tom,

I was hoping that my positive-sounding message would flush out any
problems...

I would understand this if I were doing an "ALTER TABLE", for example.
But does adding or removing a trigger really count as "schema-altering"?

--Phil.

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Phil Endecott (#6)
Re: Triggers, again.. ;-)

Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:

> I would understand this if I were doing an "ALTER TABLE", for example.
> But does adding or removing a trigger really count as "schema-altering"?

[ shrug... ] Hard to say. Probably depends a lot on what the trigger
does. I suppose we could at least reduce the lock from AccessExclusive
to Exclusive, which would allow concurrent readers (since SELECT by
definition doesn't fire any triggers).

No one's really gone through and taken a hard look at whether every
single DDL operation needs the maximum lock ...

regards, tom lane

#8 Net Virtual Mailing Lists
mailinglists@net-virtual.com
In reply to: Tom Lane (#7)
Re: Triggers, again.. ;-)

> Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:
>
>> I would understand this if I were doing an "ALTER TABLE", for example.
>> But does adding or removing a trigger really count as "schema-altering"?
>
> [ shrug... ]  Hard to say.  Probably depends a lot on what the trigger
> does.  I suppose we could at least reduce the lock from AccessExclusive
> to Exclusive, which would allow concurrent readers (since SELECT by
> definition doesn't fire any triggers).
>
> No one's really gone through and taken a hard look at whether every
> single DDL operation needs the maximum lock ...
>
> regards, tom lane

FYI, I did experience locking issues (my attempt to drop the trigger
resulted in other sessions blocking)...

In any event, I found a solution which at first seemed stupid, but works
so great I thought I'd share it...

All I did was add an extra column, "batch_process", to my table.  Then
in the trigger I do something like:

IF NEW.batch_process THEN
    NEW.batch_process := NULL;
    RETURN NULL;
END IF;
.. whatever the rest of the trigger does

Then when you want the trigger not to fire, just:

INSERT INTO table (..., batch_process) VALUES (..., 't');
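A fleshed-out version of that idea might look like the following; this sketch assumes the trigger is redefined as a BEFORE trigger (an AFTER trigger cannot modify NEW), and it returns NEW rather than NULL, since RETURN NULL in a BEFORE row trigger would discard the row itself:

```sql
CREATE OR REPLACE FUNCTION update_ut() RETURNS TRIGGER
SECURITY DEFINER LANGUAGE 'plpgsql' AS '
BEGIN
    IF NEW.batch_process THEN
        NEW.batch_process := NULL;  -- clear the flag so it is not stored
        RETURN NEW;                 -- keep the row, skip the refresh
    END IF;
    PERFORM category_mv_refresh_row(NEW.category);
    RETURN NEW;
END
';

CREATE TRIGGER category_mv_ut BEFORE INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE update_ut();
```

Clearing the flag inside the trigger means no row ever ends up stored with batch_process set, so later single-row updates behave normally.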

What do you guys think?.. Is this dumb?....

- Greg