Statement-level Triggers For Uniqueness Checks
In digging around the codebase (see thread: Referential Integrity Checks
with Statement-level Triggers), I noticed that unique constraints are
similarly enforced with a per-row trigger.
The situation with unique indexes is fairly similar to the situation with
RI checks: there is some overhead to using a transition table, but that
overhead may be less than the cost of firing a trigger once per row
inserted/updated.
However, there are some significant differences (apologies to everyone
already familiar with this part of the code, it's new to me).
For one, there is no analog to RI_Initial_Check(). Instead, the constraint
is initially validated by building (or finding) the unique index that
enforces it.
Then, the actual lookup done in unique_key_recheck() has to contend with the
intricacies of HOT updates, so I don't know whether that can be expressed as
an SPI query. Even if it can't, I think it should be possible to iterate over
the EphemeralNamedRelation, and that alone would pay off in reduced trigger
calls.
I'm going to be working on this as a POC patch separate from the RI work,
hence the separate thread, but there's obviously a lot of overlap.
All advice is appreciated.
So I took a first pass at this, and I got stuck.
The basic design is that instead of creating one row-level trigger per
deferrable unique constraint, we instead create one insert-statement level
trigger and one update-statement level trigger. Both call the function
unique_key_recheck(), which now attempts to walk the inserted transition
table, doing basically the same checks that were done in the per-row
trigger. I'm hoping for some performance advantage for large row
inserts/updates due to N-1 fewer triggers firing and N-1 attempts to lock
the unique index.
The regression diff (attached) seems to imply that the triggers simply are
not firing, though. I have verified that the triggers are created:
test=# CREATE TEMPORARY TABLE test ( x integer PRIMARY KEY DEFERRABLE
INITIALLY DEFERRED );
CREATE TABLE
test=# SELECT * FROM pg_trigger WHERE tgrelid = 'test'::regclass;
oid | tgrelid | tgname | tgfoid | tgtype |
tgenabled | tgisinternal | tgconstrrelid | tgconstrindid | tgconstraint | t
gdeferrable | tginitdeferred | tgnargs | tgattr | tgargs | tgqual |
tgoldtable | tgnewtable
-------+---------+------------------------------+--------+--------+-----------+--------------+---------------+---------------+--------------+--
------------+----------------+---------+--------+--------+--------+------------+------------------------------
16392 | 16387 | PK_ConstraintTrigger_i_16392 | 1250 | 4 | O
| t | 0 | 16390 | 16391 | t
| t | 0 | | \x | |
| pg_inserted_transition_table
16393 | 16387 | PK_ConstraintTrigger_u_16393 | 1250 | 16 | O
| t | 0 | 16390 | 16391 | t
| t | 0 | | \x | |
| pg_inserted_transition_table
(2 rows)
Any idea where I went wrong?
On Mon, Dec 17, 2018 at 9:56 AM Corey Huinker <corey.huinker@gmail.com>
wrote:
Attachments:
regression.diffs
*** /home/vagrant/src/postgres/src/test/regress/expected/constraints.out 2018-12-24 23:19:00.678004003 +0000
--- /home/vagrant/src/postgres/src/test/regress/results/constraints.out 2018-12-24 23:19:07.150004003 +0000
***************
*** 475,482 ****
BEGIN;
-- default is immediate so this should fail right away
UPDATE unique_tbl SET i = 1 WHERE i = 0;
- ERROR: duplicate key value violates unique constraint "unique_tbl_i_key"
- DETAIL: Key (i)=(1) already exists.
ROLLBACK;
-- check is done at end of statement, so this should succeed
UPDATE unique_tbl SET i = i+1;
--- 475,480 ----
***************
*** 532,553 ****
BEGIN;
INSERT INTO unique_tbl VALUES (3, 'Three'); -- should succeed for now
COMMIT; -- should fail
- ERROR: duplicate key value violates unique constraint "unique_tbl_i_key"
- DETAIL: Key (i)=(3) already exists.
-- make constraint check immediate
BEGIN;
SET CONSTRAINTS ALL IMMEDIATE;
INSERT INTO unique_tbl VALUES (3, 'Three'); -- should fail
- ERROR: duplicate key value violates unique constraint "unique_tbl_i_key"
- DETAIL: Key (i)=(3) already exists.
COMMIT;
-- forced check when SET CONSTRAINTS is called
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO unique_tbl VALUES (3, 'Three'); -- should succeed for now
SET CONSTRAINTS ALL IMMEDIATE; -- should fail
- ERROR: duplicate key value violates unique constraint "unique_tbl_i_key"
- DETAIL: Key (i)=(3) already exists.
COMMIT;
-- test deferrable UNIQUE with a partitioned table
CREATE TABLE parted_uniq_tbl (i int UNIQUE DEFERRABLE) partition by range (i);
--- 530,545 ----
***************
*** 566,579 ****
INSERT INTO parted_uniq_tbl VALUES (1);
SAVEPOINT f;
INSERT INTO parted_uniq_tbl VALUES (1); -- unique violation
- ERROR: duplicate key value violates unique constraint "parted_uniq_tbl_1_i_key"
- DETAIL: Key (i)=(1) already exists.
ROLLBACK TO f;
SET CONSTRAINTS parted_uniq_tbl_i_key DEFERRED;
INSERT INTO parted_uniq_tbl VALUES (1); -- OK now, fail at commit
COMMIT;
- ERROR: duplicate key value violates unique constraint "parted_uniq_tbl_1_i_key"
- DETAIL: Key (i)=(1) already exists.
DROP TABLE parted_uniq_tbl;
-- test a HOT update that invalidates the conflicting tuple.
-- the trigger should still fire and catch the violation
--- 558,567 ----
***************
*** 581,588 ****
INSERT INTO unique_tbl VALUES (3, 'Three'); -- should succeed for now
UPDATE unique_tbl SET t = 'THREE' WHERE i = 3 AND t = 'Three';
COMMIT; -- should fail
- ERROR: duplicate key value violates unique constraint "unique_tbl_i_key"
- DETAIL: Key (i)=(3) already exists.
SELECT * FROM unique_tbl;
i | t
---+-------
--- 569,574 ----
***************
*** 591,597 ****
5 | one
4 | two
2 | four
! (5 rows)
-- test a HOT update that modifies the newly inserted tuple,
-- but should succeed because we then remove the other conflicting tuple.
--- 577,587 ----
5 | one
4 | two
2 | four
! 3 | THREE
! 3 | THREE
! 3 | THREE
! 3 | THREE
! (9 rows)
-- test a HOT update that modifies the newly inserted tuple,
-- but should succeed because we then remove the other conflicting tuple.
***************
*** 606,613 ****
5 | one
4 | two
2 | four
3 | threex
! (5 rows)
COMMIT;
SELECT * FROM unique_tbl;
--- 596,607 ----
5 | one
4 | two
2 | four
+ 3 | THREE
+ 3 | THREE
+ 3 | THREE
+ 3 | THREE
3 | threex
! (9 rows)
COMMIT;
SELECT * FROM unique_tbl;
***************
*** 617,624 ****
5 | one
4 | two
2 | four
3 | threex
! (5 rows)
DROP TABLE unique_tbl;
--
--- 611,622 ----
5 | one
4 | two
2 | four
+ 3 | THREE
+ 3 | THREE
+ 3 | THREE
+ 3 | THREE
3 | threex
! (9 rows)
DROP TABLE unique_tbl;
--
***************
*** 668,688 ****
INSERT INTO deferred_excl VALUES(1);
INSERT INTO deferred_excl VALUES(2);
INSERT INTO deferred_excl VALUES(1); -- fail
- ERROR: conflicting key value violates exclusion constraint "deferred_excl_con"
- DETAIL: Key (f1)=(1) conflicts with existing key (f1)=(1).
INSERT INTO deferred_excl VALUES(1) ON CONFLICT ON CONSTRAINT deferred_excl_con DO NOTHING; -- fail
ERROR: ON CONFLICT does not support deferrable unique constraints/exclusion constraints as arbiters
BEGIN;
INSERT INTO deferred_excl VALUES(2); -- no fail here
COMMIT; -- should fail here
- ERROR: conflicting key value violates exclusion constraint "deferred_excl_con"
- DETAIL: Key (f1)=(2) conflicts with existing key (f1)=(2).
BEGIN;
INSERT INTO deferred_excl VALUES(3);
INSERT INTO deferred_excl VALUES(3); -- no fail here
COMMIT; -- should fail here
- ERROR: conflicting key value violates exclusion constraint "deferred_excl_con"
- DETAIL: Key (f1)=(3) conflicts with existing key (f1)=(3).
-- bug #13148: deferred constraint versus HOT update
BEGIN;
INSERT INTO deferred_excl VALUES(2, 1); -- no fail here
--- 666,680 ----
***************
*** 693,700 ****
f1 | f2
----+----
1 |
2 | 2
! (2 rows)
ALTER TABLE deferred_excl DROP CONSTRAINT deferred_excl_con;
-- This should fail, but worth testing because of HOT updates
--- 685,695 ----
f1 | f2
----+----
1 |
+ 1 |
+ 3 |
+ 3 |
2 | 2
! (5 rows)
ALTER TABLE deferred_excl DROP CONSTRAINT deferred_excl_con;
-- This should fail, but worth testing because of HOT updates
======================================================================
*** /home/vagrant/src/postgres/src/test/regress/expected/triggers.out 2018-12-23 20:41:32.506001000 +0000
--- /home/vagrant/src/postgres/src/test/regress/results/triggers.out 2018-12-24 23:19:07.834004003 +0000
***************
*** 2237,2247 ****
group by tgrelid::regclass order by tgrelid::regclass;
tgrelid | count
---------------+-------
! trg_clone | 1
! trg_clone1 | 1
! trg_clone2 | 1
! trg_clone3 | 1
! trg_clone_3_3 | 1
(5 rows)
drop table trg_clone;
--- 2237,2247 ----
group by tgrelid::regclass order by tgrelid::regclass;
tgrelid | count
---------------+-------
! trg_clone | 2
! trg_clone1 | 2
! trg_clone2 | 2
! trg_clone3 | 2
! trg_clone_3_3 | 2
(5 rows)
drop table trg_clone;
======================================================================
0001-Refactor-per-row-unique-key-deferred-constraint-trig.patch
From 829cbc0c03bc3b51b12aaff3feb0161366a9d03b Mon Sep 17 00:00:00 2001
From: vagrant <vagrant@vagrant.vm>
Date: Mon, 24 Dec 2018 23:40:17 +0000
Subject: [PATCH] Refactor per-row unique key deferred constraint triggers into
per-statement triggers. The function unique_key_recheck() will instead walk
the transition table and individually check rows. The hope is that this will
result in reduced locking and a speed improvement for operations with many
rows affected.
---
src/backend/catalog/index.c | 68 ++++++++----
src/backend/commands/constraint.c | 169 ++++++++++++++++--------------
src/backend/commands/trigger.c | 9 --
3 files changed, 136 insertions(+), 110 deletions(-)
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 8709e8c22c..7a8da5b1d9 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1363,26 +1363,54 @@ index_constraint_create(Relation heapRelation,
*/
if (deferrable)
{
- CreateTrigStmt *trigger;
-
- trigger = makeNode(CreateTrigStmt);
- trigger->trigname = (constraintType == CONSTRAINT_PRIMARY) ?
- "PK_ConstraintTrigger" :
- "Unique_ConstraintTrigger";
- trigger->relation = NULL;
- trigger->funcname = SystemFuncName("unique_key_recheck");
- trigger->args = NIL;
- trigger->row = true;
- trigger->timing = TRIGGER_TYPE_AFTER;
- trigger->events = TRIGGER_TYPE_INSERT | TRIGGER_TYPE_UPDATE;
- trigger->columns = NIL;
- trigger->whenClause = NULL;
- trigger->isconstraint = true;
- trigger->deferrable = true;
- trigger->initdeferred = initdeferred;
- trigger->constrrel = NULL;
-
- (void) CreateTrigger(trigger, NULL, RelationGetRelid(heapRelation),
+ CreateTrigStmt *insert_trigger, *update_trigger;
+
+ TriggerTransition *ins = makeNode(TriggerTransition);
+ ins->name = "pg_inserted_transition_table";
+ ins->isNew = true;
+ ins->isTable = true;
+
+ insert_trigger = makeNode(CreateTrigStmt);
+ insert_trigger->trigname = (constraintType == CONSTRAINT_PRIMARY) ?
+ "PK_ConstraintTrigger_i" :
+ "Unique_ConstraintTrigger_i";
+ insert_trigger->relation = NULL;
+ insert_trigger->funcname = SystemFuncName("unique_key_recheck");
+ insert_trigger->args = NIL;
+ insert_trigger->row = false;
+ insert_trigger->timing = TRIGGER_TYPE_AFTER;
+ insert_trigger->events = TRIGGER_TYPE_INSERT;
+ insert_trigger->columns = NIL;
+ insert_trigger->whenClause = NULL;
+ insert_trigger->isconstraint = true;
+ insert_trigger->deferrable = true;
+ insert_trigger->initdeferred = initdeferred;
+ insert_trigger->constrrel = NULL;
+ insert_trigger->transitionRels = list_make1(ins);
+
+ (void) CreateTrigger(insert_trigger, NULL, RelationGetRelid(heapRelation),
+ InvalidOid, conOid, indexRelationId, InvalidOid,
+ InvalidOid, NULL, true, false);
+
+ update_trigger = makeNode(CreateTrigStmt);
+ update_trigger->trigname = (constraintType == CONSTRAINT_PRIMARY) ?
+ "PK_ConstraintTrigger_u" :
+ "Unique_ConstraintTrigger_u";
+ update_trigger->relation = NULL;
+ update_trigger->funcname = SystemFuncName("unique_key_recheck");
+ update_trigger->args = NIL;
+ update_trigger->row = false;
+ update_trigger->timing = TRIGGER_TYPE_AFTER;
+ update_trigger->events = TRIGGER_TYPE_UPDATE;
+ update_trigger->columns = NIL;
+ update_trigger->whenClause = NULL;
+ update_trigger->isconstraint = true;
+ update_trigger->deferrable = true;
+ update_trigger->initdeferred = initdeferred;
+ update_trigger->constrrel = NULL;
+ update_trigger->transitionRels = list_make1(ins);
+
+ (void) CreateTrigger(update_trigger, NULL, RelationGetRelid(heapRelation),
InvalidOid, conOid, indexRelationId, InvalidOid,
InvalidOid, NULL, true, false);
}
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index b0b2cb2a14..c5d2d1f983 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -45,7 +45,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
IndexInfo *indexInfo;
EState *estate;
ExprContext *econtext;
- TupleTableSlot *slot;
+ TupleTableSlot *slot = NULL;
Datum values[INDEX_MAX_KEYS];
bool isnull[INDEX_MAX_KEYS];
@@ -61,54 +61,18 @@ unique_key_recheck(PG_FUNCTION_ARGS)
funcname)));
if (!TRIGGER_FIRED_AFTER(trigdata->tg_event) ||
- !TRIGGER_FIRED_FOR_ROW(trigdata->tg_event))
+ !TRIGGER_FIRED_FOR_STATEMENT(trigdata->tg_event))
ereport(ERROR,
(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
- errmsg("function \"%s\" must be fired AFTER ROW",
+ errmsg("function \"%s\" must be fired AFTER STATEMENT",
funcname)));
- /*
- * Get the new data that was inserted/updated.
- */
- if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
- new_row = trigdata->tg_trigtuple;
- else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
- new_row = trigdata->tg_newtuple;
- else
- {
+ if (! TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) &&
+ ! TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
ereport(ERROR,
(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
errmsg("function \"%s\" must be fired for INSERT or UPDATE",
funcname)));
- new_row = NULL; /* keep compiler quiet */
- }
-
- /*
- * If the new_row is now dead (ie, inserted and then deleted within our
- * transaction), we can skip the check. However, we have to be careful,
- * because this trigger gets queued only in response to index insertions;
- * which means it does not get queued for HOT updates. The row we are
- * called for might now be dead, but have a live HOT child, in which case
- * we still need to make the check --- effectively, we're applying the
- * check against the live child row, although we can use the values from
- * this row since by definition all columns of interest to us are the
- * same.
- *
- * This might look like just an optimization, because the index AM will
- * make this identical test before throwing an error. But it's actually
- * needed for correctness, because the index AM will also throw an error
- * if it doesn't find the index entry for the row. If the row's dead then
- * it's possible the index entry has also been marked dead, and even
- * removed.
- */
- tmptid = new_row->t_self;
- if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
- {
- /*
- * All rows in the HOT chain are dead, so skip the check.
- */
- return PointerGetDatum(NULL);
- }
/*
* Open the index, acquiring a RowExclusiveLock, just as if we were going
@@ -119,14 +83,6 @@ unique_key_recheck(PG_FUNCTION_ARGS)
RowExclusiveLock);
indexInfo = BuildIndexInfo(indexRel);
- /*
- * The heap tuple must be put into a slot for FormIndexDatum.
- */
- slot = MakeSingleTupleTableSlot(RelationGetDescr(trigdata->tg_relation),
- &TTSOpsHeapTuple);
-
- ExecStoreHeapTuple(new_row, slot, false);
-
/*
* Typically the index won't have expressions, but if it does we need an
* EState to evaluate them. We need it for exclusion constraints too,
@@ -137,49 +93,101 @@ unique_key_recheck(PG_FUNCTION_ARGS)
{
estate = CreateExecutorState();
econtext = GetPerTupleExprContext(estate);
- econtext->ecxt_scantuple = slot;
}
else
estate = NULL;
/*
- * Form the index values and isnull flags for the index entry that we need
- * to check.
- *
- * Note: if the index uses functions that are not as immutable as they are
- * supposed to be, this could produce an index tuple different from the
- * original. The index AM can catch such errors by verifying that it
- * finds a matching index entry with the tuple's TID. For exclusion
- * constraints we check this in check_exclusion_constraint().
+ * Iterate over transition table.
*/
- FormIndexDatum(indexInfo, slot, estate, values, isnull);
+ tuplestore_select_read_pointer(trigdata->tg_newtable,0);
+ if (!tuplestore_gettupleslot(trigdata->tg_newtable, true, true, slot))
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
+ errmsg("unexpected end of tuplestore")));
- /*
- * Now do the appropriate check.
- */
- if (indexInfo->ii_ExclusionOps == NULL)
+ ereport(ERROR, (errmsg_internal("start walking transition table")));
+
+ while (! TupIsNull(slot))
{
+ ereport(WARNING, (errmsg_internal("start loop transition table")));
+ new_row = slot->tts_ops->get_heap_tuple(slot);
+
/*
- * Note: this is not a real insert; it is a check that the index entry
- * that has already been inserted is unique. Passing t_self is
- * correct even if t_self is now dead, because that is the TID the
- * index will know about.
+ * If the new_row is now dead (ie, inserted and then deleted within our
+ * transaction), we can skip the check. However, we have to be careful,
+ * because this trigger gets queued only in response to index insertions;
+ * which means it does not get queued for HOT updates. The row we are
+ * called for might now be dead, but have a live HOT child, in which case
+ * we still need to make the check --- effectively, we're applying the
+ * check against the live child row, although we can use the values from
+ * this row since by definition all columns of interest to us are the
+ * same.
+ *
+ * This might look like just an optimization, because the index AM will
+ * make this identical test before throwing an error. But it's actually
+ * needed for correctness, because the index AM will also throw an error
+ * if it doesn't find the index entry for the row. If the row's dead then
+ * it's possible the index entry has also been marked dead, and even
+ * removed.
*/
- index_insert(indexRel, values, isnull, &(new_row->t_self),
- trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
- indexInfo);
- }
- else
- {
+ tmptid = new_row->t_self;
+ if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+ {
+ /*
+ * All rows in the HOT chain are dead, so skip the check.
+ */
+ continue;
+ }
+
+ if (estate != NULL)
+ econtext->ecxt_scantuple = slot;
+
+ /*
+ * Form the index values and isnull flags for the index entry that we need
+ * to check.
+ *
+ * Note: if the index uses functions that are not as immutable as they are
+ * supposed to be, this could produce an index tuple different from the
+ * original. The index AM can catch such errors by verifying that it
+ * finds a matching index entry with the tuple's TID. For exclusion
+ * constraints we check this in check_exclusion_constraint().
+ */
+ FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
/*
- * For exclusion constraints we just do the normal check, but now it's
- * okay to throw error. In the HOT-update case, we must use the live
- * HOT child's TID here, else check_exclusion_constraint will think
- * the child is a conflict.
+ * Now do the appropriate check.
*/
- check_exclusion_constraint(trigdata->tg_relation, indexRel, indexInfo,
- &tmptid, values, isnull,
- estate, false);
+ if (indexInfo->ii_ExclusionOps == NULL)
+ {
+ /*
+ * Note: this is not a real insert; it is a check that the index entry
+ * that has already been inserted is unique. Passing t_self is
+ * correct even if t_self is now dead, because that is the TID the
+ * index will know about.
+ */
+ index_insert(indexRel, values, isnull, &(new_row->t_self),
+ trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
+ indexInfo);
+ }
+ else
+ {
+ /*
+ * For exclusion constraints we just do the normal check, but now it's
+ * okay to throw error. In the HOT-update case, we must use the live
+ * HOT child's TID here, else check_exclusion_constraint will think
+ * the child is a conflict.
+ */
+ check_exclusion_constraint(trigdata->tg_relation, indexRel, indexInfo,
+ &tmptid, values, isnull,
+ estate, false);
+ }
+
+ ExecDropSingleTupleTableSlot(slot);
+
+ ereport(WARNING, (errmsg_internal("end loop transition table")));
+ if (!tuplestore_gettupleslot(trigdata->tg_newtable, true, true, slot))
+ break;
}
/*
@@ -189,7 +197,6 @@ unique_key_recheck(PG_FUNCTION_ARGS)
if (estate != NULL)
FreeExecutorState(estate);
- ExecDropSingleTupleTableSlot(slot);
index_close(indexRel, RowExclusiveLock);
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index fb0de60a45..e0feec747b 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -504,15 +504,6 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("transition tables cannot be specified for triggers with column lists")));
- /*
- * We disallow constraint triggers with transition tables, to
- * protect the assumption that such triggers can't be deferred.
- * See notes with AfterTriggers data structures, below.
- *
- * Currently this is enforced by the grammar, so just Assert here.
- */
- Assert(!stmt->isconstraint);
-
if (tt->isNew)
{
if (!(TRIGGER_FOR_INSERT(tgtype) ||
--
2.17.1
On Mon, 24 Dec 2018 at 23:57, Corey Huinker <corey.huinker@gmail.com> wrote:
So I took a first pass at this, and I got stuck.
[snip]
Any idea where I went wrong?
Take a look at this code in AfterTriggerSaveEvent():
/*
* If the trigger is a deferred unique constraint check trigger, only
* queue it if the unique constraint was potentially violated, which
* we know from index insertion time.
*/
if (trigger->tgfoid == F_UNIQUE_KEY_RECHECK)
{
if (!list_member_oid(recheckIndexes, trigger->tgconstrindid))
continue; /* Uniqueness definitely not violated */
}
If you trace it back, you'll see that for a statement-level trigger,
recheckIndexes will always be empty.
Regards,
Dean
On Tue, 25 Dec 2018 at 08:04, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
Take a look at this code in AfterTriggerSaveEvent():
Note that the intention behind that code is that in the (fairly
common) case where an insert or update operation is known to not lead
to any potential PK/UNIQUE index violations, the overhead for the
deferred recheck is minimal. For example, consider a bulk insert where
IDs come from a sequence known to not overlap with existing IDs. In
that case, each new ID is determined at index insertion time to not
possibly conflict with any existing ID, recheckIndexes remains empty
for each row, and no recheck triggers are ever queued.
One of the tricky things about replacing the current rechecks with
statement level triggers will be working out how to deal with such
cases (or cases with just a tiny fraction of keys to be rechecked)
without introducing a large overhead.
Regards,
Dean
On 25/12/2018 00:56, Corey Huinker wrote:
The regression diff (attached) seems to imply that the triggers simply
are not firing, though.
The reason for this was explained by Dean. If you take out the check
that he mentioned, then your trigger fires but crashes. In your changed
unique_key_recheck(), "slot" is not initialized before use (or ever).
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Fri, Jan 4, 2019 at 7:49 AM Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
Thanks. I'll be revisiting this shortly. Dean's information made me
think the potential for a gain is smaller than initially imagined.
On 08/01/2019 23:26, Corey Huinker wrote:
I think those are independent considerations. The "recheckIndexes"
logic just tracks what indexes have potential conflicts to check later.
Whether that checking later happens in a row or statement trigger should
not matter. What you need to do is adapt the "recheckIndexes" logic
from row triggers to statement triggers, e.g., expand
ExecASInsertTriggers() to look more like ExecARInsertTriggers().
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services