Large expressions in indexes can't be stored (non-TOASTable)
Hi,
I ran into an issue (previously discussed[1]; quoting Andres out of
context that not addressing it then would "[a]ll but guarantee that
we'll have this discussion again"[2]) when trying to build a very large
expression index that did not fit within the page boundary. The
real-world use case was related to a vector search technique where I
wanted to use binary quantization based on the relationship between a
constant vector (the average at a point-in-time across the entire data
set) and the target vector[3][4]. An example:
CREATE INDEX ON embeddings
USING hnsw((quantization_func(embedding, $VECTOR)) bit_hamming_ops);
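For concreteness, a function of that shape could be sketched in plain SQL. This is my own illustrative version — it uses float4 arrays rather than pgvector's vector type, and it is not a function pgvector actually ships:

```sql
-- Hypothetical sketch: quantize each dimension to one bit by comparing it
-- against the corresponding dimension of a fixed reference vector.
CREATE FUNCTION quantization_func(v float4[], ref float4[])
RETURNS bit varying
LANGUAGE sql IMMUTABLE AS $$
    SELECT string_agg(CASE WHEN x.val > r.val THEN '1' ELSE '0' END, ''
                      ORDER BY x.i)::varbit
    FROM unnest(v) WITH ORDINALITY AS x(val, i)
    JOIN unnest(ref) WITH ORDINALITY AS r(val, i) USING (i)
$$;
```

The point is only that the reference vector is baked into the index expression as a constant, which is what pushes the stored node tree past the page boundary.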
However, I ran into the issue in [1], where pg_index was identified as a
catalog that is missing a TOAST table, even though `indexprs` is marked
for extended storage. Here is a very simple reproducer in psql:
----
CREATE TABLE def (id int);
SELECT array_agg(n) b FROM generate_series(1,10_000) n \gset
CREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool
AS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;
CREATE INDEX ON def (vec_quantizer(id, :'b'));
ERROR: row is too big: size 29448, maximum size 8160
----
This can come up with vector searches as vectors can be quite large -
the case I was testing involved a 1536-dim floating point vector (~6KB),
and the node parse tree pushed past the page boundary by about 2KB.
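For a rough sense of scale (using a plain float4 array as a stand-in for pgvector's vector type):

```sql
-- A 1536-dim single-precision vector is 1536 * 4 bytes plus the array
-- header -- a bit over 6KB, already close to the ~8KB heap page limit
-- before any node-tree overhead is added.
SELECT pg_column_size(array_fill(0.0::float4, ARRAY[1536]));
```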
One could argue that pgvector or an extension can build in capabilities
to handle quantization internally without requiring the user to provide
a source vector (pgvectorscale does this). However, this also limits
flexibility to users, as they may want to bring their own quantization
functions to vector searches, e.g., as different quantization techniques
emerge, or if a particular technique is more suitable for a person's
dataset.
Thanks,
Jonathan
[1]: /messages/by-id/84ddff04-f122-784b-b6c5-3536804495f8@joeconway.com
[2]: /messages/by-id/20180720000356.5zkhvfpsqswngyob@alap3.anarazel.de
[3]: https://github.com/pgvector/pgvector
[4]: https://jkatz05.com/post/postgres/pgvector-scalar-binary-quantization/
On Tue, Sep 03, 2024 at 12:35:42PM -0400, Jonathan S. Katz wrote:
However, I ran into the issue in [1], where pg_index was identified as a
catalog that is missing a TOAST table, even though `indexprs` is marked for
extended storage. Here is a very simple reproducer in psql:
Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST
tables: pg_attribute, pg_class, pg_index, pg_largeobject, and
pg_largeobject_metadata. I've attached a short patch to add one for
pg_index, which resolves the issue cited here. This passes "check-world"
and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I
haven't spent too much time investigating possible circularity issues, but
I'll note that none of the system indexes presently use the indexprs and
indpred columns.
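A query along these lines can confirm that list on a server without the patch, assuming "missing a TOAST table" means a catalog that has at least one varlena column but reltoastrelid = 0 (a rough sketch, in the spirit of the existing misc_sanity checks):

```sql
-- System catalogs with at least one varlena (potentially TOASTable)
-- column but no TOAST table.
SELECT c.relname
FROM pg_class c
WHERE c.relnamespace = 'pg_catalog'::regnamespace
  AND c.relkind = 'r'
  AND c.reltoastrelid = 0
  AND EXISTS (SELECT 1 FROM pg_attribute a
              WHERE a.attrelid = c.oid AND a.attnum > 0
                AND NOT a.attisdropped AND a.attlen = -1)
ORDER BY c.relname;
```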
If we do want to proceed with adding a TOAST table to pg_index, IMHO it
would be better to do it sooner than later so that it has plenty of time to
bake.
--
nathan
Attachments:
v1-0001-add-toast-table-to-pg_index.patch (text/plain)
Nathan Bossart <nathandbossart@gmail.com> writes:
Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST
tables: pg_attribute, pg_class, pg_index, pg_largeobject, and
pg_largeobject_metadata. I've attached a short patch to add one for
pg_index, which resolves the issue cited here. This passes "check-world"
and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I
haven't spent too much time investigating possible circularity issues, but
I'll note that none of the system indexes presently use the indexprs and
indpred columns.
Yeah, the possibility of circularity seems like the main hazard, but
I agree it's unlikely that the entries for system indexes could ever
need out-of-line storage. There are many other things that would have
to be improved before a system index could use indexprs or indpred.
regards, tom lane
On 9/4/24 3:08 PM, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST
tables: pg_attribute, pg_class, pg_index, pg_largeobject, and
pg_largeobject_metadata. I've attached a short patch to add one for
pg_index, which resolves the issue cited here. This passes "check-world"
and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I
haven't spent too much time investigating possible circularity issues, but
I'll note that none of the system indexes presently use the indexprs and
indpred columns.
Yeah, the possibility of circularity seems like the main hazard, but
I agree it's unlikely that the entries for system indexes could ever
need out-of-line storage. There are many other things that would have
to be improved before a system index could use indexprs or indpred.
Agreed on the unlikeliness of that, certainly in the short-to-mid term.
The impetus driving this is dealing with a data type that can be quite
large, and it's unlikely system catalogs will be dealing with anything
of that nature, or requiring very long expressions that couldn't be
encapsulated in a different way.
Just to be fair, in the case I presented there's an argument that what
I'm trying to do is fairly inefficient for an expression, given I'm
passing around an additional several KB payload into the query. However,
we'd likely have to do that anyway for this problem space, and the
overall performance hit is negligible compared to the search relevancy
boost.
I'm working on a much more robust test, but using a known 10MM 768-dim
dataset and two C-based quantization functions (one using the
expression), I got a 3% relevancy boost with a 2% reduction in latency
and throughput. On some other known datasets, I was able to improve
relevancy 40% or more, though given they were initially returning with
0% relevancy in some cases, it's not fair to compare performance numbers.
There are other ways to solve the problem as well, but allowing for the
larger expression gives users more choices in how they can approach it.
Jonathan
On Wed, Sep 04, 2024 at 03:20:33PM -0400, Jonathan S. Katz wrote:
On 9/4/24 3:08 PM, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST
tables: pg_attribute, pg_class, pg_index, pg_largeobject, and
pg_largeobject_metadata. I've attached a short patch to add one for
pg_index, which resolves the issue cited here. This passes "check-world"
and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I
haven't spent too much time investigating possible circularity issues, but
I'll note that none of the system indexes presently use the indexprs and
indpred columns.
Yeah, the possibility of circularity seems like the main hazard, but
I agree it's unlikely that the entries for system indexes could ever
need out-of-line storage. There are many other things that would have
to be improved before a system index could use indexprs or indpred.
Agreed on the unlikeliness of that, certainly in the short-to-mid term. The
impetus driving this is dealing with a data type that can be quite large,
and it's unlikely system catalogs will be dealing with anything of that
nature, or requiring very long expressions that couldn't be encapsulated in
a different way.
Any objections to committing this? I've still been unable to identify any
breakage, and adding it now would give us ~1 year of testing before it'd be
available in a GA release. Perhaps we should at least add something to
misc_sanity.sql that verifies no system indexes are using pg_index's TOAST
table.
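Such a check could be as simple as the following sketch (16384 is FirstNormalObjectId, so OIDs below it are initdb-created system indexes):

```sql
-- Possible misc_sanity.sql-style check: no system index should need
-- pg_index's varlena columns (and hence its TOAST table).
SELECT indexrelid::regclass
FROM pg_index
WHERE indexrelid < 16384
  AND (indexprs IS NOT NULL OR indpred IS NOT NULL);
-- expected: zero rows
```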
--
nathan
Nathan Bossart <nathandbossart@gmail.com> writes:
On Wed, Sep 04, 2024 at 03:20:33PM -0400, Jonathan S. Katz wrote:
On 9/4/24 3:08 PM, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST
tables: pg_attribute, pg_class, pg_index, pg_largeobject, and
pg_largeobject_metadata. I've attached a short patch to add one for
pg_index, which resolves the issue cited here.
Any objections to committing this?
Nope.
regards, tom lane
On Wed, Sep 18, 2024 at 10:54:56AM -0400, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Any objections to committing this?
Nope.
Committed. I waffled on whether to add a test for system indexes that used
pg_index's varlena columns, but I ended up leaving it out. I've attached
it here in case anyone thinks we should add it.
--
nathan
Attachments:
test.patch (text/plain)
Hello Nathan,
18.09.2024 22:52, Nathan Bossart wrote:
Committed. I waffled on whether to add a test for system indexes that used
pg_index's varlena columns, but I ended up leaving it out. I've attached
it here in case anyone thinks we should add it.
I've discovered that Jonathan's initial script:
CREATE TABLE def (id int);
SELECT array_agg(n) b FROM generate_series(1,10_000) n \gset
CREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool
AS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;
CREATE INDEX ON def (vec_quantizer(id, :'b'));
completed with:
DROP INDEX CONCURRENTLY def_vec_quantizer_idx;
triggers an assertion failure:
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "toast_internals.c", Line: 668, PID: 3723372
with the following stack trace:
ExceptionalCondition at assert.c:52:13
init_toast_snapshot at toast_internals.c:670:2
toast_delete_datum at toast_internals.c:429:60
toast_tuple_cleanup at toast_helper.c:303:30
heap_toast_insert_or_update at heaptoast.c:335:9
heap_update at heapam.c:3752:14
simple_heap_update at heapam.c:4210:11
CatalogTupleUpdate at indexing.c:324:2
index_set_state_flags at index.c:3522:2
index_concurrently_set_dead at index.c:1848:2
index_drop at index.c:2286:3
doDeletion at dependency.c:1362:5
deleteOneObject at dependency.c:1279:12
deleteObjectsInList at dependency.c:229:3
performMultipleDeletions at dependency.c:393:2
RemoveRelations at tablecmds.c:1594:2
ExecDropStmt at utility.c:2008:4
...
This class of assert failures is not new; see, e.g., bugs #13809 and #18127,
but this concrete instance (with index_set_state_flags()) emerged with
b52c4fc3c and may be worth fixing while we're at it...
Best regards,
Alexander
On Thu, Sep 19, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:
I've discovered that Jonathan's initial script:
CREATE TABLE def (id int);
SELECT array_agg(n) b FROM generate_series(1,10_000) n \gset
CREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool
AS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;
CREATE INDEX ON def (vec_quantizer(id, :'b'));
completed with:
DROP INDEX CONCURRENTLY def_vec_quantizer_idx;
triggers an assertion failure:
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "toast_internals.c", Line: 668, PID: 3723372
Ha, that was fast. The attached patch seems to fix the assertion failures.
It's probably worth checking if any of the adjacent code paths are
affected, too.
--
nathan
Attachments:
fix_assert.patch (text/plain)
On Thu, Sep 19, 2024 at 01:36:36PM -0500, Nathan Bossart wrote:
+	PushActiveSnapshot(GetTransactionSnapshot());

 	/*
 	 * Now we must wait until no running transaction could be using the
@@ -2283,8 +2284,10 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)
 	 * Again, commit the transaction to make the pg_index update visible
 	 * to other sessions.
 	 */
+	PopActiveSnapshot();
 	CommitTransactionCommand();
 	StartTransactionCommand();
+	PushActiveSnapshot(GetTransactionSnapshot());

 	/*
 	 * Wait till every transaction that saw the old index state has
@@ -2387,6 +2390,8 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)
 	{
 		UnlockRelationIdForSession(&heaprelid, ShareUpdateExclusiveLock);
 		UnlockRelationIdForSession(&indexrelid, ShareUpdateExclusiveLock);
+
+		PopActiveSnapshot();
 	}
 }
Perhaps the reason why these snapshots are pushed should be documented
with a comment?
--
Michael
Hello Nathan,
19.09.2024 21:36, Nathan Bossart wrote:
On Thu, Sep 19, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:
completed with:
DROP INDEX CONCURRENTLY def_vec_quantizer_idx;
triggers an assertion failure:
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "toast_internals.c", Line: 668, PID: 3723372
Ha, that was fast. The attached patch seems to fix the assertion failures.
It's probably worth checking if any of the adjacent code paths are
affected, too.
Thank you for your attention to that issue!
I've found another two paths to reach that condition:
CREATE INDEX CONCURRENTLY ON def (vec_quantizer(id, :'b'));
ERROR: cannot fetch toast data without an active snapshot
REINDEX INDEX CONCURRENTLY def_vec_quantizer_idx;
(or REINDEX TABLE CONCURRENTLY def;)
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "toast_internals.c", Line: 668, PID: 2934502
ExceptionalCondition at assert.c:52:13
init_toast_snapshot at toast_internals.c:670:2
toast_delete_datum at toast_internals.c:429:60
toast_tuple_cleanup at toast_helper.c:303:30
heap_toast_insert_or_update at heaptoast.c:335:9
heap_update at heapam.c:3752:14
simple_heap_update at heapam.c:4210:11
CatalogTupleUpdate at indexing.c:324:2
index_concurrently_swap at index.c:1649:2
ReindexRelationConcurrently at indexcmds.c:4270:3
ReindexIndex at indexcmds.c:2962:1
ExecReindex at indexcmds.c:2884:4
ProcessUtilitySlow at utility.c:1570:22
...
Perhaps it would make sense to check all CatalogTupleUpdate(pg_index, ...)
calls (I've found 10 such instances, but haven't checked them yet).
Best regards,
Alexander
On Fri, Sep 20, 2024 at 08:16:24AM +0900, Michael Paquier wrote:
Perhaps the reason why these snapshots are pushed should be documented
with a comment?
Definitely. I'll add those once we are more confident that we've
identified all the bugs.
--
nathan
On Fri, Sep 20, 2024 at 07:00:00AM +0300, Alexander Lakhin wrote:
I've found another two paths to reach that condition:
CREATE INDEX CONCURRENTLY ON def (vec_quantizer(id, :'b'));
ERROR: cannot fetch toast data without an active snapshot
REINDEX INDEX CONCURRENTLY def_vec_quantizer_idx;
(or REINDEX TABLE CONCURRENTLY def;)
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "toast_internals.c", Line: 668, PID: 2934502
Here's a (probably naive) attempt at fixing these, too. I'll give each
path a closer look once it feels like we've identified all the bugs.
Perhaps it would make sense to check all CatalogTupleUpdate(pg_index, ...)
calls (I've found 10 such instances, but haven't checked them yet).
Indeed.
--
nathan
Attachments:
v2-0001-fix-failed-assertions-due-to-pg_index-s-TOAST-tab.patch (text/plain)
Hello Nathan,
20.09.2024 19:51, Nathan Bossart wrote:
Here's a (probably naive) attempt at fixing these, too. I'll give each
path a closer look once it feels like we've identified all the bugs.
Thank you for the updated patch!
I tested it with two code modifications (1st is to make each created
expression index TOASTed (by prepending 1M of spaces to the indexprs
value) and 2nd to make each created index an expression index (by
modifying index_elem_options in gram.y) — both modifications are kludgy so
I don't dare to publish them) and found no other snapshot-related issues
during `make check-world`.
Best regards,
Alexander
On Mon, Sep 23, 2024 at 04:00:00PM +0300, Alexander Lakhin wrote:
I tested it with two code modifications (1st is to make each created
expression index TOASTed (by prepending 1M of spaces to the indexprs
value) and 2nd to make each created index an expression index (by
modifying index_elem_options in gram.y) - both modifications are kludgy so
I don't dare to publish them) and found no other snapshot-related issues
during `make check-world`.
Thanks. Here is an updated patch with tests and comments. I've also moved
the calls to PushActiveSnapshot()/PopActiveSnapshot() to surround only the
section of code where the snapshot is needed. Besides being more similar
in style to other fixes I found, I think this is safer because much of this
code is cautious to avoid deadlocks. For example, DefineIndex() has the
following comment:
/*
* The snapshot subsystem could still contain registered snapshots that
* are holding back our process's advertised xmin; in particular, if
* default_transaction_isolation = serializable, there is a transaction
* snapshot that is still active. The CatalogSnapshot is likewise a
* hazard. To ensure no deadlocks, we must commit and start yet another
* transaction, and do our wait before any snapshot has been taken in it.
*/
I carefully inspected all the code paths this patch touches, and I think
I've got all the details right, but I would be grateful if someone else
could take a look.
--
nathan
Attachments:
v3-0001-Ensure-we-have-a-snapshot-when-updating-pg_index-.patch (text/plain)
On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:
I carefully inspected all the code paths this patch touches, and I think
I've got all the details right, but I would be grateful if someone else
could take a look.
No objections from here with putting the snapshots pops and pushes
outside the inner routines of reindex/drop concurrently, meaning that
ReindexRelationConcurrently(), DefineIndex() and index_drop() are fine
to do these operations.
Looking at the patch, we could just add an assertion based on
ActiveSnapshotSet() in index_set_state_flags().
Actually, thinking more... Could it be better to have some more
sanity checks in the stack outside the toast code for catalogs with
toast tables? For example, I could imagine adding a check in
CatalogTupleUpdate() so as all catalog accessed that have a toast
relation require an active snapshot. That would make checks more
aggressive, because we would not need any toast data in a catalog to
make sure that there is a snapshot set. This strikes me as something
we could do better to improve the detection of failures like the one
reported by Alexander when updating catalog tuples as this can be
triggered each time we do a CatalogTupleUpdate() when dirtying a
catalog tuple. The idea is then to have something before the
HaveRegisteredOrActiveSnapshot() in the toast internals, for catalogs,
and we would not require toast data to detect problems.
--
Michael
On Tue, Sep 24, 2024 at 01:21:45PM +0900, Michael Paquier wrote:
On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:
I carefully inspected all the code paths this patch touches, and I think
I've got all the details right, but I would be grateful if someone else
could take a look.No objections from here with putting the snapshots pops and pushes
outside the inner routines of reindex/drop concurrently, meaning that
ReindexRelationConcurrently(), DefineIndex() and index_drop() are fine
to do these operations.
Great. I plan to push 0001 shortly.
Actually, thinking more... Could it be better to have some more
sanity checks in the stack outside the toast code for catalogs with
toast tables? For example, I could imagine adding a check in
CatalogTupleUpdate() so as all catalog accessed that have a toast
relation require an active snapshot. That would make checks more
aggressive, because we would not need any toast data in a catalog to
make sure that there is a snapshot set. This strikes me as something
we could do better to improve the detection of failures like the one
reported by Alexander when updating catalog tuples as this can be
triggered each time we do a CatalogTupleUpdate() when dirtying a
catalog tuple. The idea is then to have something before the
HaveRegisteredOrActiveSnapshot() in the toast internals, for catalogs,
and we would not require toast data to detect problems.
I gave this a try and, unsurprisingly, found a bunch of other problems. I
hastily hacked together the attached patch that should fix all of them, but
I'd still like to comb through the code a bit more. The three catalogs
with problems are pg_replication_origin, pg_subscription, and
pg_constraint. pg_constraint has had a TOAST table for a while, and I don't
think it's unheard of for conbin to be large, so this one is probably worth
fixing. pg_subscription hasn't had its TOAST table for quite as long, but
presumably subpublications could be large enough to require out-of-line
storage. pg_replication_origin, however, only has one varlena column:
roname. Three out of the seven problem areas involve
pg_replication_origin, but AFAICT that'd only ever be a problem if the name
of your replication origin requires out-of-line storage. So... maybe we
should just remove pg_replication_origin's TOAST table instead...
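For reference, a quick catalog query (a sketch) confirms that roname is pg_replication_origin's only varlena column:

```sql
-- List pg_replication_origin's varlena (potentially TOASTable) columns.
SELECT attname, atttypid::regtype
FROM pg_attribute
WHERE attrelid = 'pg_replication_origin'::regclass
  AND attnum > 0 AND NOT attisdropped
  AND attlen = -1;
```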
--
nathan
Attachments:
toast_snapshot.patch (text/plain)
On Tue, Sep 24, 2024 at 02:26:08PM -0500, Nathan Bossart wrote:
I gave this a try and, unsurprisingly, found a bunch of other problems. I
hastily hacked together the attached patch that should fix all of them, but
I'd still like to comb through the code a bit more. The three catalogs
with problems are pg_replication_origin, pg_subscription, and
pg_constraint.
Regression tests don't blow up after this patch and the reindex parts.
pg_constraint has had a TOAST table for a while, and I don't
think it's unheard of for conbin to be large, so this one is probably worth
fixing.
Ahh. That's the tablecmds.c part for the partition detach.
pg_subscription hasn't had its TOAST table for quite as long, but
presumably subpublications could be large enough to require out-of-line
storage. pg_replication_origin, however, only has one varlena column:
roname. Three out of the seven problem areas involve
pg_replication_origin, but AFAICT that'd only ever be a problem if the name
of your replication origin requires out-of-line storage. So... maybe we
should just remove pg_replication_origin's TOAST table instead...
I'd rather keep it, FWIW. Contrary to pg_authid it does not imply
problems at the same scale because we would have access to the toast
relation in all the code paths with logical workers or table syncs.
The other one was at early authentication stages.
+ /*
+ * If we might need TOAST access, make sure the caller has set up a valid
+ * snapshot.
+ */
+ Assert(HaveRegisteredOrActiveSnapshot() ||
+ !OidIsValid(heapRel->rd_rel->reltoastrelid) ||
+ !IsNormalProcessingMode());
+
I didn't catch that we could just reuse the opened Relation in these
paths and check for reltoastrelid. Nice.
It sounds to me that we should be much more proactive in detecting
these failures and add something like that on HEAD. That's cheap
enough. As the checks are the same for all these code paths, perhaps
just hide them behind a local macro to reduce the duplication?
Not the responsibility of this patch, but the business with
clear_subscription_skip_lsn() with its conditional transaction start
feels messy. This comes down to the way handles work for 2PC and the
streaming, which may or may not be in a transaction depending on the
state of the upper caller. Your patch looks right in the way
snapshots are set, as far as I've checked.
--
Michael
On Tue, Sep 24, 2024 at 02:26:08PM -0500, Nathan Bossart wrote:
On Tue, Sep 24, 2024 at 01:21:45PM +0900, Michael Paquier wrote:
On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:
I carefully inspected all the code paths this patch touches, and I think
I've got all the details right, but I would be grateful if someone else
could take a look.No objections from here with putting the snapshots pops and pushes
outside the inner routines of reindex/drop concurrently, meaning that
ReindexRelationConcurrently(), DefineIndex() and index_drop() are fine
to do these operations.Great. I plan to push 0001 shortly.
Committed this one.
--
nathan
On Wed, Sep 25, 2024 at 01:05:26PM +0900, Michael Paquier wrote:
On Tue, Sep 24, 2024 at 02:26:08PM -0500, Nathan Bossart wrote:
So... maybe we
should just remove pg_replication_origin's TOAST table instead...I'd rather keep it, FWIW. Contrary to pg_authid it does not imply
problems at the same scale because we would have access to the toast
relation in all the code paths with logical workers or table syncs.
The other one was at early authentication stages.
Okay.
It sounds to me that we should be much more proactive in detecting
these failures and add something like that on HEAD. That's cheap
enough. As the checks are the same for all these code paths, perhaps
just hide them behind a local macro to reduce the duplication?
In v2, I moved the assertions to a new function called by the heapam.c
routines. I was hoping to move them to the tableam.h routines, but several
callers (in particular, the catalog/indexing.c ones that are causing
problems) call the heap ones directly. I've also included a 0001 patch
that introduces a RelationGetToastRelid() macro because I got tired of
typing "rel->rd_rel->reltoastrelid".
0002 could probably use some more commentary, but otherwise I think it is
in decent shape. You (Michael) seem to be implying that I should
back-patch the actual fixes and only apply the new assertions to v18. Am I
understanding you correctly?
--
nathan