RE: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

Started by Bykov Ivan, about 1 year ago, 49 messages
#1 Bykov Ivan
i.bykov@modernsys.ru

Hello!

Last time, I forgot to attach the patches.
The problem still persists in the 17.3 release.

Solution One
============
The simplest way to fix the problem is to place the scalar field used in the query ID calculation
between similar subnodes.
A patch for this solution is attached below (0001-Query-ID-Calculation-Fix-Variant-A.patch).

Solution Two
============
Alternatively, we can change the hash sum when we encounter an empty node.
This approach may impact performance but will protect us from such errors in the future.
A patch for this solution is attached below (0001-Query-ID-Calculation-Fix-Variant-B.patch).

======
SELECT version();
version
-------------------------------------------------------------------------------------------------
PostgreSQL 17.3 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

SET compute_query_id = on;

/* LIMIT / OFFSET */
EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class LIMIT 1;

QUERY PLAN
----------------------------------------------------------------------------
Limit (cost=0.00..0.04 rows=1 width=4)
Output: oid
-> Seq Scan on pg_catalog.pg_class (cost=0.00..18.15 rows=415 width=4)
Output: oid
Query Identifier: 5185884322440896420

EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class OFFSET 1;
QUERY PLAN
----------------------------------------------------------------------------
Limit (cost=0.04..18.15 rows=414 width=4)
Output: oid
-> Seq Scan on pg_catalog.pg_class (cost=0.00..18.15 rows=415 width=4)
Output: oid
Query Identifier: 5185884322440896420

/* DISTINCT / ORDER BY */
EXPLAIN (VERBOSE) SELECT DISTINCT "oid" FROM pg_class;

QUERY PLAN
------------------------------------------------------------------------------------------------------------
Unique (cost=0.27..23.54 rows=415 width=4)
Output: oid
-> Index Only Scan using pg_class_oid_index on pg_catalog.pg_class (cost=0.27..22.50 rows=415 width=4)
Output: oid
Query Identifier: 751948508603549510

EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class ORDER BY "oid";

QUERY PLAN
------------------------------------------------------------------------------------------------------
Index Only Scan using pg_class_oid_index on pg_catalog.pg_class (cost=0.27..22.50 rows=415 width=4)
Output: oid
Query Identifier: 751948508603549510

Attachments:

0001-Query-ID-Calculation-Fix-Variant-A.patch (application/octet-stream, +3/-6)
0001-Query-ID-Calculation-Fix-Variant-B.patch (application/octet-stream, +5/-3)
#2 Bykov Ivan
i.bykov@modernsys.ru
In reply to: Bykov Ivan (#1)

Here is the bug description from
/messages/by-id/ca447b72d15745b9a877fad7e258407a@localhost.localdomain

Problem
=======
In some cases, we can get the same ID for non-identical query trees.
For two structurally similar query subnodes, we may have query
trees that look like this:
----
QueryA->subNodeOne = Value X;
QueryA->subNodeTwo = NULL;
QueryB->subNodeOne = NULL;
QueryB->subNodeTwo = Value X;
----
When the query jumble walker calculates the query ID, it traverses the
Query members from top to bottom and generates the same IDs for these
two queries because it does not change the hash value when visiting
an empty node (= NULL).
Here is an example (the collection of jstate->jumble is omitted):
----
/* QueryA_ID = AAAA */
QueryA->subNodeOne = &(Value X);
/* QueryA_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryA_ID) = BBBB */
QueryA->subNodeTwo = NULL;
/* QueryA_ID = BBBB; */
/* QueryB_ID = AAAA */
QueryB->subNodeOne = NULL;
/* QueryB_ID = AAAA; */
QueryB->subNodeTwo = &(Value X);
/* QueryB_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryB_ID) = BBBB */
----
There are two pairs of subnodes that can trigger this error:
- distinctClause and sortClause (both contain a list of SortGroupClauses);
- limitOffset and limitCount (both contain an int8 expression).
Here is an example of such errors (all queries run on REL_17_0):
----
SET compute_query_id = on;
/* DISTINCT / ORDER BY *************************************/
EXPLAIN (VERBOSE) SELECT DISTINCT "oid" FROM pg_class;
/* Query Identifier: 751948508603549510 */
EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class ORDER BY "oid";
/* Query Identifier: 751948508603549510 */
/* LIMIT / OFFSET ******************************************/
EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class LIMIT 1;
/* Query Identifier: 5185884322440896420 */
EXPLAIN (VERBOSE) SELECT "oid" FROM pg_class OFFSET 1;
/* Query Identifier: 5185884322440896420 */
----
Solution One
============
The simplest way to fix the problem is to place the scalar field used in
the query ID calculation between similar subnodes. A patch for this
solution is attached below (0001-Query-ID-Calculation-Fix-Variant-A.patch).
Here is an example:
----
/* QueryA_ID = AAAA */
QueryA->subNodeOne = &(Value X);
/* QueryA_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryA_ID) = BBBB */
QueryA->scalar = Value Y;
/* QueryA_ID = hash_any_extended(&(Value Y), sizeof(Value Y), QueryA_ID) = CCCC */
QueryA->subNodeTwo = NULL;
/* QueryA_ID = CCCC; */
/* QueryB_ID = AAAA */
QueryB->subNodeOne = NULL;
/* QueryB_ID = AAAA; */
QueryB->scalar = Value Y;
/* QueryB_ID = hash_any_extended(&(Value Y), sizeof(Value Y), QueryB_ID) = DDDD */
QueryB->subNodeTwo = &(Value X);
/* QueryB_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryB_ID) = EEEE */
----
Solution Two
============
Alternatively, we can change the hash sum when we encounter an empty node.
This approach may impact performance but will protect us from such errors
in the future. A patch for this solution is attached below
(0001-Query-ID-Calculation-Fix-Variant-B.patch).
Here is an example:
----
/* QueryA_ID = AAAA */
QueryA->subNodeOne = &(Value X);
/* QueryA_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryA_ID) = BBBB */
QueryA->subNodeTwo = NULL;
/* QueryA_ID = hash_any_extended(&('\0'), sizeof(char), QueryA_ID) = CCCC */
/* QueryB_ID = AAAA */
QueryB->subNodeOne = NULL;
/* QueryB_ID = hash_any_extended(&('\0'), sizeof(char), QueryB_ID) = DDDD */
QueryB->subNodeTwo = &(Value X);
/* QueryB_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryB_ID) = EEEE */
----


#3 Bykov Ivan
i.bykov@modernsys.ru
In reply to: Bykov Ivan (#2)

Here is
0001-Query-ID-Calculation-Fix-Variant-A.patch
and
0001-Query-ID-Calculation-Fix-Variant-B.patch

Attachments:

0001-Query-ID-Calculation-Fix-Variant-B.patch (application/octet-stream, +5/-3)
0001-Query-ID-Calculation-Fix-Variant-A.patch (application/octet-stream, +3/-6)
#4 Sami Imseih
samimseih@gmail.com
In reply to: Bykov Ivan (#1)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

Hi,

It seems like there are multiple threads on this topic. This is the
original [0], but I suggest continuing the discussion in this thread
since it includes the examples and patches.

Regarding the issue itself, query jumbling behavior is often subjective,
making it difficult to classify as a bug. I'm not entirely sure this
qualifies as a bug either, but I do believe it should be addressed.

As I understand it, two nodes defined one after the other, both of which
could end up holding the same expression when traversed, cannot be
differentiated by query jumbling when one of them is NULL. In this case,
query jumbling can't tell limitOffset and limitCount apart when they both
resolve to a similar FuncExpr.

By rearranging them as done in variant A, the position where the expression
is appended in the jumble changes, leading to a different hash. I just
don't like this solution because it requires one to carefully construct
a struct, but it may be the best out of the other options.

Variant B is not acceptable IMO as it adds a whole bunch of null-terminators
unnecessarily. For example, in a simple "select 1", (expr == NULL) is true
19 times, so that is an extra 19 bytes.

I think a third option is to add a new pg_node_attr called "query_jumble_null"
and be very selective about which fields should append a null-terminator to
the jumble when the expression is null.

The code in queryjumblefuncs.c could look like the below; JUMBLE_NULL would
be responsible for appending the null-terminator.

"""
static void
_jumbleQuery(JumbleState *jstate, Node *node)
{
    Query      *expr = (Query *) node;

    ...

    if (!expr->limitOffset)
    {
        JUMBLE_NULL();
    }
    else
    {
        JUMBLE_NODE(limitOffset);
    }
    if (!expr->limitCount)
    {
        JUMBLE_NULL();
    }
    else
    {
        JUMBLE_NODE(limitCount);
    }
}
"""

What do you think? Maybe someone can suggest a better solution?

Regards,

Sami Imseih
Amazon Web Services (AWS)

[0]: /messages/by-id/ca447b72d15745b9a877fad7e258407a@localhost.localdomain

#5 Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#4)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Thu, Mar 06, 2025 at 06:44:27PM -0600, Sami Imseih wrote:

Regarding the issue itself, query jumbling behavior is often subjective,
making it difficult to classify as a bug. I'm not entirely sure this
qualifies as a bug either, but I do believe it should be addressed.

I would call that a bug and something that we should fix, but not
something that we can really backpatch as this has unfortunately an
impact on monitoring tools. Stability takes priority in this area in
already released branches.

By rearranging them as done in variant A, the position where the expression
is appended in the jumble changes, leading to a different hash. I just
don't like
this solution because it requires one to carefully construct a struct,
but it may be the best out of the other options.

I prefer something like variant A. It would not be the first time we
are manipulating the structure of the parse nodes for the sake of
making the query jumbling more transparent. An advantage of depending
on the structure definition is that we can document the expectation in
the header, and not hide it in the middle of the jumble code.

Variant B is not acceptable IMO as it adds a whole bunch of null-terminators
unnecessarily. For example, in a simple "select 1", (expr == NULL) is true
19 times, so that is an extra 19 bytes.

Variant B is not acceptable here.

I think a third option is to add a new pg_node_attr called "query_jumble_null"
and be very selective on which fields should append a null-terminator to the
jumble when the expression is null

The queryjumblefuncs.c could look like the below. JUMBLE_NULL will
be responsible for appending the null terminator.

Not much a fan of that. For the sake of this thread, I'd still go
with the simplicity of A. And please, let's add a couple of queries
in pgss to track that these lead to two different entries.

Another option that I was thinking about and could be slightly cleaner
is the addition of an extra field in Query that is set when we go
through a offset_clause in the parser. It could just be a boolean,
false by default. We have been using this practice in DeallocateStmt,
for example. And there are only a few places where limitOffset is set.
However, I'd rather discard this idea as transformSelectStmt() also has
the idea of transforming a LIMIT clause into an OFFSET clause, changing
the Query representation. And note that
we calculate the query jumbling after applying the query
transformation. For these reasons, variant A where we put the
LimitOption between the two int8 expression nodes feels like the
"okay" approach here. But we must document this expectation in the
structure, and check for more grammar variants of LIMIT and OFFSET
clauses in pgss.

Another concept would be to add into the jumble the offset of a Node
in the upper structure we are jumbling, but this requires keeping a
track of all the nested things, which would make the query jumbling
code much more complicated and more expensive, for sure.

I'd also recommend to be careful and double-check all these
transformation assumptions for LIMIT and OFFSET. We should shape
things so as an OFFSET never maps to a LIMIT variant when we
differentiate all these flavors at query string level.
--
Michael

#6 David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#5)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Mon, 10 Mar 2025 at 12:39, Michael Paquier <michael@paquier.xyz> wrote:

On Thu, Mar 06, 2025 at 06:44:27PM -0600, Sami Imseih wrote:

Regarding the issue itself, query jumbling behavior is often subjective,
making it difficult to classify as a bug. I'm not entirely sure this
qualifies as a bug either, but I do believe it should be addressed.

I would call that a bug and something that we should fix, but not
something that we can really backpatch as this has unfortunately an
impact on monitoring tools. Stability takes priority in this area in
already released branches.

I know you reviewed this, Michael, so you're aware; 2d3389c28 did
recently document that we'd only break this in minor versions as a
last resort. So, I agree that it sounds like a master-only fix is in
order.

For the record, the docs [1] read:

"Generally, it can be assumed that queryid values are stable between
minor version releases of PostgreSQL, providing that instances are
running on the same machine architecture and the catalog metadata
details match. Compatibility will only be broken between minor
versions as a last resort."

David

[1]: https://www.postgresql.org/docs/current/pgstatstatements.html

#7 Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#6)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Mon, Mar 10, 2025 at 02:14:01PM +1300, David Rowley wrote:

I know you reviewed this, Michael, so you're aware; 2d3389c28 did
recently document that we'd only break this in minor versions as a
last resort. So, I agree that it sounds like a master-only fix is in
order.

Yes, this thread's problem does not pass the compatibility bar.
Thanks for the reminder about the docs.
--
Michael

#8 Bykov Ivan
i.bykov@modernsys.ru
In reply to: Michael Paquier (#7)

Hello!

Variant B is not acceptable IMO as it adds a whole bunch of
null-terminators unnecessarily. For example, in a simple "select 1",
(expr == NULL) is true 19 times, so that is an extra 19 bytes.

Variant B is not acceptable here.

Could we improve Variant B?

I was thinking about adding a count of NULL pointers encountered by the query
jumble walker since the last scalar or non-null field visited.
This way, we can traverse the query as follows:
====
QueryA->subNodeOne = &(Value X);
/* JumbleState->Count = 0;
QueryA_ID = hash_any_extended(&(Value X), sizeof(Value X), QueryA_ID) */
QueryA->subNodeTwo = NULL;
/* JumbleState->Count = 1; */
QueryA->subNodeThree = NULL;
/* JumbleState->Count = 2; */
QueryA->subNodeFour = NULL;
/* JumbleState->Count = 3; */
QueryA->scalar = Value Z;
/* QueryA_ID = hash_any_extended(&(JumbleState->Count),
sizeof(JumbleState->Count), QueryA_ID)
JumbleState->Count = 0;
QueryA_ID = hash_any_extended(&(Value Z), sizeof(Value Z), QueryA_ID) */
====

Variant A improvement
---------------------

I have attached the updated Variant A patch with the following changes:
- Implemented a pg_stat_statements regression test (see similar test results
on REL_17_3 in Query-ID-collision-at_pg_stat_statements-1ea6e890b22.txt).
- Added extra description about the Query ID collision
in src/include/nodes/parsenodes.h.

Attachments:

Query-ID-collision-at_pg_stat_statements-1ea6e890b22.txt (text/plain)
v2-0001-Query-ID-Calculation-Fix-Variant-A.patch (application/octet-stream, +80/-6)
#9 David Rowley
dgrowleyml@gmail.com
In reply to: Bykov Ivan (#8)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Mon, 10 Mar 2025 at 23:17, Bykov Ivan <i.bykov@modernsys.ru> wrote:

I have attached the updated Variant A patch with the following changes:
- Implemented a pg_stat_statements regression test (see similar test results
on REL_17_3 in Query-ID-collision-at_pg_stat_statements-1ea6e890b22.txt).
- Added extra description about the Query ID collision
in src/include/nodes/parsenodes.h.

I've not paid much attention to the jumble code before, but I did
glance at this patch:

+ *
+ *   The query jumbling process may trigger a Query ID collision when
+ *   two Query fields located sequentially contain the same type of nodes
+ *   (a list of nodes), and both of them may be NULL.
+ *   List of such groups of fields:
+ *     - limitOffset and limitCount (which may be an int8 expression or NULL);
+ *     - distinctClause, and sortClause (which may be a list of
+ *       SortGroupClause entries or NULL);
+ *   To prevent this collision, we should insert at least one scalar field
+ *   without the attribute query_jumble_ignore between such fields.
  */
 typedef struct Query
 {
@@ -208,11 +218,11 @@ typedef struct Query

 	List	   *distinctClause; /* a list of SortGroupClause's */

-	List	   *sortClause;		/* a list of SortGroupClause's */
-
 	Node	   *limitOffset;	/* # of result tuples to skip (int8 expr) */
-	Node	   *limitCount;		/* # of result tuples to return (int8 expr) */
 	LimitOption limitOption;	/* limit type */
+	Node	   *limitCount;		/* # of result tuples to return (int8 expr) */
+
+	List	   *sortClause;		/* a list of SortGroupClause's */

 	List	   *rowMarks;		/* a list of RowMarkClause's */

It seems to me that if this fixes the issue, that the next similar one
is already lurking in the shadows waiting to jump out on us.

Couldn't we adjust all this code so that we pass a seed to
AppendJumble() that's the offsetof(Struct, field) that we're jumbling
and incorporate that seed into the hash? I don't think we could
possibly change the offset in a minor version, so I don't think
there's a risk that could cause jumble value instability. Possibly
different CPU architectures would come up with different offsets
through having different struct alignment requirements, but we don't
make any guarantees about the jumble values matching across different
CPU architectures anyway.

I admit to only having spent a few minutes looking at this, so there
may well be a good reason for not doing this. Maybe Michael can tell
me what that is...?

A very quickly put together patch is attached. I intend this only to
demonstrate what I mean, not because I want to work on it myself.

David

Attachments:

v1-0001-A-very-rough-proof-of-concept-to-fix-the-problem-.patch (application/octet-stream, +37/-30)
#10 Sami Imseih
samimseih@gmail.com
In reply to: David Rowley (#9)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

It seems to me that if this fixes the issue, that the next similar one
is already lurking in the shadows waiting to jump out on us.

Yes, it is true that there may be other cases, but I am not sure it's
worth carrying all the extra bytes in the jumble to deal with a few
cases like this. This is why I don't think Variant B or tracking the
offset is a thrilling idea. -1 for me.

--
Sami

#11 Sami Imseih
samimseih@gmail.com
In reply to: Sami Imseih (#10)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

transformation. For these reasons, variant A where we put the
LimitOption between the two int8 expression nodes feels like the
"okay" approach here. But we must document this expectation in the
structure, and check for more grammar variants of LIMIT and OFFSET
clauses in pgss.

Please see the attached. Variant A with comments and some additional
test cases.

It should be noted that we currently have "WITH TIES/ROWS ONLY" tests in pg_s_s,
so I added another case to show "FETCH FIRST 2 ROW ONLY" and "LIMIT 2" are
the same queryId and also added a query that uses both a LIMIT and OFFSET.

I could not think of other cases we need to cover.

--

Sami

Attachments:

v1-0001-Fix-QueryId-collision-for-LIMIT-and-OFFSET.patch (application/octet-stream, +43/-3)
#12 Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#9)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, Mar 11, 2025 at 12:45:35AM +1300, David Rowley wrote:

It seems to me that if this fixes the issue, that the next similar one
is already lurking in the shadows waiting to jump out on us.

For how many years will we have to wait for this threat hidden in the
shadows? :)

This issue has existed since query jumbling was introduced in pgss, so
it goes back a long way. I've spent a bit more than a few minutes on
that.

Couldn't we adjust all this code so that we pass a seed to
AppendJumble() that's the offsetof(Struct, field) that we're jumbling
and incorporate that seed into the hash? I don't think we could
possibly change the offset in a minor version, so I don't think
there's a risk that could cause jumble value instability. Possibly
different CPU architectures would come up with different offsets
through having different struct alignment requirements, but we don't
make any guarantees about the jumble values matching across different
CPU architectures anyway.

Yeah, something like that has potential to get rid of such problems
forever, particularly thanks to the fact that we automate the
generation of this code now so it is mostly cost-free. This has a
cost for the custom jumbling functions where one needs some
imagination, but with models being around while we keep the number of
custom functions low, that should be neither complicated nor costly,
at least in my experience.

When I was thinking about the addition of the offset to the jumbling
yesterday, I was wondering about two things:
- if there are some node structures where it could not work.
- if we'd need to pass down some information of the upper node when
doing the jumbling of a node attached to it, which would make the code
generation much more annoying.

But after sleeping on it I think that these two points are kind of
moot: having only the offset passed down to a single _jumbleNode()
with the offset compiled at its beginning should be sufficient. Using
"seed" as a term is perhaps a bit confusing, because this is an offset
in the upper node, but perhaps that's OK as-is. It's just more
entropy addition. If somebody has a better idea for this term, feel
free.

_jumbleList() is an interesting one. If we want an equivalent of the
offset, this could use the item number in the list which would be a
rather close equivalent to the offset used elsewhere. For the int,
oid and xid lists, we should do an extra JUMBLE_FIELD_SINGLE() for
each member, apply the offset at the beginning of _jumbleList(), once,
I guess.

_jumbleVariableSetStmt() should be OK with the offset of "arg" in
VariableSetStmt. The addition of hash_combine64() to count for the
offset should be OK.

With that in mind, the cases with DISTINCT/ORDER BY and OFFSET/LIMIT
seem to work fine, without causing noise for the other cases tracked
in the regression tests of PGSS.

What do you think?
--
Michael

Attachments:

0001-Add-node-offsets-in-query-jumbling-computations.patch (text/x-diff, +157/-30)
#13 David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#12)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, 11 Mar 2025 at 19:50, Michael Paquier <michael@paquier.xyz> wrote:

On Tue, Mar 11, 2025 at 12:45:35AM +1300, David Rowley wrote:

It seems to me that if this fixes the issue, that the next similar one
is already lurking in the shadows waiting to jump out on us.

For how many years will we have to wait for this threat hidden in the
shadows? :)

This issue has existed since query jumbling was introduced in pgss, so
it goes back a long way. I've spent a bit more than a few minutes on
that.

I didn't mean to cause offence here. I just wanted to make it clear
that I don't think fixing these types of issues by swapping the order
of the fields is going to be a sustainable practice, and it would be
good to come up with a solution that eliminates the chances of this
class of bug from ever appearing again. Even if we were to trawl the
entire Query struct and everything that could ever be attached to it
and we deem that today it's fine and no more such bugs exist, the
person adding some new SQL feature in the future that adds new data to
store in the Query probably isn't going to give query jumbling much
thought, especially now that the code generation is automatic.

Couldn't we adjust all this code so that we pass a seed to
AppendJumble() that's the offsetof(Struct, field) that we're jumbling
and incorporate that seed into the hash? I don't think we could
possibly change the offset in a minor version, so I don't think
there's a risk that could cause jumble value instability. Possibly
different CPU architectures would come up with different offsets
through having different struct alignment requirements, but we don't
make any guarantees about the jumble values matching across different
CPU architectures anyway.

Yeah, something like that has potential to get rid of such problems
forever, particularly thanks to the fact that we automate the
generation of this code now so it is mostly cost-free. This has a
cost for the custom jumbling functions where one needs some
imagination, but with models being around while we keep the number of
custom functions low, that should be neither complicated nor costly,
at least in my experience.

I hadn't really processed this thread fully when I responded yesterday
and my response was mostly aimed at the latest patch on the thread at
the time, which I had looked at quickly and wanted to put a stop to
changing field orders as a fix for this. Since then, I see that Ivan
has already submitted a patch that accounts for NULL nodes and adds a
byte to the jumble buffer to account for NULLs. This seems quite clean
and simple. However, Sami seems to have concerns about the overhead of
doing this. Is that warranted at all? Potentially, there could be a
decent number of NULL fields. It'll probably be much cheaper than the
offsetof idea I came up with.

When I was thinking about the addition of the offset to the jumbling
yesterday, I was wondering about two things:
- if there are some node structures where it could not work.
- if we'd need to pass down some information of the upper node when
doing the jumbling of a node attached to it, which would make the code
generation much more annoying.

But after sleeping on it I think that these two points are kind of
moot: having only the offset passed down to a single _jumbleNode()
with the offset compiled at its beginning should be sufficient. Using
"seed" as a term is perhaps a bit confusing, because this is an offset
in the upper node, but perhaps that's OK as-is. It's just more
entropy addition. If somebody has a better idea for this term, feel
free.

Can you share your thoughts on Ivan's NULL jumble idea?

_jumbleList() is an interesting one. If we want an equivalent of the
offset, this could use the item number in the list which would be a
rather close equivalent to the offset used elsewhere. For the int,
oid and xid lists, we should do an extra JUMBLE_FIELD_SINGLE() for
each member, apply the offset at the beginning of _jumbleList(), once,
I guess.

Is this affected by the same class of bug that we're aiming to fix
here? (This class being not always jumbling all fields because of
NULLs or some other value, resulting in the next jumble field applying
the same bytes to the buffer as the previous field would have, had it
not been ignored)

_jumbleVariableSetStmt() should be OK with the offset of "arg" in
VariableSetStmt. The addition of hash_combine64() to count for the
offset should be OK.

With that in mind, the cases with DISTINCT/ORDER BY and OFFSET/LIMIT
seem to work fine, without causing noise for the other cases tracked
in the regression tests of PGSS.

What do you think?

I mostly now think the class of bug can be fixed by ensuring we never
ignore any jumble field for any reason and always put at least 1 byte
onto the buffer with some sort of entropy. I'm keen to understand if
I'm missing some case that you've thought about that I've not.

David

#14 Bykov Ivan
i.bykov@modernsys.ru
In reply to: David Rowley (#13)

Hello!

Since then, I see that Ivan
has already submitted a patch that accounts for NULL nodes and adds a
byte to the jumble buffer to account for NULLs. This seems quite clean
and simple. However, Sami seems to have concerns about the overhead of
doing this. Is that warranted at all? Potentially, there could be a
decent number of NULL fields. It'll probably be much cheaper than the
offsetof idea I came up with.

To address the concern that it consumes a lot of bytes in the jumble buffer
for NULL marks, I have added a new patch version for Variant B
(v2-0001-Query-ID-Calculation-Fix-Variant-B.patch).

This patch adds to JumbleState a count of the NULL nodes visited before a
non-NULL one appears. The least significant byte of this count is appended
to the jumble buffer whenever the count is nonzero (indicating that we have
visited NULL nodes previously) and a new chunk of data is about to be
appended. I intentionally did not add a final append of the NULL count at
the end of JumbleQuery processing, as NULL tail nodes of the query do not
improve the reliability of query ID collision resolution.

I believe my first version of the Variant B patch is simpler and easier to
understand, and I would prefer to use it, so I have attached an updated
version along with tests
(v3-0001-Query-ID-Calculation-Fix-Variant-B.patch).

As you can see, v2-0001-Query-ID-Calculation-Fix-Variant-B.patch is more
complex and invasive than v3-0001-Query-ID-Calculation-Fix-Variant-B.patch.
I don't think that saving a few bytes in a 1024-byte buffer is worth
implementing even this simple optimization (v2).

> Can you share your thoughts on Ivan's NULL jumble idea?

I would appreciate any feedback. Thank you.

-----Original Message-----
From: David Rowley <dgrowleyml@gmail.com>
Sent: Tuesday, March 11, 2025 12:49 PM
To: Michael Paquier <michael@paquier.xyz>
Cc: Быков Иван Александрович <i.bykov@modernsys.ru>; Sami Imseih <samimseih@gmail.com>; pgsql-hackers@postgresql.org
Subject: Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, 11 Mar 2025 at 19:50, Michael Paquier <michael@paquier.xyz> wrote:

> On Tue, Mar 11, 2025 at 12:45:35AM +1300, David Rowley wrote:
>
>> It seems to me that if this fixes the issue, the next similar
>> one is already lurking in the shadows waiting to jump out on us.

> For how many years will we have to wait for this threat hidden in the
> shadows? :)
>
> This issue has existed since query jumbling was added to pgss, so it
> goes back a long way. I've spent a bit more than a few minutes on
> that.

I didn't mean to cause offence here. I just wanted to make it clear that I don't think fixing these types of issues by swapping the order of the fields is going to be a sustainable practice, and it would be good to come up with a solution that eliminates the chances of this class of bug from ever appearing again. Even if we were to trawl the entire Query struct and everything that could ever be attached to it and we deem that today it's fine and no more such bugs exist, the person adding some new SQL feature in the future that adds new data to store in the Query probably isn't going to give query jumbling much thought, especially now that the code generation is automatic.

>> Couldn't we adjust all this code so that we pass a seed to
>> AppendJumble() that's the offsetof(Struct, field) that we're
>> jumbling and incorporate that seed into the hash? I don't think we
>> could possibly change the offset in a minor version, so I don't
>> think there's a risk that could cause jumble value instability.
>> Possibly different CPU architectures would come up with different
>> offsets through having different struct alignment requirements, but
>> we don't make any guarantees about the jumble values matching across
>> different CPU architectures anyway.

> Yeah, something like that has potential to get rid of such problems
> forever, particularly thanks to the fact that we automate the
> generation of this code now so it is mostly cost-free. This has a
> cost for the custom jumbling functions where one needs some
> imagination, but with models being around while we keep the number of
> custom functions low, that should be neither complicated nor costly,
> at least in my experience.

I hadn't really processed this thread fully when I responded yesterday and my response was mostly aimed at the latest patch on the thread at the time, which I had looked at quickly and wanted to put a stop to changing field orders as a fix for this. Since then, I see that Ivan has already submitted a patch that accounts for NULL nodes and adds a byte to the jumble buffer to account for NULLs. This seems quite clean and simple. However, Sami seems to have concerns about the overhead of doing this. Is that warranted at all? Potentially, there could be a decent number of NULL fields. It'll probably be much cheaper than the offsetof idea I came up with.

When I was thinking about the addition of the offset to the jumbling
yesterday, I was wondering about two things:
- if there are some node structures where it could not work.
- if we'd need to pass down some information of the upper node when
doing the jumbling of a node attached to it, which would make the code
generation much more annoying.

But after sleeping on it I think that these two points are kind of
moot: having only the offset passed down to a single _jumbleNode()
with the offset compiled at its beginning should be sufficient. Using
"seed" as a term is perhaps a bit confusing, because this is an offset
in the upper node, but perhaps that's OK as-is. It's just more
entropy addition. If somebody has a better idea for this term, feel
free.

Can you share your thoughts on Ivan's NULL jumble idea?

_jumbleList() is an interesting one. If we want an equivalent of the
offset, this could use the item number in the list which would be a
rather close equivalent to the offset used elsewhere. For the int,
oid and xid lists, we should do an extra JUMBLE_FIELD_SINGLE() for
each member, apply the offset at the beginning of _jumbleList(), once,
I guess.

Is this affected by the same class of bug that we're aiming to fix here? (This class being not always jumbling all fields because of NULLs or some other value, resulting in the next jumble field applying the same bytes to the buffer as the previous field would have, had it not been ignored)

_jumbleVariableSetStmt() should be OK with the offset of "arg" in
VariableSetStmt. The addition of hash_combine64() to count for the
offset should be OK.

With that in mind, the cases with DISTINCT/ORDER BY and OFFSET/LIMIT
seem to work fine, without causing noise for the other cases tracked
in the regression tests of PGSS.

What do you think?

I mostly now think the class of bug can be fixed by ensuring we never ignore any jumble field for any reason and always put at least 1 byte onto the buffer with some sort of entropy. I'm keen to understand if I'm missing some case that you've thought about that I've not.

David

Attachments:

v2-0001-Query-ID-Calculation-Fix-Variant-B.patch (+111 -12)
v3-0001-Query-ID-Calculation-Fix-Variant-B.patch (+72 -3)
#15Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#13)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, Mar 11, 2025 at 08:48:33PM +1300, David Rowley wrote:

> On Tue, 11 Mar 2025 at 19:50, Michael Paquier <michael@paquier.xyz> wrote:
>
>> This issue has existed since query jumbling was added to pgss, so it
>> goes back a long way. I've spent a bit more than a few minutes on
>> that.

> I didn't mean to cause offence here. I just wanted to make it clear
> that I don't think fixing these types of issues by swapping the order
> of the fields is going to be a sustainable practice, and it would be
> good to come up with a solution that eliminates the chances of this
> class of bug from ever appearing again. Even if we were to trawl the
> entire Query struct and everything that could ever be attached to it
> and we deem that today it's fine and no more such bugs exist, the
> person adding some new SQL feature in the future that adds new data to
> store in the Query probably isn't going to give query jumbling much
> thought, especially now that the code generation is automatic.

FWIW, I've mentioned the use of the offset in a Node upthread, in the
next-to-last paragraph of this email:
/messages/by-id/Z84mhjm5RtWtLG4X@paquier.xyz

One thing I did not notice yesterday was that it is possible to rely
on _jumbleNode() to pass an offset from the parent. So it looks like
we had roughly the same idea independently, but I was not able to look
more closely at that yesterday.

> But after sleeping on it I think that these two points are kind of
> moot: having only the offset passed down to a single _jumbleNode()
> with the offset compiled at its beginning should be sufficient. Using
> "seed" as a term is perhaps a bit confusing, because this is an offset
> in the upper node, but perhaps that's OK as-is. It's just more
> entropy addition. If somebody has a better idea for this term, feel
> free.

> Can you share your thoughts on Ivan's NULL jumble idea?

This is an incomplete solution, I am afraid. The origin of the
problem comes from the fact that a Node (here, Query) stores two
equivalent Nodes in succession that are used for separate data, with
NULL being the difference between both. The problem is not limited to
NULL: we could as well, for example, end up with a Node that has a
custom jumbling function and hit the same issue, depending on how it
is used in a parent Node. Adding the offset from the parent in the
jumbling is a much stronger insurance policy than the proposed
solution specific to NULL, because each offset is unique within a
single Node.

> _jumbleList() is an interesting one. If we want an equivalent of the
> offset, this could use the item number in the list, which would be a
> rather close equivalent to the offset used elsewhere. For the int,
> oid and xid lists, we should do an extra JUMBLE_FIELD_SINGLE() for
> each member, apply the offset at the beginning of _jumbleList(), once,
> I guess.

> Is this affected by the same class of bug that we're aiming to fix
> here? (This class being not always jumbling all fields because of
> NULLs or some other value, resulting in the next jumble field applying
> the same bytes to the buffer as the previous field would have, had it
> not been ignored)

Yep, that could be possible. I am not sure if it can be reached
currently with the set of parse nodes, but in theory it could be
possible with a list of Nodes, at least.
--
Michael

#16Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#15)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

> and simple. However, Sami seems to have concerns about the overhead of
> doing this. Is that warranted at all? Potentially, there could be a
> decent number of NULL fields. It'll probably be much cheaper than the
> offsetof idea I came up with.

I have not benchmarked the overhead, so maybe there is not much to
be concerned about. However, it seems to me that tracking extra data
in all cases just to deal with corner cases is not the correct
approach. This is what makes variant A the most attractive approach.

--

Sami

#17Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#16)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, Mar 11, 2025 at 05:35:10PM -0500, Sami Imseih wrote:

> I have not benchmarked the overhead, so maybe there is not much to
> be concerned about. However, it seems to me that tracking extra data
> in all cases just to deal with corner cases is not the correct
> approach. This is what makes variant A the most attractive approach.

I suspect that the overhead will be minimal for all the approaches I'm
seeing on this thread, but it would not hurt to double-check all that.
As the overhead of a single query jumbling is weightless compared to
the overall query processing, the fastest method I've used in this
area is a micro-benchmark with a hardcoded loop in JumbleQuery() with
some rusage calls to get a few more metrics. This exaggerates the
query jumbling cost, but it's good enough to see a difference once you
take an average of the time taken for each loop.
--
Michael

#18David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#15)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, 11 Mar 2025 at 23:28, Michael Paquier <michael@paquier.xyz> wrote:

>> Can you share your thoughts on Ivan's NULL jumble idea?

> This is an incomplete solution, I am afraid. The origin of the
> problem comes from the fact that a Node (here, Query) stores two
> equivalent Nodes in succession that are used for separate data, with
> NULL being the difference between both. The problem is not limited to
> NULL: we could as well, for example, end up with a Node that has a
> custom jumbling function and hit the same issue, depending on how it
> is used in a parent Node. Adding the offset from the parent in the
> jumbling is a much stronger insurance policy than the proposed
> solution specific to NULL, because each offset is unique within a
> single Node.

One of us is not understanding the problem correctly. I'm very open
for that to be me, but I'll need a bit more explanation for me to know
that for sure.

As far as I see it, providing we ensure we always add something with a
bit of entropy to the jumble buffer for all possible values that a
jumble field can be, this fixes the issue. The problem only occurs at
the moment because we entirely skip over NULL node fields and we end
up with the same jumble bytes if we effectively move the same List of
nodes between the Query's distinctClause and sortClause fields.

If we add something for NULLs, we'll always add distinctClause before
the sortClause. If the distinctClause is NULL we won't end up with the
same jumble bytes as if the sortClause were NULL, as the order the
NULL byte(s) are added to the buffer changes.

The fragment of the Query struct looks like:

List *windowClause; /* a list of WindowClause's */

List *distinctClause; /* a list of SortGroupClause's */

List *sortClause; /* a list of SortGroupClause's */

Let's assume we have a non-NIL windowClause, a non-NIL distinctClause
and a NIL sortClause. The jumble bytes currently look like:
"<windowClause bytes><distinctClause bytes>", and if we have an ORDER
BY instead of a DISTINCT, we get: "<windowClause bytes><sortClause
bytes>". This is problematic when the distinctClause bytes and
sortClause bytes are the same, as that results in the same hash.

If we account for the NULLs, those two cases become: "<windowClause
bytes><distinctClause bytes><byte for NULL sortClause>" and
"<windowClause bytes><byte for NULL distinct clause><sortClause
bytes>", which is going to be highly unlikely to hash to the same
value due to the buffer not being the same.

Can you explain where my understanding is wrong?

David

#19Sami Imseih
samimseih@gmail.com
In reply to: David Rowley (#18)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

> If we add something for NULLs, we'll always add distinctClause before
> the sortClause. If the distinctClause is NULL we won't end up with the
> same jumble bytes as if the sortClause were NULL, as the order the
> NULL byte(s) are added to the buffer changes.

That is how I understand it as well. By appending a single-character null
terminator to the jumble for every NULL expression, we change the composition
of the final jumble. Passing offsets would accomplish the same thing, but I
can't see how it buys us any additional safeguards.

> I suspect that the overhead will be minimal for all the approaches I'm
> seeing on this thread, but it would not hurt to double-check all that.

Perhaps my concern is overblown. Also, it seems the consensus is for a more
comprehensive solution than that of variant A.

Here is an idea I was playing around with: why don't we just append a null
terminator for every expression (a variant of variant B)?
I describe this as jumbling the expression position. And if we do that,
it no longer seems necessary to jumble the expression type either, right?

+++ b/src/backend/nodes/queryjumblefuncs.c
@@ -236,6 +236,13 @@ do { \
 	if (expr->str) \
 		AppendJumble(jstate, (const unsigned char *) (expr->str), strlen(expr->str) + 1); \
 } while(0)
+#define JUMBLE_POSITION() \
+do { \
+	char nullterm = '\0'; \
+	AppendJumble(jstate, (const unsigned char *) &(nullterm), sizeof(nullterm)); \
+	if (expr == NULL) \
+		return; \
+} while(0)
 
 #include "queryjumblefuncs.funcs.c"
 
@@ -244,17 +251,15 @@ _jumbleNode(JumbleState *jstate, Node *node)
 {
 	Node	   *expr = node;
 
-	if (expr == NULL)
-		return;
-
 	/* Guard against stack overflow due to overly complex expressions */
 	check_stack_depth();
 
 	/*
-	 * We always emit the node's NodeTag, then any additional fields that are
-	 * considered significant, and then we recurse to any child nodes.
+	 * We represent a position of a field in the jumble with a null-term.
+	 * Doing so ensures that even NULL expressions are accounted for in
+	 * the jumble.
 	 */
-	JUMBLE_FIELD(type);
+	JUMBLE_POSITION();
 
 	switch (nodeTag(expr))
 	{

> As the overhead of a single query jumbling is weightless compared to
> the overall query processing

yeah, that is a good point. At least benchmarking the above with a
SELECT-only pgbench run did not reveal any regression.

--
Sami Imseih
Amazon Web Services (AWS)

#20David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#13)
Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET

On Tue, 11 Mar 2025 at 20:48, David Rowley <dgrowleyml@gmail.com> wrote:

> Since then, I see that Ivan
> has already submitted a patch that accounts for NULL nodes and adds a
> byte to the jumble buffer to account for NULLs. This seems quite clean
> and simple. However, Sami seems to have concerns about the overhead of
> doing this. Is that warranted at all? Potentially, there could be a
> decent number of NULL fields. It'll probably be much cheaper than the
> offsetof idea I came up with.

To try and get some timing information about the overhead added for
jumbling the NULLs, I compared master and the patch at [1] using the
attached jumbletime.patch.txt. The patch just adds an additional call
to JumbleQuery and times how long it takes and outputs the result in
nanoseconds. Running make check has the advantage of trying out many
different queries.

On an AMD 3990x machine, the results for jumbling all make check queries are:

master: 73.66 milliseconds
0001-Query-ID-Calculation-Fix-Variant-B.patch: 80.36 milliseconds

So that adds about 9.1% overhead to jumbling, on average.

See the attached jumble_results.txt for full results and the code to
extract the results.

David

[1]: /messages/by-id/attachment/173439/0001-Query-ID-Calculation-Fix-Variant-B.patch

Attachments:

jumbletime.patch.txt (+21 -0)
jumble_results.txt
#21Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#18)
#22Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#21)
#23Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#22)
#24Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#23)
#25Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#24)
#26David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#25)
#27Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#26)
#28David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#27)
#29Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#28)
#30Bykov Ivan
i.bykov@modernsys.ru
In reply to: Michael Paquier (#29)
#31Michael Paquier
michael@paquier.xyz
In reply to: Bykov Ivan (#30)
#32David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#29)
#33Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#32)
#34David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#33)
#35Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#34)
#36Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#35)
#37David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#36)
#38Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#37)
#39David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#35)
#40Bykov Ivan
i.bykov@modernsys.ru
In reply to: David Rowley (#39)
#41David Rowley
dgrowleyml@gmail.com
In reply to: Bykov Ivan (#40)
#42David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#41)
#43Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#42)
#44Bykov Ivan
i.bykov@modernsys.ru
In reply to: Michael Paquier (#43)
#45David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#43)
#46Michael Paquier
michael@paquier.xyz
In reply to: David Rowley (#45)
#47Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#46)
#48David Rowley
dgrowleyml@gmail.com
In reply to: Sami Imseih (#47)
#49David Rowley
dgrowleyml@gmail.com
In reply to: Michael Paquier (#46)