[PATCH] Optionally record Plan IDs to track plan changes for a query

Started by Lukas Fittl, about 1 year ago · 43 messages
#1Lukas Fittl
lukas@fittl.com

Hi all,

Inspired by a prior proposal by Sami Imseih for tracking Plan IDs [0], as
well as extensions like pg_stat_plans [1] (unmaintained), pg_store_plans
[2] (not usable on production, see notes later) and aurora_stat_plans [3]
(enabled by default on AWS), this proposed patch set adds:

1. An updated in-core facility to optionally track Plan IDs based on
hashing the plan nodes during the existing treewalk in setrefs.c -
controlled by the new "compute_plan_id" GUC
2. An example user of plan IDs with a new pg_stat_plans extension in
contrib, that also records the first plan text with EXPLAIN (COSTS OFF)

My overall perspective is that (1) is best done in-core to keep overhead
low, whilst (2) could be done outside of core (or merged with a future
pg_stat_statements) and is included here mainly for illustration purposes.

Notes including what constitutes a plan ID follow, after a quick example:

## Example

Having the planid + an extension that records it, plus the first plan text,
lets you track different plans for the same query:

bench=# SELECT * FROM pgbench_accounts WHERE aid = 123;
bench=# SET enable_indexscan = off;
bench=# SELECT * FROM pgbench_accounts WHERE aid = 123;
bench=# SELECT queryid, planid, plan FROM pg_stat_plans WHERE plan LIKE
'%pgbench%';
       queryid        |        planid        |                            plan
----------------------+----------------------+------------------------------------------------------------
 -5986989572677096226 | -2057350818695327558 | Index Scan using pgbench_accounts_pkey on pgbench_accounts+
                      |                      |   Index Cond: (aid = 123)
 -5986989572677096226 |  2815444815385882663 | Bitmap Heap Scan on pgbench_accounts                      +
                      |                      |   Recheck Cond: (aid = 123)                               +
                      |                      |   ->  Bitmap Index Scan on pgbench_accounts_pkey          +
                      |                      |         Index Cond: (aid = 123)

And this also supports showing the plan for a currently running query (call
count is zero in such cases):

session 1:
bench=# SELECT pg_sleep(100), COUNT(*) FROM pgbench_accounts;

session 2:
bench=# SELECT query, plan FROM pg_stat_activity
  JOIN pg_stat_plans ON (usesysid = userid AND datid = dbid AND query_id =
queryid AND plan_id = planid)
  WHERE query LIKE 'SELECT pg_sleep%';
                         query                         |                plan
-------------------------------------------------------+------------------------------------
 SELECT pg_sleep(100), COUNT(*) FROM pgbench_accounts; | Aggregate                         +
                                                       |   ->  Seq Scan on pgbench_accounts

## What is a plan ID?

My overall hypothesis here is that identifying different plan shapes for
the same normalized query (i.e. queryid) is useful, because it lets you
detect the use of different plan choices - such as which join order or index
was used - based on different input parameters (or different column
statistics due to a recent ANALYZE).

You can get this individually for a given query with EXPLAIN of course, but
if you want to track this over time, the only workable mechanism in my
experience is auto_explain, which is good for sampling outliers, but bad
for getting a comprehensive view of which plans were used and how often.

To me the closest to what I consider a "plan shape" is the output of
EXPLAIN (COSTS OFF), that is, the plan nodes and their filters/conditions,
but discarding the exact costs as well as ignoring any execution
statistics. The idea behind the proposed plan ID implementation is trying
to match that by hashing plan nodes, similar to how query IDs hash
post-parse analysis query nodes.

One notable edge case is plans that involve partitions - those could of
course lead to a lot of different planids for a given queryid, based on how
many partitions were pruned. We could consider special casing this, e.g. by
trying to be smart about declarative partitioning, and considering plans to
be identical if they scan the same number of partitions with the same scan
methods. However this could also be done by an out-of-core extension,
either by defining a better planid mechanism, or maintaining a grouped
planid of sorts based on the internal planid.

The partitions problem reminds me a bit of the IN list problem with
pg_stat_statements (which we still haven't resolved) - despite the problem
the extension has been successfully used for many years by many Postgres
users, even for those workloads where you have thousands of entries for the
same query with different IN list lengths.

## Why does this need to be in core?

Unfortunately both existing open-source extensions I'm familiar with are
not suitable for production use. Out of the two, only pg_store_plans [2] is
being maintained, however it carries significant overhead because it
calculates the plan ID by hashing the EXPLAIN text output every time a
query is executed.

My colleague Marko (CCed) and I evaluated whether pg_store_plans could be
modified to instead calculate the planid by hashing the plan tree, and ran
into three issues:

1. The existing node jumbling in core is not usable by extensions, and it
is necessary to have something like it for hashing Filters/Conds
(ultimately requiring us to duplicate all of it in the extension, and keep
maintaining that for every major release)
2. Whilst it's cheap enough, it seems unnecessary to do an additional tree
walk when setrefs.c already walks the plan tree in a near-final state
3. It seems useful to enable showing the plan shape of a currently running
query (e.g. to identify whether a plan regression causes the query to run
forever), and this is much easier to do by adding planid to
pg_stat_activity, like the queryid

I also suspect that Aurora's implementation in [3] had some in-core
modifications to enable it to work efficiently, but I'm not familiar with any
implementation details beyond what's in the public documentation.

## Implementation notes

The attached patch set includes two preparatory patches that could be
committed independently if deemed useful:

The first patch allows use of node jumbling by other unit files /
extensions, which would help an out-of-core extension avoid duplicating all
the node jumbling code.

The second patch adds a function for the extensible cumulative statistics
system to drop all entries for a given statistics kind. This already exists
for resetting, but in case of a dynamic list of entries it's more useful
be able to drop all of them when "reset" is called.

The third patch adds plan ID tracking in core. This is turned off by
default, and can be enabled by setting "compute_plan_id" to "on". Plan IDs
are shown in pg_stat_activity, as well as EXPLAIN and auto_explain output,
to allow matching a given plan ID to a plan text, without requiring the use
of an extension. There are some minor TODOs in the plan jumbling logic that
I haven't finalized yet. There is also an open question as to whether we
should use the node attribute mechanism instead of custom jumbling logic.

The fourth patch adds the pg_stat_plans contrib extension, for illustrative
purposes. This is inspired by pg_stat_statements, but intentionally kept
separate for easier review and since it does not use an external file and
could technically be used independently. We may want to develop this into a
unified pg_stat_statements+plans in-core mechanism in the future, but I
think that is best kept for a separate discussion.

The pg_stat_plans extension utilizes the cumulative statistics system for
tracking statistics (extensible thanks to recent changes!), as well as
dynamic shared memory to track plan texts up to a given limit (2kB by
default). As a side note, managing extra allocations with the new
extensible stats is a bit cumbersome - it would be helpful to have a hook
for cleaning up data associated to entries (like a DSA allocation).

Thanks,
Lukas

[0]: /messages/by-id/604E3199-2DD2-47DD-AC47-774A6F97DCA9@amazon.com
[1]: https://github.com/2ndQuadrant/pg_stat_plans
[2]: https://ossc-db.github.io/pg_store_plans/
[3]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora_stat_plans.html

--
Lukas Fittl

Attachments:

0002-Cumulative-statistics-Add-pgstat_drop_entries_of_kin.patch (+34 -1)
0003-Optionally-record-a-plan_id-in-PlannedStmt-to-identi.patch (+722 -20)
0001-Allow-using-jumbling-logic-outside-of-query-jumble-u.patch (+41 -25)
0004-Add-pg_stat_plans-contrib-extension.patch (+1787 -1)
#2Jeremy Schneider
schneider@ardentperf.com
In reply to: Lukas Fittl (#1)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, 2 Jan 2025 12:46:04 -0800
Lukas Fittl <lukas@fittl.com> wrote:

this proposed patch set adds:

1. An updated in-core facility to optionally track Plan IDs based on
hashing the plan nodes during the existing treewalk in setrefs.c -
controlled by the new "compute_plan_id" GUC
2. An example user of plan IDs with a new pg_stat_plans extension in
contrib, that also records the first plan text with EXPLAIN (COSTS
OFF)

My overall perspective is that (1) is best done in-core to keep
overhead low, whilst (2) could be done outside of core (or merged
with a future pg_stat_statements) and is included here mainly for
illustration purposes.

And 2025 is starting with a bang! Nice to see this email! Being able to
collect telemetry that indicates when plan changes happened would be
very useful.

The specifics of how a plan ID is generated are going to have some edge
cases (as you noted).

I concur that the ideal place for this to eventually land would be
alongside queryid in pg_stat_activity.

-Jeremy

#3Artem Gavrilov
artem.gavrilov@percona.com
In reply to: Lukas Fittl (#1)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, Jan 2, 2025 at 10:47 PM Lukas Fittl <lukas@fittl.com> wrote:

The first patch allows use of node jumbling by other unit files /
extensions, which would help an out-of-core extension avoid duplicating all
the node jumbling code.

The second patch adds a function for the extensible cumulative statistics
system to drop all entries for a given statistics kind. This already exists
for resetting, but in case of a dynamic list of entries its more useful to
be able to drop all of them when "reset" is called.

The third patch adds plan ID tracking in core. This is turned off by
default, and can be enabled by setting "compute_plan_id" to "on". Plan IDs
are shown in pg_stat_activity, as well as EXPLAIN and auto_explain output,
to allow matching a given plan ID to a plan text, without requiring the use
of an extension. There are some minor TODOs in the plan jumbling logic that
I haven't finalized yet. There is also an open question whether we should
use the node attribute mechanism instead of custom jumbling logic?

The fourth patch adds the pg_stat_plans contrib extension, for
illustrative purposes. This is inspired by pg_stat_statements, but
intentionally kept separate for easier review and since it does not use an
external file and could technically be used independently. We may want to
develop this into a unified pg_stat_statements+plans in-core mechanism in
the future, but I think that is best kept for a separate discussion.

The pg_stat_plans extension utilizes the cumulative statistics system for
tracking statistics (extensible thanks to recent changes!), as well as
dynamic shared memory to track plan texts up to a given limit (2kB by
default). As a side note, managing extra allocations with the new
extensible stats is a bit cumbersome - it would be helpful to have a hook
for cleaning up data associated to entries (like a DSA allocation).

Thanks,
Lukas


Hello Lukas,

We have another extension that does plan ID tracking: pg_stat_monitor. So I
think it would be great to have this functionality in core.

I tested your patch set on top of PG revision 86749ea3b76 on macOS. All
tests passed successfully. However, pgindent shows that some files are not
properly formatted.

--


Artem Gavrilov
Senior Software Engineer, Percona

artem.gavrilov@percona.com

#4Michael Paquier
michael@paquier.xyz
In reply to: Lukas Fittl (#1)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, Jan 02, 2025 at 12:46:04PM -0800, Lukas Fittl wrote:

Inspired by a prior proposal by Sami Imseih for tracking Plan IDs [0], as
well as extensions like pg_stat_plans [1] (unmaintained), pg_store_plans
[2] (not usable on production, see notes later) and aurora_stat_plans [3]
(enabled by default on AWS), this proposed patch set adds:

0002 introduces this new routine to delete all the entries of the new
stats kind you are adding:
+void
+pgstat_drop_entries_of_kind(PgStat_Kind kind)
+{
+	dshash_seq_status hstat;
+	PgStatShared_HashEntry *ps;
+	uint64		not_freed_count = 0;
+
+	dshash_seq_init(&hstat, pgStatLocal.shared_hash, true);

This is the same as pgstat_drop_all_entries(), except for the filter
based on the stats kind and the fact that you need to take care of the
local reference for an entry of this kind, if there are any, like
pgstat_drop_entry(). Why not, that can be useful on its own depending
on the stats you are working on. May I suggest the addition of a code
path outside of your main proposal to test this API? For example
injection_stats.c with a new SQL function to reset everything.

+static void
+pgstat_gc_plan_memory()
+{
+	dshash_seq_status hstat;
+	PgStatShared_HashEntry *p;
+
+	/* dshash entry is not modified, take shared lock */
+	dshash_seq_init(&hstat, pgStatLocal.shared_hash, false);
+	while ((p = dshash_seq_next(&hstat)) != NULL)
+	{
+		PgStatShared_Common *header;
+		PgStat_StatPlanEntry *statent;

Question time: pgstat_drop_entries_of_kind() is called once in 0004,
which does a second sequential scan of pgStatLocal.shared_hash.
That's not efficient, making me question why
pgstat_drop_entries_of_kind() is the best approach to use. I like
the idea of pgstat_drop_entries_of_kind(), less how it's applied in
the context of the main patch.

Mixed feelings about the choices of JumblePlanNode() in 0003 based on
its complexity as implemented. When it comes to such things, we
should keep the custom node functions short, applying node_attr
instead to the elements of the nodes so as the assumptions behind the
jumbling are documented within the structure definitions in the
headers, not the jumbling code itself.
--
Michael

#5Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#4)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

Thanks for starting this thread. This is an important feature.

I am still reviewing, but wanted to share some initial comments.

== pg_stat_plans extension (0004)

1. pg_stat_plans_1_0 should not call pgstat_fetch_entry.
This is not needed since we already have the entry with a shared lock,
and it could lead to an assertion failure because pgstat_fetch_entry
may conditionally call dshash_find, and dshash_find asserts that the lock
is not already held. Calling pgstat_get_entry_data should be
enough here.

2. a "toplevel" bool field is missing in pg_stat_plans to indicate the
plan is for a nested query.

3. I think we should add cumulative planning_time. This
probably should be controlled with a separate GUC as well.

4. For deallocation, I wonder if it makes more sense to zap the
plans with the lowest total execution time rather than calls; or make
this configurable. In fact, I think choosing the eviction strategy
should be done in pg_stat_statements as well ( but that belongs
in a separate discussion ). The idea is to give more priority to
plans that have the most overall database time.

5. What are your thoughts about controlling the memory by
total size rather than .max and .max_size? If a typical plan
is 2KB, a user can fit 10k plans in 20MB. A typical
user can probably allocate much more memory for this
purpose.

Also, pgstat_gc_plans is doing a loop over the
hash to get the # of entries. I don't think this
is a good idea for performance and it may not be possible to
actually enforce the .max on a dshash since
the lock is taken on a partition level.

6. I do like the idea of showing an in-flight plan.
This is so useful for a rare plan, especially on the
first capture of the plan ( calls = 0), and the planId
can be joined with pg_stat_activity to get the query
text.

    /* Record initial entry now, so plan text is available for
     * currently running queries */
    pgstat_report_plan_stats(queryDesc,
                             0,    /* executions are counted in pgsp_ExecutorEnd */
                             0.0);

We will need to be clear in the documentation
that calls being 0 is a valid scenario.

== core plan id computation (0003)

1. compute_plan_id should do exactly what compute_query_id
does. It should have an "auto" as the default which automatically
computes a plan id when pg_stat_plans is enabled.

2.

Mixed feelings about the choices of JumblePlanNode() in 0003 based on
its complexity as implemented. When it comes to such things, we
should keep the custom node functions short, applying node_attr
instead to the elements of the nodes so as the assumptions behind the
jumbling are documented within the structure definitions in the
headers, not the jumbling code itself.

+1

we should be able to control which node is considered for plan_id
computation using a node attribute such as plan_jumble_ignore.
I played around with this idea by building on top of your proposal
and attached my experiment code for this. The tricky part will be finalizing
which nodes and node fields to use for plan computation.

3. We may want to combine all the jumbling code into
a single jumble.c since the query and plan jumble will
share a lot of the same code, i.e. JumbleState,
_JumbleNode, etc.

Regards,

Sami Imseih
Amazon Web Services (AWS)

Attachments:

Experiment-with-plan-identifier-computation.patch (+553 -55)
#6Lukas Fittl
lukas@fittl.com
In reply to: Artem Gavrilov (#3)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Tue, Jan 21, 2025 at 10:47 AM Artem Gavrilov <artem.gavrilov@percona.com>
wrote:

We have another extension that does plan ID tracking: pg_stat_monitor. So
I think it would be great to have this functionality in core.

Thanks! I had forgotten that pg_stat_monitor can optionally track plan
statistics. It's actually another data point for why the plan ID calculation
should be in core:

Like pg_store_plans, pg_stat_monitor is hashing the plan text to calculate
the plan ID [0], which can have measurable overhead (judging from our
benchmarks of pg_store_plans). It also utilizes EXPLAIN (COSTS OFF) for
getting the plan text [1], which tracks with my thinking as to what should
be considered significant for the plan ID jumbling.

I tested your patch set on top of PG revision 86749ea3b76 on macOS. All
tests passed successfully. However, pgindent shows that some files are not
properly formatted.

Thanks, appreciate the test and the note re: pgindent - taking care of that
in the next patch refresh.

Thanks,
Lukas

[0]: https://github.com/percona/pg_stat_monitor/blob/main/pg_stat_monitor.c#L730
[1]: https://github.com/percona/pg_stat_monitor/blob/main/pg_stat_monitor.c#L678

--
Lukas Fittl

#7Andrei Lepikhov
lepihov@gmail.com
In reply to: Lukas Fittl (#1)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On 1/3/25 03:46, Lukas Fittl wrote:

My overall perspective is that (1) is best done in-core to keep overhead
low, whilst (2) could be done outside of core (or merged with a future
pg_stat_statements) and is included here mainly for illustration purposes.

Thank you for the patch and your attention to this issue!

I am pleased with the export of the jumbling functions and their
generalisation.

I may not be close to the task monitoring area, but I utilise queryId
and other tools to differentiate plan nodes inside extensions. Just as
queryId serves as a class identifier for queries, plan_id identifies a
class of nodes, not a single node. In the implementation provided here,
nodes with the same hash can represent different subtrees. For example,
JOIN(A, JOIN(B,C)) and JOIN(JOIN(B,C),A) may have the same ID.

Moreover, I wonder if this version of plan_id reacts to the join level
change. It appears that only a change of the join clause alters the
plan_id hash value, which means you would end up with a single hash for
very different plan nodes. Is that acceptable? To address this, we
should consider the hashes of the left and right subtrees and the hashes
of each subplan (especially in the case of Append).

Overall, similar to discussions on queryId, various extensions may want
different logic for generating plan_id (more or less unique guarantees,
for example). Hence, it would be beneficial to separate this logic and
allow extensions to provide different plan_ids. IMO, what we need is a
'List *ext' field in each of the Plan, Path, PlanStmt, and Query
structures. Such 'ext' field may contain different stuff that extensions
want to push without interference between them - specific plan_id as an
example.

Additionally, we could bridge the gap between the cloud of paths and the
plan by adding a hook at the end of the create_plan_recurse routine.
This may facilitate the transfer of information regarding optimiser
decisions that could be influenced by an extension into the plan.

--
regards, Andrei Lepikhov

#8Lukas Fittl
lukas@fittl.com
In reply to: Sami Imseih (#5)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

Thanks for the reviews! Attached an updated v2 patch set, notes inline
below.

On Tue, Jan 21, 2025 at 4:47 PM Michael Paquier <michael@paquier.xyz> wrote:

May I suggest the addition of a code
path outside of your main proposal to test this API? For example
injection_stats.c with a new SQL function to reset everything.

Good idea - added an example use of this to injection_stats.c in the
attached 0002.

Question time: pgstat_drop_entries_of_kind() is called once in 0004,

which does a second sequential scan of pgStatLocal.shared_hash.
That's not efficient, making me question what's the reason to think
why pgstat_drop_entries_of_kind() is the best approach to use. I like
the idea of pgstat_drop_entries_of_kind(), less how it's applied in
the context of the main patch.

My motivation for doing two scans here, one in pgstat_drop_entries_of_kind
and one in pgstat_gc_plan_memory (both called from the reset function) was
that the first time through we hold an exclusive lock on
pgStatLocal.shared_hash, vs the second time (when we free plan texts) we
hold a share lock.

Maybe that doesn't matter, since "dsa_free" is fast anyway, and we can just
do this all in one go whilst holding an exclusive lock?

Overall, I also do wonder if it wouldn't be better to have a callback
mechanism in the shared memory stats, so stats plugins can do extra work
when an entry gets dropped (like freeing the DSA memory for the plan text),
vs having to add all this extra logic to do it.

On Thu, Jan 23, 2025 at 5:25 PM Sami Imseih <samimseih@gmail.com> wrote:

Thanks for starting this thread. This is an important feature.

I am still reviewing, but wanted to share some initial comments.

Thanks for taking the time! I had started on a v2 patch based on Michael's
note before I saw your email and experiment, so apologies for sending this
in the middle of your review :)

== pg_stat_plans extension (0004)

1. pg_stat_plans_1_0 should not call pgstat_fetch_entry.l
This is not needed since we already have the entry with a shared lock
and it could lead to an assertion error when pgstat_fetch_entry
may conditionally call dshash_find. dshash_find asserts that the lock
is not already held. Calling pgstat_get_entry_data should be
enough here.

Fixed.

2. a "toplevel" bool field is missing in pg_stat_plans to indicate the
plan is for a nested query.

Good point - added.

3. I think we should add cumulative planning_time. This

Probably should be controlled with a separate GUC as well.

Hmm. Don't we already have that in pg_stat_statements?

Though, in practice I see that turned off most of the time (due to its
overhead?) - not sure if we could do better if this was done here instead.

4. For deallocation, I wonder if it makes more sense to zap the

plans with the lowest total execution time rather than calls; or make
this configurable. In fact, I think choosing the eviction strategy
should be done in pg_stat_statements as well ( but that belongs
in a separate discussion ). The idea is to give more priority to
plans that have the most overall database time.

Yeah, that's a good point - it's likely people would be most interested in
slow plans, vs those that were called a lot.

Happy to adjust it that way - I don't think we need to make it configurable
to be honest.

5. What are your thoughts about controlling the memory by

size rather than .max and .max_size ? if a typical plan
is 2KB, a user can fit 10k plans with 20MB. A typical
user can probably allocate much more memory for this
purpose.

Interesting idea! I'd be curious to get more feedback on the overall
approach here before digging into this, but I like this as it'd be more
intuitive from an end-user perspective.

6. I do like the idea of showing an in-flight plan.

This is so useful for a rare plan, especially on the
first capture of the plan ( calls = 0), and the planId
can be joined with pg_stat_activity to get the query
text.

Yes indeed, I think it would be a miss if we didn't allow looking at
in-flight plan IDs.

For the record, I can't take credit for the idea, I think I got this either
from your earlier plan ID patch, or from talking with you at PGConf NYC
last year.

== core plan id computation (0003)

1. compute_plan_id should do exactly what compute_query_id
does. It should have an "auto" as the default which automatically
computes a plan id when pg_stat_plans is enabled.

Yep, it does seem better to be consistent here. I added "auto" in v2 and
made it the default.

Mixed feelings about the choices of JumblePlanNode() in 0003 based on

its complexity as implemented. When it comes to such things, we
should keep the custom node functions short, applying node_attr
instead to the elements of the nodes so as the assumptions behind the
jumbling are documented within the structure definitions in the
headers, not the jumbling code itself.

+1

we should be able to control which node is considered for plan_id
computation using a node attribute such as plan_jumble_ignore.
I played around with this idea by building on top of your proposal
and attached my experiment code for this. The tricky part will be
finalizing
which nodes and node fields to use for plan computation.

Agreed, it's better to do this via the node_attr infrastructure. I've done
this in the attached before I saw your experiment code, so it may be worth
comparing the approaches.

Generally, I tried to stay closer to the idea of "only jumble what EXPLAIN
(COSTS OFF) would show", vs jumbling most plan fields by default.

That does mean we have a lot of extra "node_attr(query_jumble_ignore)" tags
in the plan node structs. We could potentially invent a new way of only
jumbling what's marked vs the current jumbling all by default + ignoring
some fields, but not sure if that's worth it.

3. We may want to combine all the jumbling code into

a single jumble.c since the query and plan jumble will
share a lot of the same code, i.e. JumbleState.
_JumbleNode, etc.

Agreed, that's what I ended up doing in v2. I think we can state that plan
jumbling is a superset of query jumbling, so it seems best to not have two
copies of very similar jumbling conds/funcs.

I retained the "query" prefix for now to not generate a big diff, but we
should maybe consider dropping that in both the source file names and the
node attributes?

Thanks,
Lukas

--
Lukas Fittl

Attachments:

v2-0003-Optionally-record-a-plan_id-in-PlannedStmt-to-ide.patch (+635 -186)
v2-0001-Allow-using-jumbling-logic-outside-of-query-jumbl.patch (+41 -25)
v2-0002-Cumulative-statistics-Add-pgstat_drop_entries_of_.patch (+69 -1)
v2-0004-Add-pg_stat_plans-contrib-extension.patch (+1828 -1)
#9Michael Paquier
michael@paquier.xyz
In reply to: Lukas Fittl (#8)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Fri, Jan 24, 2025 at 01:59:00AM -0800, Lukas Fittl wrote:

Overall, I also do wonder if it wouldn't be better to have a callback
mechanism in the shared memory stats, so stats plugins can do extra work
when an entry gets dropped (like freeing the DSA memory for the plan text),
vs having to add all this extra logic to do it.

Not sure about this part yet. I have looked at 0002 to begin with
something and it is really useful on its own. Stats kinds calling
this routine don't need to worry about the internals of dropping local
references or doing a seqscan on the shared hash table. However, what
you have sent lacks in flexibility to me, and the duplication with
pgstat_drop_all_entries is annoying. This had better be merged in a
single routine.

Attached is an updated version that adds an optional "do_drop"
callback in the function that does the seqscan on the dshash, to
decide if an entry should be gone or not. This follows the same model
as the "reset" part, where stats kind can push the matching function
they want to work on the individual entries. We could add a
pgstat_drop_entries_of_kind(), but I'm not feeling that this is
strongly necessary with the basic interface in place.

The changes in the module injection_points were not good. The SQL
function was named "reset" but that's a drop operation.

What do you think?
--
Michael

Attachments:

v3-0002-Add-pgstat_drop_matching_entries.patchtext/x-diff; charset=us-asciiDownload+74-2
#10Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#9)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Mon, Jan 27, 2025 at 12:53:36PM +0900, Michael Paquier wrote:

Not sure about this part yet. I have looked at 0002 to begin with
something and it is really useful on its own. Stats kinds calling
this routine don't need to worry about the internals of dropping local
references or doing a seqscan on the shared hash table. However, what
you have sent lacks in flexibility to me, and the duplication with
pgstat_drop_all_entries is annoying. This had better be merged in a
single routine.

After thinking more about this one, I still want this toy and, hearing
nothing, I have applied it, with a second commit for the addition in
injection_points to avoid multiple bullet points in a single commit.

I have noticed post-commit that I made a mistake in the credits of
a632cd354d35 and ce5c620fb625 for your family name. Really sorry
about that! This mistake is on me.

What do you think?

Attached is a rebased version of the three remaining patches. While
looking at this stuff, I have noticed an extra cleanup that would be
good to have, as a separate change: we could reformat the plan header
comments a bit so that they do not require a rewrite when adding
node_attr to them, like d575051b9af9.

Sami's patch set posted at [1] has the same problem, making the
proposals harder to parse and review, and the devil is in the details
with these pg_node_attr() properties attached to the structures. That
would be something to do on top of the proposed patch sets. Would any
of you be interested in that?

[1]: /messages/by-id/CAA5RZ0sUPPOpkRZD=Za83op2ngcPC7dp249vcHA-X5YS7p3n8Q@mail.gmail.com
--
Michael

Attachments:

v3-0001-Allow-using-jumbling-logic-outside-of-query-jumbl.patch (+41/-25)
v3-0002-Optionally-record-a-plan_id-in-PlannedStmt-to-ide.patch (+635/-187)
v3-0003-Add-pg_stat_plans-contrib-extension.patch (+1834/-1)
#11Lukas Fittl
lukas@fittl.com
In reply to: Michael Paquier (#10)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, Jan 30, 2025 at 8:37 PM Michael Paquier <michael@paquier.xyz> wrote:

After thinking more about this one, I still want this toy and, hearing
nothing, I have applied it, with a second commit for the addition in
injection_points to avoid multiple bullet points in a single commit.

Thanks for committing! I had intended to review/test your patch, but the
earlier parts of the week got way too busy.

I think the API with do_drop makes sense, and whilst there is likely some
extra overhead in calling the function vs having an inline check for the
kind, it seems unlikely this would be used in a performance-critical
context, and the flexibility seems useful.

I have noticed post-commit that I made a mistake in the credits
of a632cd354d35 and ce5c620fb625 for your family name. Really sorry
about that! This mistake is on me.

No worries regarding the name, happens to me all the time :)

What do you think?

Attached is a rebased version of the three remaining patches. While
looking at this stuff, I have noticed an extra cleanup that would be
good to have, as a separate change: we could reformat the plan header
comments a bit so that they do not require a rewrite when adding
node_attr to them, like d575051b9af9.

Yeah, I think that'd be helpful to move the comments before the fields - it
definitely gets hard to read.

Sami's patch set posted at [1] has the same problem, making the
proposals harder to parse and review, and the devil is in the details
with these pg_node_attr() properties attached to the structures. That
would be something to do on top of the proposed patch sets. Would any
of you be interested in that?

I'd be happy to tackle that - were you thinking to simply move any comments
before the field, in each case where we're adding an annotation?

Separately I've been thinking how we could best have a discussion/review on
whether the jumbling of specific plan struct fields is correct. I was
thinking maybe a quick wiki page could be helpful, noting why to jumble/not
jumble certain fields?

Thanks,
Lukas

--
Lukas Fittl

#12Michael Paquier
michael@paquier.xyz
In reply to: Lukas Fittl (#11)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, Jan 30, 2025 at 09:19:49PM -0800, Lukas Fittl wrote:

I'd be happy to tackle that - were you thinking to simply move any comments
before the field, in each case where we're adding an annotation?

Yes.

Separately I've been thinking how we could best have a discussion/review on
whether the jumbling of specific plan struct fields is correct. I was
thinking maybe a quick wiki page could be helpful, noting why to jumble/not
jumble certain fields?

Makes sense. This is a complicated topic.
--
Michael

#13Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#12)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

Separately I've been thinking how we could best have a discussion/review on
whether the jumbling of specific plan struct fields is correct. I was
thinking maybe a quick wiki page could be helpful, noting why to jumble/not
jumble certain fields?

Makes sense. This is a complicated topic.

+1 for the Wiki page

I started looking at the set of patches and started with v3-0001.
For that one, I think we need to refactor a bit more for
maintainability/readability.

queryjumblefuncs.c now has a dual purpose: it contains both the generic node
jumbling code and the query-specific jumbling code. That seems wrong
from a readability/maintainability perspective.

Here are my high-level thoughts on this:
1. rename queryjumblefuncs.c to jumblefuncs.c
2. move the query jumbling related code to parser/analyze.c,
since query jumbling occurs there during parsing.
3. Rewrite the comments in the new jumblefuncs.c to
make it clear the intention of this infrastructure; that
it is used to jumble nodes for query or plan trees.

I can work on this if you agree.

Regards,

Sami

#14Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#13)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Tue, Feb 04, 2025 at 05:14:48PM -0600, Sami Imseih wrote:

Here are my high-level thoughts on this:
1. rename queryjumblefuncs.c to jumblefuncs.c

If these APIs are used for something other than the Query structure, yes,
the renaming makes sense.

2. move the query jumbling related code to parser/analyze.c,
since query jumbling occurs there during parsing.

Not sure about this one. It depends on how much is changed. As long
as everything related to the nodes stays in src/backend/nodes/,
perhaps that's OK.

3. Rewrite the comments in the new jumblefuncs.c to
make it clear the intention of this infrastructure; that
it is used to jumble nodes for query or plan trees.

Seems to me that this could be done before 2, as well.

I can work on this if you agree.

I'd welcome an extra patch to rework a bit the format of the comments
for the Plan nodes, to ease the addition of pg_node_attr(), making any
proposed patches more readable.
--
Michael

#15Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#14)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

I can work on this if you agree.

I'd welcome an extra patch to rework a bit the format of the comments
for the Plan nodes, to ease the addition of pg_node_attr(), making any
proposed patches more readable.

I'll take care of this also.

Regards,

Sami

#16Lukas Fittl
lukas@fittl.com
In reply to: Sami Imseih (#15)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

Looks like some emails were sent before I could send my draft email, but
hopefully this should make the follow up work easier :)

Attached a v4 patch set with a few minor changes to plan ID jumbling:

* Range table jumbling is now done in a separate JumbleRangeTable function
after setrefs.c walked the tree - this way we avoid having custom logic for
RT Indexes in the node jumbling, and keeping a reference to PlannerGlobal
in the jumble struct
* Moved the JumbleNode call to the bottom of the set_plan_references
function for clarity - previously it was before descending into inner/outer
plan, but after some other recursive calls to set_plan_references, which
didn't really make sense
* Fixed a bug with JUMBLE_ARRAY incorrectly taking the reference of the
array (which caused planid to change incorrectly between runs)
* Added JUMBLE_BITMAPSET

Further, I've significantly reduced the number of fields ignored for plan
jumbling:

Basically the approach taken in this version is that only things that would
negatively affect the planid (i.e. make it unique when it shouldn't be) are
ignored, vs ignoring duplicate fields and fields that are only used by the
executor (which is what v1-v3 did). I'm not 100% sure that's the right
approach (but it does keep the diff a good amount smaller); I think the
tradeoff here is basically jumbling performance vs maintenance overhead
when fields are added/changed.

This does not yet move field-specific comments to their own line in nodes
where we're adding node attributes, I'll leave that for Sami to work on.

On Tue, Feb 4, 2025 at 3:15 PM Sami Imseih <samimseih@gmail.com> wrote:

Separately I've been thinking how we could best have a discussion/review on
whether the jumbling of specific plan struct fields is correct. I was
thinking maybe a quick wiki page could be helpful, noting why to jumble/not
jumble certain fields?

Makes sense. This is a complicated topic.

+1 for the Wiki page

Started here: https://wiki.postgresql.org/wiki/Plan_ID_Jumbling

Thanks,
Lukas

--
Lukas Fittl

Attachments:

v4-0002-Optionally-record-a-plan_id-in-PlannedStmt-to-ide.patch (+529/-107)
v4-0003-Add-pg_stat_plans-contrib-extension.patch (+1834/-1)
v4-0001-Allow-using-jumbling-logic-outside-of-query-jumbl.patch (+41/-25)
#17Lukas Fittl
lukas@fittl.com
In reply to: Andrei Lepikhov (#7)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

Hi Andrei,

On Fri, Jan 24, 2025 at 1:23 AM Andrei Lepikhov <lepihov@gmail.com> wrote:

I may not be close to the task monitoring area, but I utilise queryId
and other tools to differ plan nodes inside extensions. Initially, like
queryId serves as a class identifier for queries, plan_id identifies a
class of nodes, not a single node. In the implementation provided here,
nodes with the same hash can represent different subtrees. For example,
JOIN(A, JOIN(B,C)) and JOIN(JOIN(B,C),A) may have the same ID.

Moreover, I wonder if this version of plan_id reacts to the join level
change. It appears that only a change of the join clause alters the
plan_id hash value, which means you would end up with a single hash for
very different plan nodes. Is that acceptable? To address this, we
should consider the hashes of the left and right subtrees and the hashes
of each subplan (especially in the case of Append).

I looked back at this again just to confirm we're not missing anything:

I don't think any of the posted patch versions (including the just shared
v4) have a problem with distinguishing two plans that are very similar but
only differ in JOIN order. Since we descend into the inner/outer plans via
the setrefs.c treewalk, the placement of JOIN nodes vs other nodes should
cause a different plan jumble (and we include both the node tag for the
join/scan nodes, as well as the RT index the scans point to in the jumble).

Do you have a reproducer that shows these two generate the same plan ID?

Thanks,
Lukas

--
Lukas Fittl

#18Andrei Lepikhov
lepihov@gmail.com
In reply to: Lukas Fittl (#17)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On 2/5/25 09:16, Lukas Fittl wrote:

Hi Andrei,

On Fri, Jan 24, 2025 at 1:23 AM Andrei Lepikhov <lepihov@gmail.com> wrote:

I may not be close to the task monitoring area, but I utilise queryId
and other tools to differ plan nodes inside extensions. Initially, like
queryId serves as a class identifier for queries, plan_id identifies a
class of nodes, not a single node. In the implementation provided here,
nodes with the same hash can represent different subtrees. For example,
JOIN(A, JOIN(B,C)) and JOIN(JOIN(B,C),A) may have the same ID.

Moreover, I wonder if this version of plan_id reacts to the join level
change. It appears that only a change of the join clause alters the
plan_id hash value, which means you would end up with a single hash for
very different plan nodes. Is that acceptable? To address this, we
should consider the hashes of the left and right subtrees and the hashes
of each subplan (especially in the case of Append).

I looked back at this again just to confirm we're not missing anything:

I don't think any of the posted patch versions (including the just
shared v4) have a problem with distinguishing two plans that are very
similar but only differ in JOIN order. Since we descend into the inner/
outer plans via the setrefs.c treewalk, the placement of JOIN nodes vs
other nodes should cause a different plan jumble (and we include both
the node tag for the join/scan nodes, as well as the RT index the scans
point to in the jumble).

Maybe. I haven't dived into that stuff deeply yet. It is not difficult to
check.
The main point was that different extensions want different plan_ids.
For example, planner extensions want to guarantee the distinctness and
sort of stability of this field inside a query plan. Does the hash value
guarantee that?
We have discussed how queryId should be generated more than once. That's
why I think the plan_id generation logic should be implemented inside an
extension, not in the core.

--
regards, Andrei Lepikhov

#19Sami Imseih
samimseih@gmail.com
In reply to: Lukas Fittl (#16)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

This does not yet move field-specific comments to their own line in nodes where we're adding node attributes, I'll leave that for Sami to work on.

Hi,

Attached is a new set of patches to fix the long comments
in plannodes.h and to refactor queryjumblefuncs.c.

v5-0001
-----------

This fixes the long comments in plannodes.h to make it easier to add the
attribute annotation. It made the most sense to make this the first patch
in the set.

v5-0002
-----------

Here are my high-level thoughts on this:
1. rename queryjumblefuncs.c to jumblefuncs.c

If these APIs are used for something other than the Query structure, yes,
the renaming makes sense.

Done. Also rewrote the header comment in jumblefuncs.c to describe
a more generic node jumbling mechanism that this file now offers.

2. move the query jumbling related code to parser/analyze.c,
since query jumbling occurs there during parsing.

Not sure about this one. It depends on how much is changed. As long
as everything related to the nodes stays in src/backend/nodes/,
perhaps that's OK.

Yes, after getting my hands on this, I agree with you. It made more sense
to keep all the jumbling work in jumblefuncs.c

v5-0003 and v5-0004 introduce the planId in core and pg_stat_plans. These
needed rebasing only; but I have not yet looked at this thoroughly.

We should aim to get 0001 and 0002 committed next.

Regards,

Sami

Attachments:

v5-0002-Allow-using-jumbling-logic-outside-of-query-jumbl.patch (+98/-78)
v5-0001-reformat-comments-in-plannode.h.patch (+279/-148)
v5-0004-Add-pg_stat_plans-contrib-extension.patch (+1834/-1)
v5-0003-Optionally-record-a-plan_id-in-PlannedStmt-to-ide.patch (+396/-46)
#20Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#19)
Re: [PATCH] Optionally record Plan IDs to track plan changes for a query

On Thu, Feb 06, 2025 at 07:52:53PM -0600, Sami Imseih wrote:

This fixes the long comments in plannodes.h to make it easier to add the
attribute annotation. It made the most sense to make this the first patch
in the set.

A commit that happened last Friday made also this to have conflict.

Done. Also rewrote the header comment in jumblefuncs.c to describe
a more generic node jumbling mechanism that this file now offers.

Yes, after getting my hands on this, I agree with you. It made more sense
to keep all the jumbling work in jumblefuncs.c

-static void AppendJumble(JumbleState *jstate,
- const unsigned char *item, Size size

I don't understand why there is a need for publishing AppendJumble()
while it remains static in jumblefuncs.c. This is not needed in 0003
or 0004, either.

Should we use more generic names for the existing custom_query_jumble,
no_query_jumble, query_jumble_ignore and query_jumble_location? Last
time I talked about that with Peter E, "jumble" felt too generic,
so perhaps we're looking for a completely new term? This impacts as
well the naming of the existing queryjumblefuncs.c. The simplest term
that may be possible here is "hash", actually, because that's what we
are doing with all these node structures? That's also a very generic
term. The concept of location does not apply to plans, based on the
current proposal, so perhaps we should talk about "query normalization
location"?

Point is that query_jumble_ignore is used in the planner nodes, which
feels inconsistent, so perhaps we could rename query_jumble_ignore and
no_query_jumble to "hash_ignore" and/or "no_hash", or something like
that? This may point towards the need of a split, not sure, still the
picture is incomplete.

v5-0003 and v5-0004 introduce the planId in core and pg_stat_plans. These
needed rebasing only; but I have not yet looked at this thoroughly.

We should aim to get 0001 and 0002 committed next.

Yeah. I don't see any reason why 0001 should not happen now, as it
makes the whole thing easier while making the header styles a bit more
consistent. Perhaps it would also help if somebody forks the code and
adds some pg_node_attr() properties?

v5-0003 and v5-0004, not sure yet. The intricacies of the planner make
it hard to set down a strict definition of what a hash means; we should
work towards studying that more first. I don't see this happening until
the next release freeze.
--
Michael

#21Sami Imseih
samimseih@gmail.com
In reply to: Michael Paquier (#20)
#22Sami Imseih
samimseih@gmail.com
In reply to: Sami Imseih (#21)
#23Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#21)
#24Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#22)
#25Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#21)
#26Julien Rouhaud
rjuju123@gmail.com
In reply to: Michael Paquier (#25)
#27Michael Paquier
michael@paquier.xyz
In reply to: Julien Rouhaud (#26)
#28Julien Rouhaud
rjuju123@gmail.com
In reply to: Michael Paquier (#27)
#29Greg Sabino Mullane
greg@turnstep.com
In reply to: Michael Paquier (#25)
#30Sami Imseih
samimseih@gmail.com
In reply to: Julien Rouhaud (#28)
#31Julien Rouhaud
rjuju123@gmail.com
In reply to: Sami Imseih (#30)
#32Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Julien Rouhaud (#31)
#33Julien Rouhaud
rjuju123@gmail.com
In reply to: Alvaro Herrera (#32)
#34Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Julien Rouhaud (#33)
#35Sami Imseih
samimseih@gmail.com
In reply to: Greg Sabino Mullane (#29)
#36Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Sami Imseih (#35)
#37Sami Imseih
samimseih@gmail.com
In reply to: Alvaro Herrera (#36)
#38Michael Paquier
michael@paquier.xyz
In reply to: Sami Imseih (#37)
#39Andrei Lepikhov
lepihov@gmail.com
In reply to: Michael Paquier (#38)
#40Ilia Evdokimov
ilya.evdokimov@tantorlabs.com
In reply to: Alvaro Herrera (#34)
#41Андрей Казачков
andrey.kazachkov@tantorlabs.ru
In reply to: Sami Imseih (#19)
#42Андрей Казачков
andrey.kazachkov@tantorlabs.ru
In reply to: Sami Imseih (#19)
#43Michael Paquier
michael@paquier.xyz
In reply to: Андрей Казачков (#42)