Max sane value for join_collapse_limit?

Started by Andreas Joseph Krogh · almost 4 years ago · 2 messages · general
#1 Andreas Joseph Krogh
andreas@visena.com

Hi, I have set join_collapse_limit = 12 in production, but I'm thinking about
raising it to 16.

On modern hardware, is there a “sane maximum” for this value?

I can easily spare 10 ms of extra planning per query on our workload; is 16
too high?
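As an aside on why a "sane maximum" exists at all: once the collapse limit covers n relations, the planner's exhaustive search considers a join-order space that grows factorially with n (the number of left-deep join orders alone is n!). The sketch below is purely illustrative arithmetic, not PostgreSQL internals; the planner prunes heavily, but the trend explains why going from 12 to 16 multiplies the worst-case search space by 13 × 14 × 15 × 16 = 43,680.

```python
from math import factorial

def left_deep_orders(n: int) -> int:
    """Count left-deep join orderings of n relations (n!).

    This is a lower bound on the raw search space; bushy plans
    make it larger still. Illustrative only.
    """
    return factorial(n)

for n in (8, 12, 16):
    print(n, left_deep_orders(n))
```

The practical upshot: each increment of the limit costs far more planning effort than the previous one, which is why measuring planning time on your own queries matters more than any universal ceiling.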

Thanks.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andreas@visena.com
www.visena.com <https://www.visena.com>

#2 Philip Semanchuk
philip@americanefficient.com
In reply to: Andreas Joseph Krogh (#1)
Re: Max sane value for join_collapse_limit?

On Jun 3, 2022, at 4:19 AM, Andreas Joseph Krogh <andreas@visena.com> wrote:

Hi, I have set join_collapse_limit = 12 in production, but I'm thinking about raising it to 16.
On modern HW is there a “sane maximum” for this value?
I can easily spare 10ms for extra planning per query on our workload, is 16 too high?

I set ours to 24 (from_collapse_limit=24 and geqo_threshold=25). Most of our queries that involve 10+ relations have a slow execution time (20-30 minutes or more), so reducing planning time isn’t a major concern for us. If the planner takes an extra 20-30 seconds to find a plan that reduces execution time by 5%, we still come out ahead.
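For concreteness, a sketch of trying these settings per session before committing them to postgresql.conf (values taken from this thread; the query itself is a placeholder you would substitute):

```sql
-- SET affects only the current session, so planning time can be
-- compared before and after without touching the server config.
SET join_collapse_limit = 24;
SET from_collapse_limit = 24;
SET geqo_threshold = 25;   -- keep GEQO from kicking in below the collapse limits

-- Plain EXPLAIN with SUMMARY ON reports "Planning Time"
-- without executing the query.
EXPLAIN (SUMMARY ON) SELECT ...;
```

Keeping geqo_threshold above the collapse limits, as Philip does, ensures the exhaustive planner (rather than the genetic optimizer) handles everything the collapse limits allow.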

That said, in our environment the planner can make pretty bad choices once the number of relations gets into the mid teens, because we have some difficult-to-estimate join conditions, so we write our canned queries with this in mind, breaking them into two parts if necessary to avoid throwing too much at the planner at once. IOW, we generally don’t come anywhere near 24 relations in a query. Our very high join_collapse_limit might still come into play if a user writes a very complicated ad hoc query.
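One way to "break a query into two parts" in PostgreSQL 12+ is a MATERIALIZED CTE, which forces the planner to plan and execute that part in isolation so it never sees all the relations as one join problem. A hypothetical sketch (table and column names invented for illustration):

```sql
-- The MATERIALIZED keyword fences off the CTE: it is planned and
-- evaluated on its own, then joined as an opaque result set.
WITH part1 AS MATERIALIZED (
    SELECT o.customer_id, sum(o.amount) AS total
    FROM   orders o
    JOIN   order_lines l ON l.order_id = o.id
    GROUP  BY o.customer_id
)
SELECT c.name, p.total
FROM   part1 p
JOIN   customers c ON c.id = p.customer_id;
```

The trade-off is that the fence also hides information (row counts, predicates that could be pushed down) from the planner, so it helps mainly when the planner's estimates for the combined query are badly off, as with the difficult-to-estimate join conditions described above.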

So (IMHO) as is often the case, the answer is “it depends”. :-)

Cheers
Philip