Uninterruptible long planning of a query with too many WHERE clauses

Started by Alexander Kuzmenkov · over 7 years ago · 4 messages · pgsql-hackers
#1 Alexander Kuzmenkov
a.kuzmenkov@postgrespro.ru

Hi hackers,

Recently one of our customers encountered a situation when the planning
of a particular query takes too long (several minutes) and can't be
interrupted by pg_terminate_backend(). The query and schema are attached
(this is generated by Zabbix). The reason for the slowness is that the
run time of choose_bitmap_and() is quadratic in the number of WHERE
clauses. It assigns unique ids to the clauses by putting them in a list
and then doing a linear search with equal() to determine the position of
each new clause.

Our first attempt to fix this was putting these clauses into an rbtree
or dynahash. This improves the performance, but is not entirely correct.
We don't have a comparison or hash function for nodes, so we have to
hash or compare their string representations. But equality of the
nodeToString() output is not equivalent to equal(), because the string
includes fields that equal() ignores, such as token locations. So we
can't really compare the string value instead of using equal().

I settled on a simpler solution: limiting the number of clauses we try
to uniquely identify. If there are too many, skip the smarter logic that
requires comparing paths by clauses, and just return the cheapest input
path from choose_bitmap_and(). The patch is attached.

I'd like to hear your thoughts on this. This is a valid query that
freezes a backend with 100% CPU usage and no way to interrupt it, and I
think we should fail more gracefully.

--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

- schema_dump.sql (application/sql)
- select.sql.bz2 (application/x-bzip)
- choose-bitmap-and.patch (text/x-patch, +52 −1)
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Kuzmenkov (#1)
Re: Uninterruptible long planning of a query with too many WHERE clauses

Alexander Kuzmenkov <a.kuzmenkov@postgrespro.ru> writes:

> Recently one of our customers encountered a situation when the planning
> of a particular query takes too long (several minutes) and can't be
> interrupted by pg_terminate_backend(). The query and schema are attached
> (this is generated by Zabbix).

Ugh. I hope they aren't expecting actually *good* performance on this
sort of query. Still, O(N^2) behavior isn't nice.

When I first saw your post, I thought maybe the problem was an
unreasonable number of paths, but actually there are only two
indexpaths being considered in choose_bitmap_and(). The problem
is that one of them has 80000 quals attached to it, and the code
that's sorting through the quals is O(N^2).

> Our first attempt to fix this was putting these clauses into an rbtree
> or dynahash. This improves the performance, but is not entirely correct.

... depends on how you do it ...

> I settled on a simpler solution: limiting the number of clauses we try
> to uniquely identify. If there are too many, skip the smarter logic that
> requires comparing paths by clauses, and just return the cheapest input
> path from choose_bitmap_and(). The patch is attached.

I think you have the right basic idea, but we don't have to completely
lobotomize the bitmap-and search logic in order to cope with this.
This code is only trying to figure out which paths are potentially
redundant, so for a path with too many quals, we can just deem it
not-redundant, as attached.

A different line of thought is that using equal() to compare quals
here is likely overkill: plain old pointer equality ought to be enough,
since what we are looking for is different indexpaths derived from the
same members of the relation's baserestrictinfo list. In itself, such
a change would not fix the O(N^2) problem; it'd just cut a constant
factor off the big-O multiplier. (A big constant factor, perhaps, but
still just a constant factor.) However, once we go to pointer equality
as the definition, we could treat the pointer values as scalars and then
use hashing or whatever on them. But this would take a good deal of work,
and I think it might be a net loss for typical not-very-large numbers
of quals. Also, I tried just quickly changing the equal() call to a
pointer comparison, and it didn't seem to make much difference given
that I'd already done the attached. So my feeling is that possibly
that'd be worth doing sometime in the future, but this particular
example isn't offering a compelling reason to do it.

Another thought is that maybe we need a CHECK_FOR_INTERRUPTS call
somewhere in here; but I'm not sure where would be a good place.
I'm not excited about sticking one into classify_index_clause_usage,
but adding one up at the per-path loops would not help for this case.

regards, tom lane

Attachments:

- choose-bitmap-and-2.patch (text/x-diff; charset=us-ascii, +32 −3)
#3 Alexander Kuzmenkov
a.kuzmenkov@postgrespro.ru
In reply to: Tom Lane (#2)
Re: Uninterruptible long planning of a query with too many WHERE clauses

On 11/11/18 at 07:38, Tom Lane wrote:

> I think you have the right basic idea, but we don't have to completely
> lobotomize the bitmap-and search logic in order to cope with this.
> This code is only trying to figure out which paths are potentially
> redundant, so for a path with too many quals, we can just deem it
> not-redundant, as attached.

Thanks for the patch, looks good to me.

> A different line of thought is that using equal() to compare quals
> here is likely overkill: plain old pointer equality ought to be enough,
> since what we are looking for is different indexpaths derived from the
> same members of the relation's baserestrictinfo list.

I didn't realize that we could just hash the pointers here; this
simplifies things. But indeed it makes sense to just use simpler logic
for such extreme queries, because we won't have a good plan anyway.

> Another thought is that maybe we need a CHECK_FOR_INTERRUPTS call
> somewhere in here; but I'm not sure where would be a good place.
> I'm not excited about sticking one into classify_index_clause_usage,
> but adding one up at the per-path loops would not help for this case.

We added some interrupt checks as a quick fix for the client. In the
long run, I don't think we have to add them, because normally planning
a query is relatively fast, and unexpected slowdowns like this one can
still happen in places where we don't process interrupts.

--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Kuzmenkov (#3)
Re: Uninterruptible long planning of a query with too many WHERE clauses

Alexander Kuzmenkov <a.kuzmenkov@postgrespro.ru> writes:

> On 11/11/18 at 07:38, Tom Lane wrote:
>
>> I think you have the right basic idea, but we don't have to completely
>> lobotomize the bitmap-and search logic in order to cope with this.
>> This code is only trying to figure out which paths are potentially
>> redundant, so for a path with too many quals, we can just deem it
>> not-redundant, as attached.
>
> Thanks for the patch, looks good to me.

Pushed, thanks for reviewing.

regards, tom lane