Optimizing "top queries" ...
hello everybody ...
i was thinking about introducing a new executor node to optimize the
following scenario a little:
test=# explain select * from t_lock order by id limit 10;
                               QUERY PLAN
--------------------------------------------------------------------------
 Limit  (cost=14821.84..14821.87 rows=10 width=4)
   ->  Sort  (cost=14821.84..15149.52 rows=131072 width=4)
         Sort Key: id
         ->  Seq Scan on t_lock  (cost=0.00..1888.72 rows=131072 width=4)
(4 rows)
test=# \d t_lock
Table "public.t_lock"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
in fact, the sort step is not necessary here as we could add a node
which buffers the highest 10 records and replaces them whenever a
higher value is returned from the underlying node (in this case a seq
scan).
this query is quite a common scenario when it comes to analysis
related work.
saving the sort step is an especially good idea when the table is
very large.
we could use the new node when the desired subset of data is expected
to fit into work_mem.
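as a rough illustration (a python sketch of the idea, not the proposed C executor code; the function name is made up), the node's buffering behaviour for ORDER BY id LIMIT n could look like this:

```python
import heapq

def top_n_smallest(rows, n):
    """keep the n smallest rows seen so far (ORDER BY id LIMIT n);
    the root of the negated heap is the worst row currently kept."""
    heap = []
    for row in rows:
        if len(heap) < n:
            heapq.heappush(heap, -row)     # buffer not full yet
        elif row < -heap[0]:
            heapq.heapreplace(heap, -row)  # better row arrived: evict worst
    return sorted(-x for x in heap)
```

memory use is bounded by n rows regardless of table size, which is exactly why the planner could pick this node whenever the desired subset fits into work_mem.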
how about it?
best regards,
hans-jürgen schönig
--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Hi,
Hans-Juergen Schoenig wrote:
in fact, the sort step is not necessary here as we could add a node
which buffers the highest 10 records and replaces them whenever a
higher value is returned from the underlying node (in this case seq scan).
this query is quite a common scenario when it comes to analysis
related work.
saving the sort step is an especially good idea when the table is very
large.
That sounds very much like what's known as 'partial sort', which has
been proposed by Oleg and Theodor. AFAIK they had a trivial patch
sometime around version 7.1, without integration into the planner and
optimizer. They were talking about libpsort, but I can't find that
currently. See archives [1] and [2].
Regards
Markus
[1]: http://archives.postgresql.org/pgsql-sql/2002-01/msg00316.php
[2]: http://archives.postgresql.org/pgsql-hackers/2006-09/msg01532.php
"Markus Schiltknecht" <markus@bluegap.ch> writes:
Hi,
Hans-Juergen Schoenig wrote:
in fact, the sort step is not necessary here as we could add a node which
buffers the highest 10 records and replaces them whenever a higher value is
returned from the underlying node (in this case seq scan).
this query is quite a common scenario when it comes to analysis related
work.
saving the sort step is an especially good idea when the table is very large.
That sounds very much like what's known as 'partial sort', which has been
proposed by Oleg and Theodor. AFAIK they had a trivial patch sometime around
version 7.1, without integration into the planner and optimizer. They were
talking about libpsort, but I can't find that currently. See archives [1] and
[2].
I actually implemented it again a few months ago during the feature freeze. I
had a few questions but since it was the middle of the feature freeze I expect
people had other things on their minds.
It is an important form of query since it crops up any time you have a UI
(read web page) with a paged result set. Currently postgres has to gather up
all the records in the result set and sort them which makes it compare poorly
against other databases popular with web site authors...
The open question in my patch was how to communicate the limit down to
the sort node. I had implemented it by having ExecLimit peek into the SortNode
and set a field there.
This alternative of making a whole new plan node may have more promise though.
It would make it easier to come up with reasonable cost estimates.
One thing to keep in mind though is that I also wanted to cover the case of
Unique(Sort(...)) and Limit(Unique(Sort(...))) which can throw away duplicates
earlier. Do we want three different plan nodes? Are there other cases like
these that can benefit?
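A rough sketch of that Unique+Limit combination (Python for illustration only; the real thing would live in the executor, and the function name is made up): keep at most n distinct values, throwing away duplicates as they arrive rather than after a full sort:

```python
import heapq

def top_n_distinct(rows, n):
    """Limit(Unique(Sort(...))): keep the n smallest distinct values,
    discarding duplicates as soon as they are seen."""
    seen = set()   # distinct values currently buffered
    heap = []      # negated max-heap: root is the worst kept value
    for row in rows:
        if row in seen:
            continue                 # duplicate: throw it away early
        if len(heap) < n:
            seen.add(row)
            heapq.heappush(heap, -row)
        elif row < -heap[0]:
            evicted = -heapq.heapreplace(heap, -row)
            seen.discard(evicted)    # worst value falls out of the buffer
            seen.add(row)
    return sorted(-x for x in heap)
```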
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
i basically thought a node would make more sense as it gives some
more flexibility.
making the "replacement strategy" inside the node a bit more fancy
could actually open the door for further optimizations and for
other operations.
also, OFFSET would be quite easy as the buffer size needed is
perfectly defined by LIMIT + OFFSET.
taking work_mem into consideration we could safely fall back to the
old plan if too much data is fetched.
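in python terms (illustrative only; the function name and the row-width/work_mem figures are made up), the buffer sizing and the fallback could look like this:

```python
import bisect

def limit_offset_top_n(rows, limit, offset, row_width, work_mem_bytes):
    """buffer exactly limit + offset rows; fall back to a full sort
    when that buffer would not fit into work_mem."""
    n = limit + offset
    if n * row_width > work_mem_bytes:
        return sorted(rows)[offset:offset + limit]  # old plan: full sort
    buf = []                                        # kept sorted, length <= n
    for row in rows:
        if len(buf) < n or row < buf[-1]:
            bisect.insort(buf, row)
            del buf[n:]                             # drop rows beyond the buffer
    return buf[offset:offset + limit]
```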
can a node like that be of any further use for other operations as
well? i am especially thinking of some other stuff related to analytics.
best regards,
hans
On Dec 6, 2006, at 4:34 PM, Gregory Stark wrote:
I actually implemented it again a few months ago during the feature
freeze. I had a few questions but since it was the middle of the feature
freeze I expect people had other things on their minds.
It is an important form of query since it crops up any time you have a UI
(read web page) with a paged result set. Currently postgres has to gather up
all the records in the result set and sort them which makes it compare poorly
against other databases popular with web site authors...
The open question in my patch was how to communicate the limit down to
the sort node. I had implemented it by having ExecLimit peek into the SortNode
and set a field there.
This alternative of making a whole new plan node may have more promise though.
It would make it easier to come up with reasonable cost estimates.
One thing to keep in mind though is that I also wanted to cover the case of
Unique(Sort(...)) and Limit(Unique(Sort(...))) which can throw away duplicates
earlier. Do we want three different plan nodes? Are there other cases like
these that can benefit?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
"Hans-Juergen Schoenig" <postgres@cybertec.at> writes:
i basically thought a node would make more sense as it gives some more
flexibility. making the "replacement strategy" inside the node a bit more
fancy could actually open the door for further optimizations and for
other operations.
I'm not sure what you mean by this. The two optimizations I saw were keeping
the top-N and keeping only distinct elements. You can also have both
simultaneously.
also, OFFSET would be quite easy as the buffer size needed is perfectly
defined by LIMIT + OFFSET. taking work_mem into consideration we could
safely fall back to the old plan if too much data is fetched.
Certainly you have to keep the top-N where N = (limit + offset) or it won't
work correctly. That's what my patch did.
can a node like that be of any further use for other operations as well? i am
especially thinking of some other stuff related to analytics.
I think we'll need a whole slew of new nodes for implementing OLAP. Having the
top-N functionality inside tuplesort may indeed be useful for implementing
them.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
On Wed, Dec 06, 2006 at 03:34:02PM +0000, Gregory Stark wrote:
One thing to keep in mind though is that I also wanted to cover the case of
Unique(Sort(...)) and Limit(Unique(Sort(...))) which can throw away duplicates
earlier. Do we want three different plan nodes? Are there other cases like
these that can benefit?
I would think that we really don't want more than one type of sort
node. What I think would be good is making Unique and Limit options
on the Sort node: instead of peeking across nodes at execution time,
have the optimizer merge them.
For Unique, you arrange for the sort algorithm, when two tuples compare
equal, to throw one of them away.
For Limit, you can do it by having the tapes stop writing after the
given number of rows.
However, I think there's a difference between Unique across all
columns, or Unique as in DISTINCT ON(). The latter would be easier to
integrate.
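As a sketch of that merge (Python, illustrative only; the function name is made up): drop one of two equal rows inside the sort itself, where the key function is what distinguishes DISTINCT ON ()-style uniqueness from uniqueness across all columns:

```python
def sort_unique(rows, key=None):
    """Sort, and whenever two rows compare equal under `key`,
    throw one of them away (Unique folded into the Sort node).
    key=None      -> Unique across all columns
    key=some_func -> DISTINCT ON ()-style uniqueness"""
    key = key if key is not None else (lambda r: r)
    out = []
    for row in sorted(rows, key=key):
        if not out or key(out[-1]) != key(row):
            out.append(row)  # first row of a new group of equals
    return out
```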
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.