What about LIMIT in SELECT ?

Started by Oleg Bartunov about 27 years ago, 74 messages
#1Oleg Bartunov
oleg@sai.msu.su

Hi,

I took a look at mysql and was very impressed with the possibility
of limiting the number of rows returned from a select. This is a very useful
feature for Web applications where the user needs to browse the results of a
selection page by page. In my application I have to do the full
select every time the user presses the [Next] button and then show the
requested page using perl. This works more or less ok for several thousand
rows but is totally unusable for large selections. But now I'm about to work
with a big database and I don't know how I'll stay with postgres :-)
It just won't work if the customer has to wait several minutes just to browse
the next page. Mysql lacks some useful features postgres has
(subselects, transactions ..) but for most Web applications I need
just select :-) I don't know how LIMIT is implemented in Mysql, and
I know it's not in the SQL92 standard, but it makes Mysql very popular.

Is it difficult to implement this feature in postgres ?
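
For illustration (table and column names here are invented), the mysql
syntax in question looks like this; the first query returns the first
page of 20 rows, the second skips 40 rows and returns the third page:

select title, author from articles order by title limit 20;
select title, author from articles order by title limit 40, 20;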

Regards,

Oleg

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

#2Terry Mackintosh
terry@terrym.com
In reply to: Oleg Bartunov (#1)
Re: [HACKERS] What about LIMIT in SELECT ?

Hi, my 2 cents...

I agree completely, LIMIT would be VERY useful in web based apps, which
is all I run. It does not matter to me that it is not part of a formal
standard. The idea is so common that it is a de facto standard.

I would not expect it for this release, but could it get put on the TODO
list for next time? I am even willing to work at an apprentice level on
this with a more experienced person who knows this stuff.

A note on implementation:
I *used to* :) work with VFP on NT's :(
And the way VFP did LIMIT, it would only return the number of rows asked
for, BUT it still did the WHOLE search!
So on a larger table, which we had (property tax database for the county),
if someone put in too vague a query, it would try to collect ALL of the
rows as the initial result set, then give you the first x rows of that.

This did save on pushing mass amounts of data out to the browser, but it
would have been even better if it could have simply aborted the select
after having found x rows.

Also, it did not have the concept of an offset, so one could not select
100 rows starting 200 rows in, which would be REALLY useful for "paging"
through data. I do not know if mySQL or any other has such a concept
either, but it would be nice.
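
A sketch of what that could look like (hypothetical syntax, invented table
and column names): selecting 100 rows, starting 200 rows in, might be written

select * from taxroll order by parcel_id limit 200, 100;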

So a properly implemented "LIMIT" could:
1. Save pushing mass amounts of data across the web that no one wants
anyway.
2. Stop vague queries from bogging down the server.
(On very large tables this could be critical!)
3. Enable "paging" of data (easier than doing it now at the app. level).
4. Be a very nice feather in PostgreSQL's cap that could make it
even more attractive to those looking at all sorts of databases out there.

Have a great day.

Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#3Marc G. Fournier
scrappy@hub.org
In reply to: Terry Mackintosh (#2)
Re: [HACKERS] What about LIMIT in SELECT ?

What is wrong with the already implemented FETCH command?
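
A minimal sketch of that approach (table and cursor names invented); only
the fetched rows travel back to the client:

BEGIN;
DECLARE next_page CURSOR FOR SELECT * FROM big_table ORDER BY key;
FETCH 20 IN next_page;
FETCH 20 IN next_page;
CLOSE next_page;
COMMIT;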

Marc G. Fournier scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org ICQ#7615664

#4Jeff Hoffmann
jeff@remapcorp.com
In reply to: Marc G. Fournier (#3)
Re: [HACKERS] What about LIMIT in SELECT ?

Hi, my 2 cents...

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

jeff

#5Eric Lee Green
eric@linux-hw.com
In reply to: Jeff Hoffmann (#4)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses) allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs and etc.
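
Concretely, the kind of query being described (table and column names
invented) is something like

SELECT name FROM customers ORDER BY name LIMIT 25;

where, with a btree index on name, the engine can walk the index and stop
after the 25th entry instead of scanning and sorting the whole table.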

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful environment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

I wanted very badly to use PostgreSQL for a web project I'm working on,
but it just wouldn't do the job :-(.

--
Eric Lee Green eric@linux-hw.com http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
-- James Love (Consumer Project on Technology)

#6Marc G. Fournier
scrappy@hub.org
In reply to: Eric Lee Green (#5)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Eric Lee Green wrote:

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses) allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs and etc.

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

Ookay, I'm sorry, but you lost me here. I haven't gotten into using
CURSORs/FETCHs yet, since I haven't needed it...but can you give an example
of what you would want to do using a LIMIT? I may be missing something,
but what is the difference between using LIMIT to get X records, and
defining a cursor to FETCH X records?

Practical example of *at least* the LIMIT side would be good, so that we
can at least see a physical example of what LIMIT can do that
CURSORs/FETCH can't...

Marc G. Fournier scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org ICQ#7615664

#7Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Terry Mackintosh (#2)
Re: [HACKERS] What about LIMIT in SELECT ?

Hi, my 2 cents...

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

I would not expect it for this release, but could it get put on the TODO
list for next time? I am even willing to work at an apprentise level on
this with a more expeireanced person that knows this stuff.

I assume everyone has read the FAQ item:

How do I select only the first few rows of a query?

See the fetch manual page.

This only prevents all row results from being transferred to the client.
The entire query must be evaluated, even if you want just the first
few rows. Consider a query that has an ORDER BY. There is no way
to return any rows until the entire query is evaluated and sorted.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#8Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Eric Lee Green (#5)
Re: [HACKERS] What about LIMIT in SELECT ?

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

I wanted very badly to use PostgreSQL for a web project I'm working on,
but it just wouldn't do the job :-(.

See my other posting mentioning the FAQ item on this subject. If you
are going after only one table (no joins), and have no ORDER BY, we could
short-circuit the evaluation, but how many queries could use LIMIT in
that case? Zero, I think.

What we could do is _if_ there is only one table (no joins), and an index
exists that matches the ORDER BY, we could use the index to
short-circuit the query.

I have added this item to the TODO list:

* Allow LIMIT ability on single-table queries that have no ORDER BY, or
whose ORDER BY matches an existing index

This looks do-able, and a real win. Would this make web applications
happier? If there is an ORDER BY and no index, or a join, I can't
figure out how we would short-circuit the query.
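
A rough sketch of the case this would cover (invented names, and a LIMIT
syntax PostgreSQL does not have yet):

SELECT * FROM addresses ORDER BY zipcode LIMIT 50;

With a btree index on zipcode, the scan could stop after 50 index entries,
whereas a join, or an ORDER BY no index matches, still forces the whole
result to be built and sorted before the first row comes back.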

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#9Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Marc G. Fournier (#6)
Re: [HACKERS] What about LIMIT in SELECT ?

Ookay, I'm sorry, butyou lost me here. I haven't gotten into using
CURSORs/FETCHs yet, since I haven't need it...but can you give an example
of what you would want to do using a LIMIT? I may be missing something,
but wha is the different between using LIMIT to get X records, and
definiing a cursor to FETCH X records?

Practical example of *at least* the LIMIT side would be good, so that we
can at least see a physical example of what LIMIT can do that
CURSORs/FETCH can't...

My guess is that in a web application the transaction is started for
every new page, so you can't have transactions spanning SQL sessions.

LIMIT theoretically would allow you to start up where you left off.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#10Jeff Hoffmann
jeff@remapcorp.com
In reply to: Bruce Momjian (#9)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Eric Lee Green wrote:

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses) allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs and etc.

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

Ookay, I'm sorry, butyou lost me here. I haven't gotten into using
CURSORs/FETCHs yet, since I haven't need it...but can you give an example
of what you would want to do using a LIMIT? I may be missing something,
but wha is the different between using LIMIT to get X records, and
definiing a cursor to FETCH X records?

Practical example of *at least* the LIMIT side would be good, so that we
can at least see a physical example of what LIMIT can do that
CURSORs/FETCH can't...

fetch with cursors should work properly (i.e., you can short-circuit it by
just ending your transaction). my understanding of how this works is exactly
how you explained LIMIT to work. here's some empirical proof from one of my
sample databases:

the sample table i'm using has 156k records (names of people)
i'm using a PP180 with 128MB RAM and some old slow SCSI drives.

public_mn=> select count(*) from public_ramsey;
count
------
156566
(1 row)

i did the following query:
public_mn=> select * from public_ramsey where ownerlname ~ 'SMITH';

which returned 711 matches and took about 12 seconds.

i did the same thing with a cursor:

public_mn=> begin;
BEGIN
public_mn=> declare test cursor for select * from public_ramsey where
ownerlname ~ 'SMITH';
SELECT

the select was instantaneous.

public_mn=> fetch 20 in test;

returns 20 records almost instantaneously. each additional 20 took less
than a second, as well.

if this isn't what you're talking about, i don't understand what you're
saying.

jeff

#11Eric Lee Green
eric@linux-hw.com
In reply to: Marc G. Fournier (#6)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Marc G. Fournier wrote:

On Tue, 13 Oct 1998, Eric Lee Green wrote:

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're

Ookay, I'm sorry, butyou lost me here. I haven't gotten into using
CURSORs/FETCHs yet, since I haven't need it...but can you give an example
of what you would want to do using a LIMIT? I may be missing something,

Whoops! Sorry, I goofed in my post (typing faster than my brain :-).
What I *MEANT* to say was that this superiority of cursors was not
applicable in a web environment.

but wha is the different between using LIMIT to get X records, and
definiing a cursor to FETCH X records?

From a logical point of view, none. From an implementation point of
view, it is a matter of speed. Declaring a cursor four times, doing a
query four times, and fetching X records four times takes more time
than just doing a query with a LIMIT clause four times (assuming your
query results in four screenfuls of records).

Practical example of *at least* the LIMIT side would be good, so that we
can at least see a physical example of what LIMIT can do that
CURSORs/FETCH can't...

You can do everything with CURSORs/FETCH that you can do with LIMIT.
In a non-web environment, where you have stateful connections, a FETCH
is always going to be faster than a SELECT...LIMIT statement. (Well,
it would be if implemented correctly, but I'll leave that to others to
haggle over). However: In a CGI-type environment, cursors are a huge
performance drain because in the example above you end up doing this
huge query four times, with its results stored in the cursor four
times, and only a few values are ever fetched from the cursor before it
is destroyed by the end of the CGI script.

Whereas with the SELECT...LIMIT paradigm, the database engine does NOT
process the entire huge query, it quits processing once it reaches the
limit. (Well, at least MySQL does so, if you happen to be using an
"ORDER BY" supported by an index). Obviously doing 1/4th the work four times
is better than doing the whole tamale four times :-}.

--
Eric Lee Green eric@linux-hw.com http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
-- James Love (Consumer Project on Technology)

#12Marc G. Fournier
scrappy@hub.org
In reply to: Eric Lee Green (#11)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Eric Lee Green wrote:

Whoops! Sorry, I goofed in my post (typing faster than my brain :-).
What I *MEANT* to say was that this superiority of cursors was not
applicable in a web environment.

S'alright...now please back up your statement with the *why*...

but wha is the different between using LIMIT to get X records, and
definiing a cursor to FETCH X records?

From a logical point of view, none. From an implementation point of
view, it is a matter of speed. Declaring a cursor four times, doing a
query four times, and fetching X records four times takes more time
than just doing a query with a LIMIT clause four times (assuming your
query results in four screenfuls of records).

I'm going to be brain-dead here, since, as I've disclaimered
before, I've not used CURSORs/FETCHs as of yet...one person came back
already and stated that, for him, CURSOR/FETCH results were near
instantaneous with a 167k+ table...have you tested the two to ensure that,
in fact, one is/isn't faster than the other?

Marc G. Fournier scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org ICQ#7615664

#13Eric Lee Green
eric@linux-hw.com
In reply to: Bruce Momjian (#8)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Bruce Momjian wrote:

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes

What we could do is _if_ there is only one table(no joins), and an index
exists that matches the ORDER BY, we could use the index to
short-circuit the query.

This is exactly what MySQL does in this situation, except that it can use
the ORDER BY to do the short-circuiting even if there is a join involved,
if all of the elements of the ORDER BY belong to one table. Obviously if
I'm doing an "ORDER BY table1.foo, table2.bar" that isn't going to work!
But "select table1.fsname, table1.lname, table2.receivables from table1, table2
where table2.receivables > 0 and table1.custnum = table2.custnum order by
table1.lname, table1.fsname limit 50" can be short-circuited by fiddling
with the join order -- table1.lname and table1.fsname have to be the first
two things in the join order.

Whether this is feasible in PostgreSQL I have no earthly idea. This would
seem to conflict with the join optimizer.

happier? If there is an ORDER BY and no index, or a join, I can't
figure out how we would short-circuit the query.

If there is an ORDER BY and no index you can't short-circuit the query.
MySQL doesn't either. Under certain circumstances (such as above) you can
short-circuit a join, but it's unclear whether it'd be easy to add such
a capability to PostgreSQL given the current structure of the query
optimizer. (And I certainly am not in a position to tackle it; at the
moment MySQL is sufficing for my project, despite the fact that it is
quite limited compared to PostgreSQL, and I need to get my project finished
first.)

--
Eric Lee Green eric@linux-hw.com http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
-- James Love (Consumer Project on Technology)

#14Eric Lee Green
eric@linux-hw.com
In reply to: Marc G. Fournier (#12)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Marc G. Fournier wrote:

On Tue, 13 Oct 1998, Eric Lee Green wrote:

Whoops! Sorry, I goofed in my post (typing faster than my brain :-).
What I *MEANT* to say was that this superiority of cursors was not
applicable in a web environment.

S'alright...now please backup your statement with the *why*...

Okay. It is because CGI is a stateless environment. You cannot just keep a
cursor open and walk up and down it, which is the superiority of cursors
(it is always faster to walk up and down a pre-SELECT'ed list than it is
to perform additional SELECTs). You have to destroy it upon exiting the
CGI script (which presumably just fetched 25 items or so to display on
an HTML page -- think DejaNews).

Creating a cursor and destroying a cursor take time. Less time, in a
normal environment, than it would take to make multiple SELECT statements,
which is the superiority of cursors in a normal environment. But, like I
said, CGI isn't normal -- the CGI script exits at the end of displaying 25
items, at which point the cursor is destroyed, thus destroying any benefit
you could have gotten while adding additional overhead.

In addition there is the possibility of short-circuiting the SELECT if
there is a LIMIT clause and there is no ORDER BY clause or the ORDER BY
clause is walking down an index (the latter being a possibility only if
there is no 'join' involved or if the 'join' is simple enough that it can
be done without running afoul of the join optimizer). Cursors, by their
nature, require performing the entire tamale first.

From a logical point of view, none. From an implementation point of
view, it is a matter of speed. Declaring a cursor four times, doing a

already and stated that, for him, CURSOR/FETCH results were near
instantaneous with a 167k+ table...have you tested the two to ensure that,
in fact, one is/isn't faster then the other?

CURSOR/FETCH *SHOULD* be nearly instantaneous, because you're merely
fetching values from a pre-existing query result. As I said, in normal
(non-CGI) use, a cursor is FAR superior to a "LIMIT" clause.

But the question of whether declaring a cursor four times and destroying
it four times takes a sizable amount of time compared to a LIMIT
clause... it's really hard to test, unfortunately, due to the differing
natures of MySQL and PostgreSQL. MySQL starts up a connection very fast
while PostgreSQL takes a while (has anybody done work on the "herd of
servers" concept to tackle that?). It is hard, in a CGI environment, to
determine if the poor speed (in terms of number of hits the server can
take) is due to the slow connection startup time or due to the cursor
overhead. I could write a benchmark program that kept the connection open
and did just the cursor timings, but I'm not particularly motivated.

I think RMS has a point when he decries the fact that non-free software is
becoming more available for Linux (MySQL is definitely non-free)... i.e.,
that it takes away people's motivation to improve the free software. The
only good part there is that MySQL is hardly suitable for normal database
work -- it is very much optimized for web serving and other applications
of that sort where speed and CGI-friendliness are more important than
functionality.

--
Eric Lee Green eric@linux-hw.com http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
-- James Love (Consumer Project on Technology)

#15Marc Howard Zuckman
marc@fallon.classyad.com
In reply to: Eric Lee Green (#14)
[HACKERS] Alternative to LIMIT in SELECT ?

I can't speak to the relative efficiencies of the methods, but I do
perform queries that present data subsets to web browsers using postgresql
with the following method:

1) collect data input; do cgi query; write tuples to temporary file
2) html page index sent back to browser contains page specific
references to temporary file name and tuple range.
3) Subsequent data retrievals reference temporary file using sed and
tuple range
4) temporary file is destroyed 15min after last access time by a
background process.
This consumes disk space, but I assume it conserves memory compared to
a cursor/fetch sequence performed in a persistent db connection.

For a general purpose query, I'm not sure if there is any other
alternative to this method unless you wish to reperform the query
for each retrieved html page.

Marc Zuckman
marc@fallon.classyad.com

_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
_ Visit The Home and Condo MarketPlace _
_ http://www.ClassyAd.com _
_ _
_ FREE basic property listings/advertisements and searches. _
_ _
_ Try our premium, yet inexpensive services for a real _
_ selling or buying edge! _
_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_

#16Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Eric Lee Green (#11)
Re: [HACKERS] What about LIMIT in SELECT ?

Whereas with the SELECT...LIMIT paradigm, the database engine does NOT
process the entire huge query, it quits processing once it reaches the
limit. (Well, at least MySQL does so, if you happen to be using an
"ORDER BY" supported by an index). Obviously doing 1/4th the work four times
is better than doing the whole tamale four times :-}.

And no join, I assume.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#17Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Marc Howard Zuckman (#15)
Re: [HACKERS] Alternative to LIMIT in SELECT ?

This might be off-topic, but...

I've found ExecutorLimit() (in executor/execMain.c) is useful for me,
especially when issuing an ad hoc query via psql. I personally use
the function with a customized set command.

set query_limit to 'num';

limit the max number of results returned by the backend

show query_limit;

display the current query limit

reset query_limit;

disable the query limit (unlimited number of results allowed)
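
For example, a psql session using it might look like this (the table name
is invented):

set query_limit to '20';
select * from bigtable order by key;
show query_limit;
reset query_limit;

The select above returns at most 20 rows while the limit is set.
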
--
Tatsuo Ishii
t-ishii@sra.co.jp

#18Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Tatsuo Ishii (#17)
Re: [HACKERS] Alternative to LIMIT in SELECT ?

I've found ExecutorLimit() (in executor/execMain.c) is useful for me
especially when issuing an ad-hock query via psql. I personally use
the function with customized set command.

Looks interesting. So where are the patches? :)

- Tom

#19Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Thomas G. Lockhart (#18)
Re: [HACKERS] Alternative to LIMIT in SELECT ?

I've found ExecutorLimit() (in executor/execMain.c) is useful for me
especially when issuing an ad-hock query via psql. I personally use
the function with customized set command.

Looks interesting. So where are the patches? :)

I'll post patches within 24 hours :-)
--
Tatsuo Ishii
t-ishii@sra.co.jp

#20Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#19)
Re: [HACKERS] Alternative to LIMIT in SELECT ?

I've found ExecutorLimit() (in executor/execMain.c) is useful for me
especially when issuing an ad-hock query via psql. I personally use
the function with customized set command.

Looks interesting. So where are the patches? :)

I'll post pacthes within 24 hours:-)

Here it is.
--
Tatsuo Ishii
t-ishii@sra.co.jp
----------------------------------------------------------------
*** backend/commands/variable.c.orig	Fri Oct  9 09:56:51 1998
--- backend/commands/variable.c	Wed Oct 14 13:06:15 1998
***************
*** 18,23 ****
--- 18,27 ----
  #ifdef MULTIBYTE
  #include "mb/pg_wchar.h"
  #endif
+ #ifdef QUERY_LIMIT
+ #include "executor/executor.h"
+ #include "executor/execdefs.h"
+ #endif
  static bool show_date(void);
  static bool reset_date(void);
***************
*** 40,45 ****
--- 44,54 ----
  static bool show_ksqo(void);
  static bool reset_ksqo(void);
  static bool parse_ksqo(const char *);
+ #ifdef QUERY_LIMIT
+ static bool show_query_limit(void);
+ static bool reset_query_limit(void);
+ static bool parse_query_limit(const char *);
+ #endif
  extern Cost _cpu_page_wight_;
  extern Cost _cpu_index_page_wight_;
***************
*** 546,551 ****
--- 555,600 ----
  }	/* reset_timezone() */
  /*-----------------------------------------------------------------------*/
+ #ifdef QUERY_LIMIT
+ static bool
+ parse_query_limit(const char *value)
+ {
+   int32 limit;
+ 
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }
+   limit = pg_atoi(value, sizeof(int32), '\0');
+   if (limit <= -1) {
+     elog(ERROR, "Bad value for # of query limit (%s)", value);
+   }
+   ExecutorLimit(limit);
+   return(TRUE);
+ }
+ 
+ static bool
+ show_query_limit(void)
+ {
+   int limit;
+ 
+   limit = ExecutorGetLimit();
+   if (limit == ALL_TUPLES) {
+     elog(NOTICE, "No query limit is set");
+   } else {
+     elog(NOTICE, "query limit is %d",limit);
+   }
+   return(TRUE);
+ }
+ 
+ static bool
+ reset_query_limit(void)
+ {
+   ExecutorLimit(ALL_TUPLES);
+   return(TRUE);
+ }
+ #endif
+ /*-----------------------------------------------------------------------*/
  struct VariableParsers
  {
  	const char *name;
***************
*** 584,589 ****
--- 633,643 ----
  	{
  		"ksqo", parse_ksqo, show_ksqo, reset_ksqo
  	},
+ #ifdef QUERY_LIMIT
+ 	{
+ 		"query_limit", parse_query_limit, show_query_limit, reset_query_limit
+ 	},
+ #endif
  	{
  		NULL, NULL, NULL, NULL
  	}
*** backend/executor/execMain.c.orig	Thu Oct  1 11:03:58 1998
--- backend/executor/execMain.c	Wed Oct 14 11:24:06 1998
***************
*** 83,94 ****
  #undef ALL_TUPLES
  #define ALL_TUPLES queryLimit

- int ExecutorLimit(int limit);
-
int
ExecutorLimit(int limit)
{
return queryLimit = limit;
}

  #endif
--- 83,98 ----
  #undef ALL_TUPLES
  #define ALL_TUPLES queryLimit
  int
  ExecutorLimit(int limit)
  {
  	return queryLimit = limit;
+ }
+ 
+ int
+ ExecutorGetLimit()
+ {
+ 	return queryLimit;
  }
  #endif
*** include/executor/executor.h.orig	Fri Oct  9 10:02:07 1998
--- include/executor/executor.h	Wed Oct 14 11:24:07 1998
***************
*** 86,91 ****
--- 86,95 ----
  extern TupleTableSlot *ExecutorRun(QueryDesc *queryDesc, EState *estate, int feature, int count);
  extern void ExecutorEnd(QueryDesc *queryDesc, EState *estate);
  extern HeapTuple ExecConstraints(char *caller, Relation rel, HeapTuple tuple);
+ #ifdef QUERY_LIMIT
+ extern int ExecutorLimit(int limit);
+ extern int ExecutorGetLimit(void);
+ #endif

/*
* prototypes from functions in execProcnode.c

#21Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Tatsuo Ishii (#20)
Re: [HACKERS] Alternative to LIMIT in SELECT ?

Applied, with one question.

I've found ExecutorLimit() (in executor/execMain.c) is useful for me
especially when issuing an ad-hock query via psql. I personally use
the function with customized set command.

Looks interesting. So where are the patches? :)

I'll post pacthes within 24 hours:-)

Here it is.
+ #ifdef QUERY_LIMIT
+ static bool
+ parse_query_limit(const char *value)
+ {
+   int32 limit;
+ 
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }

Any idea how 'value' could be null? I could not see how that would
happen. I can see how GEQO could have a NULL when you say ON, and no
value. Same with rplans.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#22Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Bruce Momjian (#21)
Re: [HACKERS] Alternative to LIMIT in SELECT ?
+ #ifdef QUERY_LIMIT
+ static bool
+ parse_query_limit(const char *value)
+ {
+   int32 limit;
+ 
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }

Any idea how 'value' could be null? I could not see how that would
happen.

Not sure. I just followed the way other set commands are doing.

I can see how GEQO could have a NULL when you say ON, and no
value. Same with rplans.

Hmm... I think in that case, 'value' would be 'ON', not NULL. right?
--
Tatsuo Ishii
t-ishii@sra.co.jp

#23Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Tatsuo Ishii (#22)
Re: [HACKERS] Alternative to LIMIT in SELECT ?
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }

Any idea how 'value' could be null? I could not see how that would
happen.

Not sure. I just followed the way other set commands are doing.

This is how RESET is implemented.

- Tom

#24Matthew N. Dodd
winter@jurai.net
In reply to: Bruce Momjian (#9)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Bruce Momjian wrote:

My guess in a web application is that the transaction is started for
every new page, so you can't have transactions spanning SQL sessions.

LIMIT theoretically would allow you to start up where you left off.

It really does depend largely on the architecture of the website, doesn't
it.

LIMIT probably allows web site developers a quick and dirty way to do what
should properly be done with a web-DB proxy. I seem to remember mod_perl
having a solution for this sort of thing.

--
| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |
| winter@jurai.net | This Space For Rent | ix86,sparc,m68k,pmax,vax |
| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |

#25Matthew N. Dodd
winter@jurai.net
In reply to: Eric Lee Green (#14)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Eric Lee Green wrote:

it's really hard to test, unfortunately, due to the differing natures
of MySQL and PostgreSQL. MySQL starts up a connection very fast while
PostgreSQL takes awhile (has anybody done work on the "herd of
servers" concept to tackle that?).

Is MySQL really all that much faster? I've got a large number of CLI
utilities that pull data from a DB on a central server and I'm lagging on
scroll speed in my xterms for the most part. I've yet to see any
measurable lag in connection setup.

My hardware isn't all that fast either. (Ultra5 client, Ultra1/170E
server.)

--
| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |
| winter@jurai.net | This Space For Rent | ix86,sparc,m68k,pmax,vax |
| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? |

#26Jan Wieck
jwieck@debis.com
In reply to: Eric Lee Green (#5)
Re: [HACKERS] What about LIMIT in SELECT ?

Eric Lee Green wrote:

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses) allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs and etc.

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

I'm missing something. Well it's right that in the stateless
web environment a cursor has to be declared and closed for
any single CGI call. But even if you have a LIMIT clause,
your CGI must know with which key to start.

So your query must look like

SELECT ... WHERE key > 'last processed key' ORDER BY key;

And your key must be unique (or at least contain no duplicate
entries) or you might miss some rows between the pages (imagine
100 Browns in the table where the last processed key was a Brown
while using LIMIT).
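
One way around that (the id column and the values here are invented) is to
add a unique column as a tie-breaker to both the qualification and the
ORDER BY:

SELECT ... WHERE key > 'Brown' OR (key = 'Brown' AND id > 4711)
ORDER BY key, id;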

In postgres you could actually do the following (but read on
below - it's not optimized correctly)

BEGIN;
DECLARE c CURSOR FOR SELECT ... WHERE key > 'last' ORDER BY key;
FETCH 20 IN c;
(process the 20 rows in CGI)
CLOSE c;
COMMIT;

Having LIMIT looks more elegant and has less overhead in CGI-
backend communication. But the cursor version is SQL
standard and portable.

I wanted very badly to use PostgreSQL for a web project I'm working on,
but it just wouldn't do the job :-(.

I've done some tests and what I found out might be a bug in
PostgreSQL's query optimizer. I have a table with 25k rows
where key is a text field with a unique index. Now I used
EXPLAIN for some queries:

SELECT * FROM tab;

results in a seqscan - expected.

SELECT * FROM tab ORDER BY key;

results in a sort->seqscan - I would have
expected an indexscan!

SELECT * FROM tab WHERE key > 'G';

results in an indexscan - expected.

SELECT * FROM tab WHERE key > 'G' ORDER BY key;

results in a sort->indexscan - hmmm.

These results stay the same even if I blow up the table by
duplicating all rows (now with a non-unique index) to 100k
rows and have them presorted in the table.

Needless to say that everything is vacuum'd for statistics.

The last one is the query we would need in the web
environment used over a cursor as in the example above. But
due to the sort, the backend selects until the end of the
table, sorts them and then returns only the first 20 rows
(out of the sort's result).

This is very painful if the qualification (key > ...) points
to the beginning of the key list.

Looking at planner.c I can see that if there is a sortClause
in the parsetree, the planner creates a sort node and does
not check at all whether there is an index that could be used
to do it. In the examples above, the sort is absolutely
needless because the index scan will already return the
tuples in the right order :-).

Somewhere deep in my brain I found a statement that sorting
already-sorted data isn't only unnecessary (unless the order
changes), it is also slow compared to sorting randomly
ordered data.

Can we fix this before the 6.4 release, will it be a post-6.4
item, or am I doing something wrong here? I think it isn't a fix
(it's a planner enhancement) so it should really be a post-6.4
item.

For now, the only possibility is to omit the ORDER BY in the
query and hope the planner will always generate an index scan
(because of the qualification 'key > ...'). Doing so, I
selected 20 rows multiple times (with the last-key qual like
a CGI would do) in separate transactions. Using a cursor and
fetch speeds up the access by a factor of 1000! But it is
unsafe and thus NOT RECOMMENDED! It's only a test of whether cursors
can do the LIMIT job - and they could if the planner would do
a better job.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#27Oleg Bartunov
oleg@sai.msu.su
In reply to: Jan Wieck (#26)
Re: [HACKERS] What about LIMIT in SELECT ?

On Wed, 14 Oct 1998, Jan Wieck wrote:

Date: Wed, 14 Oct 1998 13:09:21 +0200 (MET DST)
From: Jan Wieck <jwieck@debis.com>
To: Eric Lee Green <eric@linux-hw.com>
Cc: jeff@remapcorp.com, hackers@postgreSQL.org
Subject: Re: [HACKERS] What about LIMIT in SELECT ?

Eric Lee Green wrote:

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor? it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses) allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs and etc.

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's and etc. anyhow -- but only in a
stateful enviornment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

I'm missing something. Well it's right that in the stateless
web environment a cursor has to be declared and closed for
any single CGI call. But even if you have a LIMIT clause,
your CGI must know with which key to start.

It is not a problem for the CGI script to know which key to start from.
Without LIMIT, the backend will do a *FULL* selection on every CGI call,
and the cursor helps only in fetching a definite number of rows;
in principle I can do this in the CGI script. Also, a cursor
returns data back in ASCII format (man l declare) and this requires
additional work by the backend to convert data from its intrinsic (binary)
format. A right implementation of LIMIT offset,number_of_rows could be
a great win and make postgres the superior free database engine for
Web applications. Many colleagues of mine used mysql instead of
postgres just because of the lack of LIMIT. Tatsuo posted a patch
for set query_limit to 'num'; I just tested it and it seems to
work fine. Now we only need the possibility to specify an offset,
say
set query_limit to 'offset,num'
( Tatsuo, how difficult is it to do this ?)
and the LIMIT problem will be gone.

I wonder how many useful patches could be hidden from people :-),

Regards,

Oleg

PS.

Tatsuo, do you have patch for 6.3.2 ?
I can't wait for 6.4 :-)
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

#28Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Jan Wieck (#26)
Re: [HACKERS] What about LIMIT in SELECT ?

I've done some tests and what I found out might be a bug in
PostgreSQL's query optimizer.
SELECT * FROM tab ORDER BY key;
results in a sort->seqscan - I would have
expected an indexscan!

Given that a table _could_ be completely unsorted on disk, it is
probably reasonable to suck the data in for a possible in-memory sort
rather than skipping around the disk to pick up individual tuples via
the index. Don't know if vacuum has a statistic on "orderness"...

SELECT * FROM tab WHERE key > 'G' ORDER BY key;
results in a sort->indexscan - hmmm.
The last one is the query we would need in the web
environment used over a cursor as in the example above. But
due to the sort, the backend selects until the end of the
table, sorts them and then returns only the first 20 rows
(out of sorts result).

So you are saying that for this last case the sort was unnecessary? Does
the backend traverse the index in the correct order to guarantee that
the tuples are coming out already sorted? Does a hash index give the
same plan (I would expect a sort->seqscan for a hash index)?

- Tom

#29Jan Wieck
jwieck@debis.com
In reply to: Oleg Bartunov (#27)
Re: [HACKERS] What about LIMIT in SELECT ?

Oleg Bartunov wrote:

This is not a problem for CGI-script to know which key to start.

Never meant that would be a problem. A FORM variable will of
course do this.

Without LIMIT every CGI call backend will do *FULL* selection
and cursor helps just in fetching a definite number of rows,
in principle I can do this with CGI-script. Also, cursor
returns data back in ASCII format (man l declare) and this requires
additional job for backend to convert data from intrinsic (binary)
format. Right implementation of LIMIT offset,number_of_rows could be
a great win and make postgres superior free database engine for
Web applications. Many colleagues of mine used mysql instead of

That's the point I was missing. The offset!

postgres just because of lacking LIMIT. Tatsuo posted a patch
for set query_limit to 'num', I just tested it and seems it
works fine. Now, we need only possibility to specify offset,
say
set query_limit to 'offset,num'
( Tatsuo, How difficult to do this ?)
and LIMIT problem will ne gone.

I think you haven't read my posting completely. Even with the
executor limit, the complete scan into the sort is done by
the backend. You need to specify ORDER BY to get the same
list again (without it the offset doesn't make sense). But
currently, ORDER BY forces a sort node into the query plan.

What the executor limit controls is how many rows will be
returned from the sorted data, not what goes into the sort.
Filling the sort and sorting the data consumes most of the
time of the query's execution.

I haven't looked at Tatsuo's patch very closely. But if it
limits the amount of data going into the sort (on ORDER BY),
it will break it! The requested ordering could be different
from what the chosen index might return. The index used is
chosen by the planner based on the qualifications given, not the
ordering wanted.

So if you select WHERE b = 1 ORDER BY a, then it will use an
index on attribute b to match the qualification. The complete
result of that index scan goes into the sort to get ordered
by a. If now the executor limit stops sort filling after the
limit is exceeded, only the same tuples will go into the sort
every time. But they have nothing to do with the requested
order by a.

What LIMIT first needs is a planner enhancement. In file
backend/optimizer/plan/planner.c line 284 it must be checked
whether the actual plan is an indexscan, whether the indexed attributes
are all the same as those in the given sort clause, and whether
the requested sort order (operator) is what the index
will return. If that all matches, it can ignore the sort
clause and return the index scan itself.
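
Using the earlier example from this thread, with a btree index on key such
a check would let

SELECT * FROM tab WHERE key > 'G' ORDER BY key;

be planned as a plain indexscan instead of sort->indexscan, since the
btree scan already returns the tuples in key order.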

Second enhancement must be the handling of the offset. In
the executor, the index scan must skip offset index tuples
before returning the first. But NOT if the plan isn't a
1-table-index-scan. In that case the result tuples (from the
topmost unique/join/whatever node) have to be skipped.

With these enhancements, the index tuples to be skipped
(offset) will still be scanned, but not the data tuples they
point to. Index scanning might be somewhat faster.

This all will only speedup simple 1-table-queries, no joins
or if the requested order isn't that what the index exactly
returns.

Anyway, I'll take a look if I can change the planner to omit
the sort if the tests described above are true. I think it
would be good anyway.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#30Jan Wieck
jwieck@debis.com
In reply to: Thomas G. Lockhart (#28)
Re: [HACKERS] What about LIMIT in SELECT ?

SELECT * FROM tab WHERE key > 'G' ORDER BY key;
results in a sort->indexscan - hmmm.
The last one is the query we would need in the web
environment used over a cursor as in the example above. But
due to the sort, the backend selects until the end of the
table, sorts them and then returns only the first 20 rows
(out of sorts result).

So you are saying that for this last case the sort was unnecessary? Does
the backend traverse the index in the correct order to guarantee that
the tuples are coming out already sorted? Does a hash index give the
same plan (I would expect a sort->seqscan for a hash index)?

Good point! As far as I can see, the planner chooses index
usage only depending on the WHERE clause. A hash index is
only usable when the given qualification uses = on the
indexed attribute(s).

If the sortClause exactly matches the indexed attributes of
the ONE used btree index and all operators request ascending
order, I think the index scan already returns the correct
order. Who knows for certain?

Addition to my last posting: ... and if the index scan is
using a btree index ...

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#31Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Tatsuo Ishii (#22)
Re: [HACKERS] Alternative to LIMIT in SELECT ?
+ #ifdef QUERY_LIMIT
+ static bool
+ parse_query_limit(const char *value)
+ {
+   int32 limit;
+ 
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }

Any idea how 'value' could be null? I could not see how that would
happen.

Not sure. I just followed the way other set commands are doing.

I can see how GEQO could have a NULL when you say ON, and no
value. Same with rplans.

Hmm... I think in that case, 'value' would be 'ON', not NULL. right?

Yes, I think so, value would be ON. I will look into it.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#32Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Thomas G. Lockhart (#23)
Re: [HACKERS] Alternative to LIMIT in SELECT ?
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }

Any idea how 'value' could be null? I could not see how that would
happen.

Not sure. I just followed the way other set commands are doing.

This is how RESET is implemented.

Oh.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#33Terry Mackintosh
terry@terrym.com
In reply to: Marc G. Fournier (#3)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Marc G. Fournier wrote:

What is wrong with the already implemented FETCH command?

Ah ... I did not know about it :)
Guess I should RTFM.

Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#34Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Thomas G. Lockhart (#28)
Re: [HACKERS] What about LIMIT in SELECT ?

I've done some tests and what I found out might be a bug in
PostgreSQL's query optimizer.
SELECT * FROM tab ORDER BY key;
results in a sort->seqscan - I would have
expected an indexscan!

Given that a table _could_ be completely unsorted on disk, it is
probably reasonable to suck the data in for a possible in-memory sort
rather than skipping around the disk to pick up individual tuples via
the index. Don't know if vacuum has a statistic on "orderness"...

Thomas is correct on this. Vadim has run some tests, and with our
optimized psort() code, the in-memory sort is often faster than using
the index to get the tuples, because you are jumping all over the drive.
I don't remember the numbers, but obviously there is a break-even point where
getting X rows using the index on a table of Y rows is faster, but
getting X+1 rows on a table of Y rows is faster by getting all the rows
sequentially and doing the sort.

You would have to pick only certain queries (no joins, index matches
ORDER BY), take the number of rows requested and the number of rows
selected, and figure out if it is faster to use the index, or a
sequential scan and do the ORDER BY yourself.

Add to this the OFFSET capability. I am not sure how you are going to
get into the index and start at the n-th entry, unless perhaps you just
sequentially scan the index.

In fact, many queries just get columns that are already indexed, and we could just
pull the data right out of the index.
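
For illustration (names are made up), a query that touches only the indexed
column could, in principle, be answered from the index alone:

    -- with an index on col1, the heap would not need to be visited at all
    SELECT col1 FROM tab WHERE col1 > 'G' ORDER BY col1;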

I have added this to the TODO list:

* Pull requested data directly from indexes, bypassing heap data

I think this has to be post-6.4 work, but I think we need to work in
this direction. I am holding off any cnfify fixes for post-6.4, but a
6.4.1 performance release certainly is possible.

But you are correct that in certain cases where an index is already being
used on a query, you could just skip the sort IF you used the index to
get the rows from the base table.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#35Terry Mackintosh
terry@terrym.com
In reply to: Jeff Hoffmann (#4)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

Hi, my 2 cents...

I agree completely, LIMIT would be VERY usefull in web based apps, which
is all I run. It does not matter to me if it is not part of a formal
standard. The idea is so common that it is a defacto standard.

I'm not familiar with MySQL and using "LIMIT", but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records from the
cursor? It's a very noticeable improvement when you only want the first 20
out of 500 in a 200k-record database, at least.
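
A minimal sketch of that cursor-and-fetch approach (table and column names are
made up):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM tab ORDER BY key;
    FETCH 20 IN c;    -- first page of 20 rows
    FETCH 20 IN c;    -- next page, while the transaction stays open
    END;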

Yes, while this is an improvement, it still has to do the entire query.
It would be nice if the query could be terminated after a designated number
of rows were found, thus freeing system resources that are otherwise
consumed.
I have seen web users run ridiculous queries, like a search for the
letter 'a', and it happens to be a substring search; now the box goes
apeshit for 5 or 10 minutes while it basically gets the whole db as the
search result. All this before you can do a 'FETCH'. As I understand
FETCH, I will need to read up on it.

Note that I do not have any databases that large on my box, I was thinking
back to my VFP/NT experiences.

Have a great day
Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#36Terry Mackintosh
terry@terrym.com
In reply to: Bruce Momjian (#7)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Bruce Momjian wrote:

See the fetch manual page.

OK, I will.

This only prevents all row results from being transferred to the client.

Yes, this is good, but this is only half the problem ...

The entire query must be evaluated, even if you only want just the first

... this is the other half.

few rows. Consider a query that has an order by. There is no way
to return any rows until the entire query is evaluated and sorted.

This is where I was hoping one of you guru types might have some insight
-- how to stop a query short at X rows, even if it has an order by.

No way?

Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#37Terry Mackintosh
terry@terrym.com
In reply to: Bruce Momjian (#8)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Bruce Momjian wrote:

What we could do is _if_ there is only one table (no joins), and an index
exists that matches the ORDER BY, we could use the index to
short-circuit the query.

I have added this item to the TODO list:

* Allow LIMIT ability on single-table queries that have no ORDER BY or
a matching index

This looks do-able, and a real win. Would this make web applications
happier? If there is an ORDER BY and no index, or a join, I can't
figure out how we would short-circuit the query.
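
As a rough sketch of the short-circuit case being described (names are made
up): a single table with a btree index that matches the ORDER BY, so the index
scan already returns rows in the requested order and could in principle be
stopped after the first N rows.

    CREATE INDEX tab_col1_idx ON tab (col1);
    SELECT * FROM tab ORDER BY col1;    -- no join; index order = requested order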

Yes, this would do for most of my apps.
It may just be my lack of sophistication, but I find that most web apps
are very simple in nature/table layout, and thus queries are often on only
a single table.

Thanks
Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#38Terry Mackintosh
terry@terrym.com
In reply to: Bruce Momjian (#9)
Re: [HACKERS] What about LIMIT in SELECT ?

On Tue, 13 Oct 1998, Bruce Momjian wrote:

My guess in a web application is that the transaction is started for
every new page, so you can't have transactions spanning SQL sessions.

LIMIT theoretically would allow you to start up where you left off.

************ EXACTLY !-)
Plus, it could also be used to limit bogus-run-away queries.

Terry Mackintosh <terry@terrym.com> http://www.terrym.com
sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!

#39Taral
taral@mail.utexas.edu
In reply to: Bruce Momjian (#34)
RE: [HACKERS] What about LIMIT in SELECT ?

Thomas is correct on this. Vadim has run some tests, and with our
optimized psort() code, the in-memory sort is often faster than using
the index to get the tuples, because you are jumping all over the drive.
I don't remember the numbers, but obviously there is a break-even point where
getting X rows using the index on a table of Y rows is faster, but
getting X+1 rows on a table of Y rows is faster by getting all the rows
sequentially and doing the sort.

You would have to pick only certain queries (no joins, index matches
ORDER BY), take the number of rows requested and the number of rows
selected, and figure out if it is faster to use the index, or a
sequential scan and do the ORDER BY yourself.

Since a sort loads the data into memory anyway, how about speeding up the
sort by using the index? Or does that take up too much memory? (approx 40%
more than the data alone, I think)

TAra

#40Noname
jwieck@debis.com
In reply to: Bruce Momjian (#34)
Re: [HACKERS] What about LIMIT in SELECT ?

But you are correct that in certain cases where an index is already being
used on a query, you could just skip the sort IF you used the index to
get the rows from the base table.

Especially in the case where

SELECT ... WHERE key > 'val' ORDER BY key;

creates a Sort->IndexScan plan. The index scan already jumps
around on the disc to collect the sort's input and the sort
finally returns exactly the same output (if the used index is
only on key).

And this is the case for large tables. The planner first
decides to use an index scan due to the WHERE clause and
later it notices the ORDER BY clause and creates a sort over
the scan.

I'm actually hacking around on it to see what happens if I
suppress the sort node in some cases.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#41Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Taral (#39)
Re: [HACKERS] What about LIMIT in SELECT ?

Thomas is correct on this. Vadim has run some tests, and with our
optimized psort() code, the in-memory sort is often faster than using
the index to get the tuples, because you are jumping all over the drive.
I don't remember the numbers, but obviously there is a break-even point where
getting X rows using the index on a table of Y rows is faster, but
getting X+1 rows on a table of Y rows is faster by getting all the rows
sequentially and doing the sort.

You would have to pick only certain queries (no joins, index matches
ORDER BY), take the number of rows requested and the number of rows
selected, and figure out if it is faster to use the index, or a
sequential scan and do the ORDER BY yourself.

Since a sort loads the data into memory anyway, how about speeding up the
sort by using the index? Or does that take up too much memory? (approx 40%
more than the data alone, I think)

Not sure you can do that. The index points to heap tuples/tids, and
though there are tids in the rows, you can't access them as tids in
memory.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#42Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Bruce Momjian (#32)
Re: [DOCS] Re: [HACKERS] Alternative to LIMIT in SELECT ?

OK, I've installed new beta html docs on the web page, and put them into
tar files in ftp://postgresql.org/pub/patches.

I've also set my umask to allow group write permissions, so that the
pgsql group on hub.org can use my directories. Once we've gotten the
jade problem solved (see below) you should be able to regenerate docs at
will with a simple "gmake postgres.html" from my cvs tree.

There is a fully functioning jade/docbook installation on hub.org
(postgresql.org).

Note to scrappy:

jade seems to have gone away, or I've got my paths wrong:

$ gmake user.html
(rm -rf *.htm)
jade -D ref -D ../graphics -V %use-id-as-filename% -d
/home/users/t/thomas/db118.d/docbook/html/docbook.dsl -t sgml user.sgml
gmake: jade: Command not found
gmake: *** [user.html] Error 127

Help!

- Tom

#43Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Bruce Momjian (#34)
Re: [HACKERS] What about LIMIT in SELECT ?

Thomas is correct on this. Vadim has run some tests, and with our
optimized psort() code, the in-memory sort is often faster than using
the index to get the tuples, because you are jumping all over the drive.
I don't remember the numbers, but obviously there is a break-even point where
getting X rows using the index on a table of Y rows is faster, but
getting X+1 rows on a table of Y rows is faster by getting all the rows
sequentially and doing the sort.

You would have to pick only certain queries (no joins, index matches
ORDER BY), take the number of rows requested and the number of rows
selected, and figure out if it is faster to use the index, or a
sequential scan and do the ORDER BY yourself.

Add to this the OFFSET capability. I am not sure how you are going to
get into the index and start at the n-th entry, unless perhaps you just
sequentially scan the index.

In fact, many queries just get columns that are already indexed, and we could just
pull the data right out of the index.

I have added this to the TODO list:

* Pull requested data directly from indexes, bypassing heap data

I think this has to be post-6.4 work, but I think we need to work in
this direction. I am holding off any cnfify fixes for post-6.4, but a
6.4.1 performance release certainly is possible.

But you are correct that in certain cases where an index is already being
used on a query, you could just skip the sort IF you used the index to
get the rows from the base table.

I have had more time to think about this. Basically, for pre-sorted
data, our psort code is very fast, because it does not need to sort
anything. It just moves the rows in and out of the sort memory. Yes,
it could be removed in some cases, and probably should be, but it is not
going to produce great speedups.

The more general case I will describe below.

First, let's look at a normal query:

SELECT *
FROM tab
ORDER BY col1

This is not going to use an index, and probably should not because it is
faster for large tables to sort them in memory, rather than moving all
over the disk. For small tables, if the entire table fits in the buffer
cache, it may be faster to use the index, but on a small table the sort
doesn't take very long either, and the buffer cache effectiveness is
affected by other backends using it, so it may be better not to count on
it for a speedup.

However, if you only want the first 10 rows, that is a different story.
We pull all the rows into the backend, sort them, then return 10 rows.
The query, if we could do it, should be written as:

SELECT *
FROM tab
WHERE col1 < some_unknown_value
ORDER BY col1

In this case, the optimizer looks at the column statistics, and properly
uses an index to pull only a small subset of the table. This is the
type of behavior people want for queries returning only a few values.

But, unfortunately, we don't know that mystery value.

Now, everyone agrees we need an index matching the ORDER BY to make this
query quick, but we don't know that mystery value, so currently we
execute the whole query, and do a fetch.

What I am now thinking is that maybe we need a way to walk around that
index. Someone months ago asked how to do that, and we told him he
couldn't, because this is not a C-ISAM/dbm type database. However, if we
could somehow pass into the query the index location we want to start
at, and how many rows we need, that would solve our problem, and perhaps
even allow joined queries to work, assuming the table in the ORDER BY is
in an outer join loop.

SELECT *
FROM tab
WHERE col1 < some_unknown_value
ORDER BY col1
USING INDEX tab_idx(452) COUNT 100

where 452 is the 452nd index entry, and COUNT is the number of index rows
you want to process. The query may return more or fewer than 100 rows if
there is a join and it joins to zero or more than one row in the joined
table, but this seems like perhaps a good way to go at it. We need to
do it this way because if a single index row returns 4 result rows, and
only two of the four rows fit in the number of rows returned as set by the
user, it is hard to restart the query at the proper point, because you
would have to process the index rows a second time, and return just part
of the result, and that is hard.

If the index changes, or rows are added, the results are going to be
unreliable, but that is probably going to be true of any state-less
implementation we can devise.

I think this may be fairly easy to implement. We could sequentially scan
the index to get to the 452nd row. That is going to be quick. We can
pass the 452 into the btree index code, so only a certain range of index
tuples are returned, and the system believes it has processed the entire
query, while we know it hasn't. Doesn't really work with hash, so we
will not allow it for those indexes.

To make it really easy, we could implement it as a 'SET' command, so we
don't actually have it as part of the query and don't have to pass it around
through all the modules. You would do the proper 'SET' before running
the query. The optimizer would look at the 'SET' value to force index use.

SET INDEX TO tab_idx START 452 COUNT 100

or

SET INDEX TO tab_idx FROM 452 COUNT 451

There would have to be some way to signal that the end of the index had
been reached, because returning zero rows is not enough of a guarantee
in a joined SELECT.

Comments?

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#44Noname
jwieck@debis.com
In reply to: Bruce Momjian (#43)
Re: [HACKERS] What about LIMIT in SELECT ?

I have had more time to think about this. Basically, for pre-sorted
data, our psort code is very fast, because it does not need to sort
anything. It just moves the rows in and out of the sort memory. Yes,
it could be removed in some cases, and probably should be, but it is not
going to produce great speedups.

And I got the time to hack around about this.

I hacked in a little check into the planner that compares
the sortClause against the key field list of an index scan
and just suppresses the sort node if it exactly matches and
all sort operators are "<".

I tested with a 10k row table where key is a text field. The
base query is a

SELECT ... WHERE key > 'val' ORDER BY key;

The used 'val' is always a key that is close to the first of
all keys in the table ('' on the first query and the last
selected value on subsequent ones).

Scenario 1 (S1) uses exactly the above query but processes
only the first 20 rows from the result buffer. Thus the
frontend receives nearly the whole table.

Scenario 2 (S2) uses a cursor and FETCH 20. But closes the
cursor and creates a new one for the next selection (only
with another 'val') as it would occur in a web application.
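
Scenario 2 corresponds roughly to this pattern (table and key names are made
up):

    BEGIN;
    DECLARE c CURSOR FOR
        SELECT * FROM tab WHERE key > 'val' ORDER BY key;
    FETCH 20 IN c;
    CLOSE c;
    END;
    -- the next page repeats this with 'val' replaced by the last key fetched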

If there is no index on key, the backend will always do a
Sort->SeqScan, and due to the 'val' being close to the lowest
existing key nearly all tuples get scanned and put into the
sort. S1 here runs about 10 seconds and S2 about 6 seconds.
The speedup in S2 results from the reduced overhead of
sending unwanted tuples to the frontend.

Now with a btree index on key and an unpatched backend.
Produced plan is always a Sort->IndexScan. S1 needs 16
seconds and S2 needs 12 seconds. Again nearly all data is put
into the sort but this time over the index scan and that is
slower.

Last with the btree index on key and the patched backend.
This time the plan is a plain IndexScan because the ORDER BY
clause exactly matches the sort order of the chosen index.
S1 needs 13 seconds and S2 less than 0.2! This dramatic
speedup comes from the fact that this time the index scan is
the toplevel executor node and the executor run is stopped
after 20 tuples have been selected.

Analysis of the above timings:

If there is an ORDER BY clause, using an index scan is the
clever way if the indexqual dramatically reduces the
amount of data selected and sorted. I think this is the
normal case (who really selects nearly all rows from a 5M row
table?). So choosing the index path is correct. This will
hurt if someone really selects most of the rows and the index
scan jumps over the disc. But here the programmer should use
an unqualified query to perform a seqscan and do the
qualification in the frontend application.

The speedup for the cursor/fetch scenario is so impressive
that I'll create a post 6.4 patch. I don't want it in 6.4
because there is absolutely no query in the whole regression
test, where it suppresses the sort node. So we have
absolutely no check that it doesn't break anything.

For a web application, that can use a unique key to select
the next amount of rows, it will be a big win.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#45Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Noname (#44)
Re: [HACKERS] What about LIMIT in SELECT ?

postgres just because of lacking LIMIT. Tatsuo posted a patch
for set query_limit to 'num'; I just tested it and it seems to
work fine. Now we only need the possibility to specify an offset,
say
set query_limit to 'offset,num'
(Tatsuo, how difficult would this be?)
and the LIMIT problem will be gone.

I think you haven't read my posting completely. Even with the
executor limit, the complete scan into the sort is done by
the backend. You need to specify ORDER BY to get the same
list again (without it the offset doesn't make sense). But
currently, ORDER BY forces a sort node into the query plan.

I think we have understood your point. set query_limit is just an
easy alternative to using cursor and fetch.

I haven't looked at Tatsuo's patch very well. But if it
limits the amount of data going into the sort (on ORDER BY),
it will break it! The requested ordering could be different
from what the chosen index might return. The used index is
chosen by the planner upon the qualifications given, not the
ordering wanted.

I think it limits the final result. When query_limit is set,
the arg "numberTuples" of ExecutePlan() is set to it instead of 0
(this means no limit).

Talking about "offset," it shouldn't be very difficult. I guess all we
have to do is adding a new arg "offset" to ExecutePlan() then making
obvious modifications. (and of course we have to modify set
query_limit syntax but it's trivial)
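
As a sketch, the existing form and the proposed extension (the extension did
not exist at this point) would look like:

    set query_limit to '150';        -- existing: return at most 150 rows
    set query_limit to '100,150';    -- proposed 'offset,num' form: skip 100, return 150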

However, before going ahead, I would like to ask other hackers about
this direction. This might be convenient for some users, but still the
essential performance issue would remain. In other words, this is a
short-term solution, not an intrinsic one, IMHO.
--
Tatsuo Ishii
t-ishii@sra.co.jp

#46Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Oleg Bartunov (#1)
RE: [HACKERS] What about LIMIT in SELECT ?

Hi all.
I didn't follow all the posts in this thread.
So this post may be off the mark.

I think current PostgreSQL lacks concern for the response time to get the first
rows quickly.
For example, queries with an ORDER BY clause necessarily include sort steps
and process all target rows just to get the first rows.
So I modified my code for the ORDER BY cases and am using it on a trial basis.
I don't understand the PostgreSQL sources well, so my code is not complete.

I modified my code for the following 2 cases.

1. In many cases the following query uses an index scan.
SELECT * from ... where key > ...;  (where (key) is an index)
If so, we can omit the sort step from the access plan for the following
query.
SELECT * from ... where key > ... order by key;

Currently cursors without sort steps may be sensitive, different from
cursors with sort steps. But no one minds that.

2. In many cases the following query uses an index scan, same as case 1.
SELECT * from ... where key < ...;  (where (key) is an index)
If so, and if we scan the index backward, we can omit the sort step from
the access plan for the following query.
SELECT * from ... where key < ... order by key desc;

To achieve this (backward scan), I used hidden (provided for the future?) code
that is never executed and is not necessarily correct.

In the following cases I didn't modify my code to use an index scan,
because I couldn't formulate how to tell the PostgreSQL optimizer whether
the response to get the first rows is needed or the throughput to process
sufficiently many target rows is needed.

3. The access plan made by the current PostgreSQL optimizer for a query with
an ORDER BY clause doesn't include an index scan.

I thought of using Tatsuo's QUERY_LIMIT to decide that the response
is needed. It is sufficient but not necessary?
In Oracle the hints FIRST_ROWS and ALL_ROWS are used.

Thanks.

Hiroshi Inoue
Inoue@tpf.co.jp

#47Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#44)
Re: [HACKERS] What about LIMIT in SELECT ?

I have had more time to think about this. Basically, for pre-sorted
data, our psort code is very fast, because it does not need to sort
anything. It just moves the rows in and out of the sort memory. Yes,
it could be removed in some cases, and probably should be, but it is not
going to produce great speedups.

And I got the time to hack around about this.

I hacked in a little check into the planner that compares
the sortClause against the key field list of an index scan
and just suppresses the sort node if it exactly matches and
all sort operators are "<".

I tested with a 10k row table where key is a text field. The
base query is a

SELECT ... WHERE key > 'val' ORDER BY key;

The used 'val' is always a key that is close to the first of
all keys in the table ('' on the first query and the last
selected value on subsequent ones).

This is good stuff. I want to think about it for a day. Sounds very
promising.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#48Noname
jwieck@debis.com
In reply to: Tatsuo Ishii (#45)
Re: [HACKERS] What about LIMIT in SELECT ?

Tatsuo Ishii wrote:

I think we have understood your point. set query_limit is just an
easy alternative to using cursor and fetch.

I haven't looked at Tatsuo's patch very well. But if it
limits the amount of data going into the sort (on ORDER BY),
it will break it! The requested ordering could be different
from what the chosen index might return. The used index is
chosen by the planner upon the qualifications given, not the
ordering wanted.

I think it limits the final result. When query_limit is set,
the arg "numberTuples" of ExecutePlan() is set to it instead of 0
(this means no limit).

Talking about "offset," it shouldn't be very difficult. I guess all we
have to do is adding a new arg "offset" to ExecutePlan() then making
obvious modifications. (and of course we have to modify set
query_limit syntax but it's trivial)

The offset could become

FETCH n IN cursor [OFFSET n];

and

SELECT ... [LIMIT offset,count];
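
Neither form existed yet; purely as an illustration (names are made up),
fetching the third page of 20 rows could then look like:

    SELECT * FROM tab WHERE key > 'G' ORDER BY key LIMIT 40,20;
    -- or, over an already-declared cursor:
    FETCH 20 IN c OFFSET 40;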

The FETCH command already calls ExecutorRun() with the given
count (the tuple limit). Telling it the offset too is really
simple. And ExecutorRun() could check if the toplevel
executor node is an index scan. Skipping tuples during the
index scan requires that all qualifications are in the
indexqual, thus any tuple returned by it will become a final
result row (as it would be in the simple 1-table queries we
discussed). If that isn't the case, the executor must
fall back to skipping the final result tuples, and that happens after
any sort/merge of the complete result set has been processed.
That would only reduce communication to the client and memory
required there to buffer the result set (not a bad thing
either).

ProcessQueryDesc() in tcop/pquery.c also calls ExecutorRun()
but with a constant 0 tuple count. Having offset and count in
the parsetree would make it work without any state variables or
SET command. And it's the only clean way to restrict LIMIT to
SELECT queries. Any LIMIT thrown into ExecutorRun() from
another place could badly hurt the rewrite system. Remember
that non-instead actions on insert/update/delete are
processed before the original query! And what about SQL
functions that get processed during the evaluation of another
query (view using an SQL function for count(*))?

It would be a little better to make the LIMIT values able to
be parameter nodes.
feature of the SPI manager for performance reasons.
Especially the offset value might there need to be a
parameter that the executor has to pick out first. If we
change the count argument of ExecutorRun to a List *limit,
this one could be NIL (to mean the old 0 count 0 offset
behaviour) or a list of two elements that both can be either
a Const or a Param of type int4. Easy for the executor to
evaluate.

The only places where ExecutorRun() is called are
tcop/pquery.c (queries from frontend), commands/command.c
(FETCH command), executor/functions.c (SQL functions) and
executor/spi.c (SPI manager). So it is easy to change the
call interface too.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#49Oleg Bartunov
oleg@sai.msu.su
In reply to: Noname (#48)
Re: [HACKERS] What about LIMIT in SELECT ?

This is a little bit off-topic.
I did some timings with the latest cvs on my real database
(all output redirected to /dev/null); the table contains 8798 records,
31 columns, and the order keys have indices.

1.select count(*) from work_flats;
0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+21minor)pagefaults 0swaps

2.select * from work_flats order by rooms, metro_id;
2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+2799minor)pagefaults 0swaps

3.set query_limit to '150';
SET VARIABLE
select * from work_flats order by rooms, metro_id;
0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

4.begin;
declare tt cursor for
select * from work_flats order by rooms, metro_id;
fetch 150 in tt;
end;
0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

As you can see, the timings for query_limit and the cursor are very similar;
I didn't expect this. So, in principle, an enhanced version of fetch
(with offset) would cover all we need from LIMIT, but query_limit would
still be useful, for example to restrict the load on the server.
Will all the enhancements you discussed go into 6.4?
I'm really interested in testing this stuff because I'm beginning a new project
and everything we discussed here is badly needed.

Regards,

Oleg

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

#50Hannu Krosing
hannu@trust.ee
In reply to: Oleg Bartunov (#49)
Re: [HACKERS] What about LIMIT in SELECT ?

Jan Wieck wrote:

And I got the time to hack around about this.

<...
a lovely explanation of a patch to achieve 60X speedup
for a typical web query
...>

The speedup for the cursor/fetch scenario is so impressive
that I'll create a post 6.4 patch. I don't want it in 6.4
because there is absolutely no query in the whole regression
test, where it suppresses the sort node.

Good, then it works as expected ;)

More seriously, it is not within the powers of the current regression test
framework to test speed improvements (only the case where a
performance-wise bad implementation will actually crash the backend,
as in the cnfify problem, but AFAIK we don't test for those now).

So we have absolutely no check that it doesn't break anything.

If it did pass the regression, then IMHO it did not break anything.

I would vote for putting it in (maybe with a
'set fix_optimiser_stupidity on' safeguard to enable it). I see no
reason to postpone it to 6.4.1 and force almost everybody to first
patch their copy and then upgrade very soon.

I would even go far enough to call it a bugfix, as it does not really
introduce any new functionality, only fixes some existing functionality
so that much bigger databases can actually be used.

I would compare it in this sense to finding the places where
username/password get truncated below their actual values in pg_passwd
;)

---------------
Hannu Krosing

#51Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Hannu Krosing (#50)
Re: [HACKERS] What about LIMIT in SELECT ?

I would even go far enough to call it a bugfix, as it does not really
introduce any new functionality, only fixes some existing functionality
so that much bigger databases can actually be used.

I would compare it in this sense to finding the places where
username/password get truncated below their actual values in pg_passwd
;)

We just can't test it on the wide variation of people's queries, though
passing the regression test is a good indication it is OK.

However, we are very close to release. Yes, I know it is a pain to
wait, but we are not even done discussing all the options yet, and I
still have the cnfify fix to look at.

I am sure we will have post 6.4 releases, just like we have everyone
running 6.3.2 rather than 6.3. There will be other now-undiscovered
fixes in post 6.4 cleanup releases, and we will hopefully have a _full_
solution to the problem at that point.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#52Noname
jwieck@debis.com
In reply to: Hannu Krosing (#50)
Re: [HACKERS] What about LIMIT in SELECT ?

Hannu Krosing wrote:

Jan Wieck wrote:

The speedup for the cursor/fetch scenario is so impressive
that I'll create a post 6.4 patch. I don't want it in 6.4
because there is absolutely no query in the whole regression
test, where it suppresses the sort node.

Good, then it works as expected ;)

More seriously, it is not within the powers of the current regression test
framework to test speed improvements (only the case where a
performance-wise bad implementation will actually crash the backend,
as in the cnfify problem, but AFAIK we don't test for those now).

So we have absolutely no check that it doesn't break anything.

If it did pass the regression, then IMHO it did not break anything.

That's the point. The check for whether the sort node is required
returns TRUE for ALL queries of the regression. So the
behaviour when it returns FALSE is absolutely not tested.

I would vote for putting it in (maybe with a
'set fix_optimiser_stupidity on' safeguard to enable it). I see no
reason to postpone it to 6.4.1 and force almost everybody to first
patch their copy and then upgrade very soon.

I would even go far enough to call it a bugfix, as it does not really
introduce any new functionality, only fixes some existing functionality
so that much bigger databases can actually be used.

I can't call it a bugfix because it is only a performance win
in some situations. And I feel the risk is too high to put
untested code into the backend at BETA2 time. The max we
should do is to take this one and the LIMIT thing (maybe
implemented as I suggested lately), and put out a Web-
Performance-Release at the same time we release 6.4.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#53Hannu Krosing
hannu@trust.ee
In reply to: Noname (#52)
Re: [HACKERS] What about LIMIT in SELECT ?

Jan Wieck wrote:

Hannu Krosing wrote:

Jan Wieck wrote:

The speedup for the cursor/fetch scenario is so impressive
that I'll create a post 6.4 patch. I don't want it in 6.4
because there is absolutely no query in the whole regression
test, where it suppresses the sort node.

Good, then it works as expected ;)

More seriously, it is not within the powers of the current regression test
framework to test speed improvements (only the case where a
performance-wise bad implementation will actually crash the backend,
as in the cnfify problem, but AFAIK we don't test for those now).

So we have absolutely no check that it doesn't break anything.

If it did pass the regression, then IMHO it did not break anything.

That's the point. The check for whether the sort node is required
returns TRUE for ALL queries of the regression. So the
behaviour when it returns FALSE is absolutely not tested.

The only way to find out is to make a new test, maybe by comparing

select * from t where key > 1 order by key;

where the sort node can be dropped

and

select * from t where (key+1) > 2 order by key;

where it probably can't (I guess the optimiser is currently not smart
enough)

I can't call it a bugfix because it is only a performance win
in some situations.

In the extreme case the situation can be exhaustion of swap and disk
space resulting in a frozen computer, just trying to get the first 10 rows
from a table. It's not exactly a bug, but it's not the expected
behaviour either.

And I feel the risk is too high to put
untested code into the backend at BETA2 time. The max we
should do is to take this one and the LIMIT thing (maybe
implemented as I suggested lately), and put out a Web-
Performance-Release at the same time we release 6.4.

Or perhaps have the patches in /contrib in the 6.4 distribution
(preferably with an option to configure to apply them ;)

-----------------
Hannu

#54Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#44)
Re: [HACKERS] What about LIMIT in SELECT ?

OK, I have had my day of thinking, and will address this specific
posting first, because it is the most fundamental concerning the future
direction of the optimization.

And I got the time to hack around about this.

I hacked in a little check into the planner that compares
the sortClause against the key field list of an index scan
and just suppresses the sort node if it exactly matches and
all sort operators are "<".

I tested with a 10k row table where key is a text field. The
base query is a

SELECT ... WHERE key > 'val' ORDER BY key;

The used 'val' is always a key that is close to the first of
all keys in the table ('' on the first query and the last
selected value on subsequent ones).

Scenario 1 (S1) uses exactly the above query but processes
only the first 20 rows from the result buffer. Thus the
frontend receives nearly the whole table.

OK.

Scenario 2 (S2) uses a cursor and FETCH 20. But closes the
cursor and creates a new one for the next selection (only
with another 'val') as it would occur in a web application.

If there is no index on key, the backend will always do a
Sort->SeqScan, and due to the 'val' being close to the lowest
existing key nearly all tuples get scanned and put into the
sort. S1 here runs about 10 seconds and S2 about 6 seconds.
The speedup in S2 results from the reduced overhead of
sending unwanted tuples to the frontend.

Makes sense. All rows are processed, but not sent to client.

Now with a btree index on key and an unpatched backend.
Produced plan is always a Sort->IndexScan. S1 needs 16
seconds and S2 needs 12 seconds. Again nearly all data is put
into the sort but this time over the index scan and that is
slower.

VACUUM ANALYZE could affect this. Because it had no stats, it thought
index use would be faster, but in fact because 'val' was near the lowest
value, it was selecting 90% of the table, and would have been better with
a sequential scan. pg_statistics's low/hi values for a column could
have told that to the optimizer.

I know the good part of the posting is coming.

Last with the btree index on key and the patched backend.
This time the plan is a plain IndexScan because the ORDER BY
clause exactly matches the sort order of the chosen index.
S1 needs 13 seconds and S2 less than 0.2! This dramatic
speedup comes from the fact that this time the index scan is
the toplevel executor node and the executor run is stopped
after 20 tuples have been selected.

OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
of the index was taking an extra 3 seconds, which is 23%. That is a
lot more than I thought for the psort code, and shows we could gain a
lot by removing unneeded sorts from queries that are already using
matching indexes.

Just for clarity, added to TODO. I think everyone is clear on this one,
and its magnitude is a surprise to me:

* Prevent psort() usage when query already using index matching ORDER BY

Analysis of the above timings:

If there is an ORDER BY clause, using an index scan is the
clever way if the indexqual dramatically reduces the
amount of data selected and sorted. I think this is the
normal case (who really selects nearly all rows from a 5M row
table?). So choosing the index path is correct. This will
hurt if someone really selects most of the rows and the index
scan jumps over the disc. But here the programmer should use
an unqualified query to perform a seqscan and do the
qualification in the frontend application.

Fortunately, the optimizer already does the index selection for us, and
guesses pretty well if the index or sequential scan is better. Once we
implement the above removal of psort(), we will have to change the
timings because now you have to compare index scan against sequential
scan AND psort(), because in the index scan situation, you don't need
the psort(), assuming the ORDER BY matches the index exactly.

The speedup for the cursor/fetch scenario is so impressive
that I'll create a post 6.4 patch. I don't want it in 6.4
because there is absolutely no query in the whole regression
test, where it suppresses the sort node. So we have
absolutely no check that it doesn't break anything.

For a web application, that can use a unique key to select
the next amount of rows, it will be a big win.

OK, I think the reason the regression test did not show your code being
used is important.

First, most of the tables are small in the regression test, so sequential
scans are faster. Second, most queries using indexes are either joins,
which do the entire table, or equality tests, like col = 3, where there
is no matching ORDER BY because all the col values are 3. Again, your
code can't help with these.

The only regression-type code that would use it would be a 'col > 3'
qualification with a col ORDER BY, and there aren't many of those.

However, if we think of the actual application you are addressing, it is
a major win. If we are going after only one row of the index, it is
fast. If we are going after the entire table, it is faster to
sequentially scan and psort(). Your big win is with the partial queries,
where you end up doing a full sequential scan or index scan, then an
ORDER BY, while you really only need a few rows from the query, and if
you deal directly with the index, you can prevent many rows from being
processed. It is the ability to skip processing those extra rows that
makes it a big win, not so much the removal of the ORDER BY, though that
helps too.

Your solution really is tailored for this 'partial' query application,
and I think it is a big need for certain applications that can't use
cursors, like web apps. Most other apps have long-time connections to
the database, and are better off with cursors.

I did profiling to improve startup time, because the database
requirements of web apps are different from normal db apps, and we have
to adjust to that.

So, to reiterate, full queries do not benefit as much from the new
code, because sequential scan/psort is faster, or because the index only
retrieves a small number of rows because the qualification of values is
very specific.

Those open-ended 'give me the rows from 100 to 199' queries really need
your modifications.

OK, we have QUERY_LIMIT, and that allows us to throw any query at the
system, and it will return that many of the first rows for the ORDER BY.
No fancy stuff required. If we can get a matching index, we may be able
to remove the requirement of scanning all the rows (with Jan's patch),
and that is a big win. If not, we at least prevent the rows from being
returned to the client.

However, there is the OFFSET issue. This is really a case where the
user wants to _restart_ the query where they left off. That is a
different problem. All of a sudden, we need to evaluate more of the
query, and return a segment from the middle of the result set.

I think we need to decide how to handle such a restart. Do we
re-evaluate the entire query, skipping all the rows up to OFFSET, and
return the number of rows they requested after OFFSET? I would think we
don't want to do that, do we? It would be much easier to code. If it
is a single table, skipping forward has to be done anyway, because we
can't just _jump_ to the 100th entry in the index, unless we pass some
_tid_ to the user, and expect them to pass that back to start the query.
I don't think we want to do that. It is ugly, and the row may have
moved since we started. So, for a single table, adding a QUERY_OFFSET
would do exactly what we need, with Jan's patches.
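
A hypothetical sketch of that single-table restart, modeled on the existing
set query_limit command (a QUERY_OFFSET setting did not exist; the syntax here
is only illustrative):

    set query_limit to '100';
    set query_offset to '200';    -- hypothetical companion setting
    select * from tab order by col1;
    -- with a matching index, the scan could skip 200 index entries
    -- and stop after the next 100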

For a joined query, I think you will have to do the entire _join_ before
returning anything.

You can't just process all the joins up to the OFFSET location, and you
can't just jump to the 100th index location, because you don't know that
the 100th index location produced the 100th result just returned to the
user. You have to process the whole query, because of the join and
because you don't know which data row from each table is going to make which entry
in the final result. If you are really crafty, and the ORDER BY table is
in the outer part of the join loop, you could start processing the table
that is part of the outer loop in _index_ order, because you know that
the rows processed in index order are going to produce the output in
result order. You then could process and throw away the results up to
offset, and generate the needed rows and stop.

The other way of doing it is to specify a query limit based on specific
index entries, so you say you want the query results produced by the first 20
index entries matching the ORDER BY, or entries 100-199, and the query
is limited to using only those entries in the index. In that case,
though, in joins, you could return more or fewer rows in the result
depending on the other tables, and that may be unacceptable. However,
for this case, the advantage is that you don't need to process the rows
from 1 to 99 because you have been told the user only wants rows from
certain index slots. If the user requests rows 50000-50100, this would
be much faster because you don't have to process the 50000 rows before
returning any data. However, I question how often people grab stuff
from the center of large data sets. Seems the QUERY_OFFSET idea may be
easier for users.

I will be commenting on the rest of the optimization postings tomorrow.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#55Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Oleg Bartunov (#49)
Re: [HACKERS] What about LIMIT in SELECT ?

This is a little bit off-topic.
I did some timings with the latest cvs on my real database
(all output redirected to /dev/null); the table contains 8798 records,
31 columns, and the order keys have indices.

1.select count(*) from work_flats;
0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+21minor)pagefaults 0swaps

2.select * from work_flats order by rooms, metro_id;
2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+2799minor)pagefaults 0swaps

3.set query_limit to '150';
SET VARIABLE
select * from work_flats order by rooms, metro_id;
0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

4.begin;
declare tt cursor for
select * from work_flats order by rooms, metro_id;
fetch 150 in tt;
end;
0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

As you can see, the timings for query_limit and the cursor are very similar;
I didn't expect this. So, in principle, an enhanced version of fetch
(with offset) would cover all we need from LIMIT, but query_limit would
still be useful, for example to restrict the load on the server.
Will all the enhancements you discussed go into 6.4?
I'm really interested in testing this stuff because I'm beginning a new project
and everything we discussed here is badly needed.

When you say output to /dev/null, is that on the client or on the backend?
I will assume the client, because of the timings you are reporting.

What is the time of this, which has no ORDER BY?

select * from work_flats;

As far as I can tell, the timing differences you are seeing are based on
the fact that the data is not being transferred to the client. This is
the current sole use of query_limit, and a good one. The web-app need
is to prevent processing of the entire table for just a few rows, and
currently query_limit does not do this, though Jan's patches do this.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#56Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Hannu Krosing (#50)
Re: [HACKERS] What about LIMIT in SELECT ?

More seriously, it is not within the powers of the current regression test
framework to test speed improvements (only the case where a
performance-wise bad implementation will actually crash the backend,
as in the cnfify problem, but AFAIK we don't test for those now).

Actually, I keep informal track of run-times for the entire regression
test, which gives me an indication of how things are going. For much of
the v6.4 development, I was seeing runtimes of around 2:30 on my system
(a bit less, a bit more depending on the phase of the moon).

The runtime is currently up at 3:48 (3:40 with egcs-1.1b and
-mpentiumpro rather than the usual gcc-2.7.1 and -m486). I am hoping
that most of that recent increase in runtime is from the recent
additions of Jan's rules and embedded programming language tests.

Although the times aren't directly comparable with older releases (e.g.
we used to have char2,4,8,16 tests and we now have Jan's new tests)
there has been a distinct downward trend in runtimes.

But you're correct in that these timing tests are fairly insensitive to
major improvements in only one query scenario, since that makes a
relatively small change in the total runtime.

- Tom

#57Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Bruce Momjian (#54)
Re: [HACKERS] What about LIMIT in SELECT ?

Your solution really is tailored for this 'partial' query application,
and I think it is a big need for certain applications that can't use
cursors, like web apps. Most other apps have long-time connections to
the database, and are better off with cursors.

And there are persistent web servers available too, to help work around
this "stateless connection problem"? Let's remember that we are solving
a problem which has few requirements for data integrity, and which is
starting to get out of the realm of Postgres' strengths (almost any
scheme can barf data up to a client if it doesn't care whether it is
repeatable or complete).

Neat stuff though :)

- Tom

#58Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Thomas G. Lockhart (#57)
Re: [HACKERS] What about LIMIT in SELECT ?

I want to confirm something about OFFSET.
(Sorry. There are so many mails for me to check that I can't follow
what (or where) it is now.)

I think OFFSET was the offset for indices when

SELECT *
FROM tab
WHERE col1 < some_unknown_value
ORDER BY col1
USING INDEX tab_idx(452) COUNT 100
or
SET INDEX TO tab_idx START 452 COUNT 100

by Bruce

The definition of OFFSET was changed to mean the offset into the result set
(the set of target rows, without OFFSET) now?

such as
SELECT ... [LIMIT count [, offset]]

by Jan

Thanks.

Yes, the old meaning was the range of the index to scan, the new meaning is
the range of the result to scan.

The old and new meanings are the same if there is an index matching the
ORDER BY. The old meaning was not workable if there was no matching
index, which is bad. In a join situation, the old meaning offered a few
more optimization possibilities, because you could do the OFFSET on the
index rather than computing the join, then the OFFSET. However, the
number of result rows could vary in a join situation, and that is weird
for people. Add to that there is no logical way to know you have
reached the end, because a list of 100 index entries could return zero
rows. That is also bad. And I don't think people are going to be
seeking into the middle of large result sets very often, so the new
meaning of counting the result rows is the best way to go.
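
As a rough illustration of the new meaning, counting joined result rows rather
than index entries (the OFFSET form of FETCH was still only a proposal, and
the names are made up):

    DECLARE c CURSOR FOR
        SELECT a.key, b.val FROM a, b WHERE a.id = b.id ORDER BY a.key;
    FETCH 20 IN c OFFSET 100;    -- skip the first 100 rows of the joined result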

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#59Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Bruce Momjian (#58)
RE: [HACKERS] What about LIMIT in SELECT ?

-----Original Message-----
From: Jan Wieck [mailto:jwieck@debis.com]
Sent: Friday, October 16, 1998 5:21 PM
To: Hiroshi Inoue
Cc: maillist@candle.pha.pa.us; jwieck@debis.com
Subject: Re: [HACKERS] What about LIMIT in SELECT ?

Hiroshi Inoue wrote:

In the following cases I didn't modify my code to use an index scan,
because I couldn't work out how to tell the PostgreSQL optimizer whether
fast response for the first rows is needed, or throughput for processing
sufficiently many target rows.

3. The access plan made by the current PostgreSQL optimizer for a query
with an ORDER BY clause doesn't include an index scan.

I thought of using Tatsuo's QUERY_LIMIT to decide that the response
is what is needed. Is it sufficient but not necessary?
In Oracle the hints FIRST_ROWS and ALL_ROWS are used.

I still think that the QUERY LIMIT should be part of the
parse tree and not thrown in by a magic SET command. If
rewriting or function calls turn the one query sent to the
backend into multiple queries processed internally, how should
this QUERY LIMIT variable know to which of all the queries it
has to be applied? It can really break functions and rewrite
rules if this variable is used on all queries while it is
set.
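
For illustration only, a minimal sketch of that problem (the exact SET
syntax is approximate and the view name is hypothetical):

SET QUERY_LIMIT TO '20';   -- Tatsuo's session-wide limit
SELECT * FROM some_view;   -- a rule on some_view may expand this into
                           -- several internal queries, and the limit
                           -- would apply to all of them

A limit carried in the parse tree applies only to the query it was
written in.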

For your case 3 I think, if there is a QUERY LIMIT in the
parse tree, the (future) optimizer definitely knows that not
all rows will get processed even if there is no qualification
given. So if there is an index that matches the ORDER BY
clause, and it is not a join, and the (future) executor handles
OFFSET in single table index scans fast, it could choose an
index scan for this query too.

When using cursors, in most cases the response to get the first (next)
rows is necessary for me, not the throughput.
How can we tell the PostgreSQL optimizer that the response is necessary?

Hiroshi Inoue
Inoue@tpf.co.jp

#60Noname
jwieck@debis.com
In reply to: Hiroshi Inoue (#59)
Re: [HACKERS] What about LIMIT in SELECT ?

Hiroshi Inoue wrote:

When using cursors, in most cases the response to get the first (next)
rows is necessary for me, not the throughput.
How can we tell the PostgreSQL optimizer that the response is necessary?

With my LIMIT patch, the offset and the row count are part of
the querytree. And if a LIMIT is given, the limitCount element
of the querytree (a Node *) isn't NULL, which it is by default.

When a LIMIT is given, the optimizer could assume that the first
rows are wanted (maybe even if the limit is ALL - but I have
to think about this some more). And this assumption might let
it decide to use an index to resolve an ORDER BY even if no
qualification was given.
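
For illustration, a minimal sketch of the non-cursor case (an index on
mytab.a is assumed):

SELECT * FROM mytab ORDER BY a LIMIT 20;

With the limit in the parse tree, the (future) optimizer can see that only
20 rows are wanted and walk the index on a instead of seq-scanning and
sorting the whole table.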

Telling the optimizer that the first rows are wanted in a cursor
operation would read

DECLARE CURSOR c FOR SELECT * FROM mytab ORDER BY a LIMIT ALL;

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#61Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Bruce Momjian (#54)
RE: [HACKERS] What about LIMIT in SELECT ?

I know the good part of the posting is coming.

Last with the btree index on key and the patched backend.
This time the plan is a plain IndexScan because the ORDER BY
clause exactly matches the sort order of the chosen index.
S1 needs 13 seconds and S2 less than 0.2! This dramatic
speedup comes from the fact, that this time the index scan is
the toplevel executor node and the executor run is stopped
after 20 tuples have been selected.

OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
of the index was taking an extra 3 seconds, which is 23%. That is a
lot more than I thought for the psort code, and shows we could gain a
lot by removing unneeded sorts from queries that are already using
matching indexes.

Just for clarity, added to TODO. I think everyone is clear on this one,
and its magnitude is a surprise to me:

* Prevent psort() usage when query already using index matching ORDER BY

I can't find the reference to descending order cases except in my posting.
If we use an index scan to remove sorts in those cases, backward positioning
and scanning are necessary.

Analysis of the above timings:

If there is an ORDER BY clause, using an index scan is the
clever way if the indexqual dramatically reduces the
amount of data selected and sorted. I think this is the
normal case (who really selects nearly all rows from a 5M row
table?). So choosing the index path is correct. This will
hurt if someone really selects most of the rows and the index
scan jumps over the disc. But here the programmer should use
an unqualified query to perform a seqscan and do the
qualification in the frontend application.

Fortunately, the optimizer already does the index selection for us, and
guesses pretty well if the index or sequential scan is better. Once we
implement the above removal of psort(), we will have to change the
timings because now you have to compare index scan against sequential
scan AND psort(), because in the index scan situation, you don't need
the psort(), assuming the ORDER BY matches the index exactly.
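
In other words, the comparison the planner would then be making is roughly

    cost(index scan delivering already-sorted rows)
        vs.  cost(sequential scan) + cost(psort)

rather than comparing the two scan methods with a psort added on top of
both.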

Let t be a table with 2 indices, index1(key1,key2), index2(key1,key3).
i.e. key1 is common to index1 and index2.

And for the query
select * from t where key1>....;

If the PostgreSQL optimizer chooses [ index scan on index1 ] we can't remove
sorts from the following query.
select * from t where key1>... order by key1,key3;

Similarly, if [ index scan on index2 ] is chosen we can't remove sorts
from the following query.
select * from t where key1>... order by key1,key2;

But in both cases a (clever) optimizer can choose another index for the scan.
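
For illustration, a minimal sketch of this scenario (column types and the
constant in the qualification are hypothetical):

CREATE TABLE t (key1 int4, key2 int4, key3 int4);
CREATE INDEX index1 ON t (key1, key2);
CREATE INDEX index2 ON t (key1, key3);

-- If index2 is used for the scan, the sort for ORDER BY key1, key2 cannot
-- be removed; choosing index1 instead lets the index order satisfy the
-- ORDER BY directly.
SELECT * FROM t WHERE key1 > 100 ORDER BY key1, key2;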

Thanks.

Hiroshi Inoue
Inoue@tpf.co.jp

#62Noname
jwieck@debis.com
In reply to: Hiroshi Inoue (#61)
Re: [HACKERS] What about LIMIT in SELECT ?

Hiroshi Inoue wrote:

* Prevent psort() usage when query already using index matching ORDER BY

I can't find the reference to descending order cases except in my posting.
If we use an index scan to remove sorts in those cases, backward positioning
and scanning are necessary.

I think it's only meant as a reminder that the optimizer
needs some optimization.

That topic, and the LIMIT stuff too I think, is past 6.4 work
and may go into a 6.4.1 performance release. So when we are
after 6.4, we have enough time to work out a real solution,
instead of just throwing in a patch as a quick shot.

What we two did were steps in the same direction. Yours
covers more situations, but after all, if multiple people have
the same idea there is a good chance that it is the right
thing to do.

Let t be a table with 2 indices, index1(key1,key2), index2(key1,key3).
i.e. key1 is common to index1 and index2.

And for the query
select * from t where key1>....;

If the PostgreSQL optimizer chooses [ index scan on index1 ] we can't remove
sorts from the following query.
select * from t where key1>... order by key1,key3;

Similarly, if [ index scan on index2 ] is chosen we can't remove sorts
from the following query.
select * from t where key1>... order by key1,key2;

But in both cases a (clever) optimizer can choose another index for the scan.

Right. As I remember, your solution does basically the same
as mine. It does not change the optimizer's decision about
the index or whether an index is used at all. So I assume they
hook into the same position, where the sort node is added
depending on the ORDER BY clause. And that is at the very end
of the optimizer.

What you describe above requires changes in the upper levels of
the optimizer. Doing that is far beyond my knowledge of
the optimizer. And some of your earlier statements make me
think you aren't familiar enough with it either. We need at
least help from others to do it well.

I don't want to dive that deep into the optimizer. There was
far too long a time where the rule system was broken and got
out of sync with the parser/optimizer capabilities. I fixed
many things in it for 6.4. My first priority now is not to
let such a situation come up again.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#63Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Hiroshi Inoue (#61)
Re: [HACKERS] What about LIMIT in SELECT ?

I know the good part of the posting is coming.

Last with the btree index on key and the patched backend.
This time the plan is a plain IndexScan because the ORDER BY
clause exactly matches the sort order of the chosen index.
S1 needs 13 seconds and S2 less than 0.2! This dramatic
speedup comes from the fact, that this time the index scan is
the toplevel executor node and the executor run is stopped
after 20 tuples have been selected.

OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
of the index was taking an extra 3 seconds, which is 23%. That is a
lot more than I thought for the psort code, and shows we could gain a
lot by removing unneeded sorts from queries that are already using
matching indexes.

Just for clarity, added to TODO. I think everyone is clear on this one,
and its magnitude is a surprise to me:

* Prevent psort() usage when query already using index matching ORDER BY

In a multi-column ORDER BY, the direction of the sorts will have to be
identical too. That is assumed, I think. If all are descending, I
think we can traverse the index in reverse order, but can we do that?
I am not sure; if we can't, descending would fail and require a
psort.
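
For illustration, a minimal sketch of the descending case (reusing the t,
key1, key2 names from Hiroshi's two-index example; backward traversal is
the assumption under discussion):

-- All sort keys descending: in principle the index on (key1, key2) could
-- be walked backward instead of doing a separate psort.
SELECT * FROM t ORDER BY key1 DESC, key2 DESC;

-- Mixed directions cannot be satisfied by one forward or backward
-- traversal of that index, so a sort would still be needed.
SELECT * FROM t ORDER BY key1 DESC, key2;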

I can't find the reference to descending order cases except in my posting.
If we use an index scan to remove sorts in those cases, backward positioning
and scanning are necessary.

Analysis of the above timings:

If there is an ORDER BY clause, using an index scan is the
clever way if the indexqual dramatically reduces the
amount of data selected and sorted. I think this is the
normal case (who really selects nearly all rows from a 5M row
table?). So choosing the index path is correct. This will
hurt if someone really selects most of the rows and the index
scan jumps over the disc. But here the programmer should use
an unqualified query to perform a seqscan and do the
qualification in the frontend application.

Fortunately, the optimizer already does the index selection for us, and
guesses pretty well if the index or sequential scan is better. Once we
implement the above removal of psort(), we will have to change the
timings because now you have to compare index scan against sequential
scan AND psort(), because in the index scan situation, you don't need
the psort(), assuming the ORDER BY matches the index exactly.

Let t be a table with 2 indices, index1(key1,key2), index2(key1,key3).
i.e. key1 is common to index1 and index2.

And for the query
select * from t where key1>....;

If the PostgreSQL optimizer chooses [ index scan on index1 ] we can't remove
sorts from the following query.
select * from t where key1>... order by key1,key3;

Similarly, if [ index scan on index2 ] is chosen we can't remove sorts
from the following query.
select * from t where key1>... order by key1,key2;

But in both cases a (clever) optimizer can choose another index for the scan.

Yes, the optimizer is going to have to be smart, looking at the ORDER
BY and nudging the plan to favor a certain index. This is also true in
a join, where we will want to use an index in cases where we would
normally not use it, and to prefer a certain index over others.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#64Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#62)
Re: [HACKERS] What about LIMIT in SELECT ?

Right. As I remember, your solution does basically the same
as mine. It does not change the optimizer's decision about
the index or whether an index is used at all. So I assume they
hook into the same position, where the sort node is added
depending on the ORDER BY clause. And that is at the very end
of the optimizer.

What you describe above requires changes in the upper levels of
the optimizer. Doing that is far beyond my knowledge of
the optimizer. And some of your earlier statements make me
think you aren't familiar enough with it either. We need at
least help from others to do it well.

I don't want to dive that deep into the optimizer. There was
far too long a time where the rule system was broken and got
out of sync with the parser/optimizer capabilities. I fixed
many things in it for 6.4. My first priority now is not to
let such a situation come up again.

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#65Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Bruce Momjian (#64)
Re: [HACKERS] What about LIMIT in SELECT ?

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

That's wrong, sorry.

The limit thing as I implemented it adds 2 new variables to
the Query structure. Rewrite rules are stored as querytrees
and in the existing pg_rewrite entries those fields would be missing.

Oh, sorry. I forgot. That could be tough.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#66Noname
jwieck@debis.com
In reply to: Bruce Momjian (#64)
Re: [HACKERS] What about LIMIT in SELECT ?

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

That's wrong, sorry.

The limit thing as I implemented it adds 2 new variables to
the Query structure. Rewrite rules are stored as querytrees
and in the existing pg_rewrite entries those fields would be missing.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#67Noname
jwieck@debis.com
In reply to: Bruce Momjian (#65)
Re: [HACKERS] What about LIMIT in SELECT ?

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

That's wrong, sorry.

The limit thing as I implemented it adds 2 new variables to
the Query structure. Rewrite rules are stored as querytrees
and in the existing pg_rewrite entries those fields would be missing.

Oh, sorry. I forgot. That could be tough.

But it wouldn't hurt to add them now to have them in
place. The required out-, read- and copyfuncs are in
my patch too. This would avoid a dump/reload when we
later add the real LIMIT functionality. And it does
not change anything now.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#68Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#66)
Re: [HACKERS] What about LIMIT in SELECT ?

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

That's wrong, sorry.

The limit thing as I implemented it adds 2 new variables to
the Query structure. Rewrite rules are stored as querytrees
and in the existing pg_rewrite entries those fields would be missing.

Not sure how to address this. Perhaps we could write a query as part of
the upgrade that adds these to the existing rules, or we could require
an initdb of all beta users.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#69Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#67)
Re: [HACKERS] What about LIMIT in SELECT ?

I agree. Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

That's wrong, sorry.

The limit thing as I implemented it adds 2 new variables to
the Query structure. Rewrite rules are stored as querytrees
and in the existing pg_rewrite entries those fields would be missing.

Oh, sorry. I forgot. That could be tough.

But it wouldn't hurt to add them now to have them in
place. The required out-, read- and copyfuncs are in
my patch too. This would avoid a dump/reload when we
later add the real LIMIT functionality. And it does
not change anything now.

Jan, we found that I am having to require an initdb for the INET/CIDR
type, so if you want stuff to change the views/rules for the limit
addition post 6.4, please send them in and I will apply them.

You clearly have the syntax down, so I think you should go ahead.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#70Noname
jwieck@debis.com
In reply to: Bruce Momjian (#69)
Re: [HACKERS] What about LIMIT in SELECT ?

Jan, we found that I am having to require an initdb for the INET/CIDR
type, so if you want stuff to change the views/rules for the limit
addition post 6.4, please send them in and I will apply them.

You clearly have the syntax down, so I think you should go ahead.

This is the part that will enable post 6.4 add of the
LIMIT stuff without initdb.

Regression tested.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

diff -cr src.orig/backend/nodes/copyfuncs.c src/backend/nodes/copyfuncs.c
*** src.orig/backend/nodes/copyfuncs.c	Fri Oct 16 11:53:40 1998
--- src/backend/nodes/copyfuncs.c	Fri Oct 16 13:32:35 1998
***************
*** 1578,1583 ****
--- 1578,1586 ----
  		newnode->unionClause = temp_list;
  	}
+ 	Node_Copy(from, newnode, limitOffset);
+ 	Node_Copy(from, newnode, limitCount);
+ 
  	return newnode;
  }
diff -cr src.orig/backend/nodes/outfuncs.c src/backend/nodes/outfuncs.c
*** src.orig/backend/nodes/outfuncs.c	Fri Oct 16 11:53:40 1998
--- src/backend/nodes/outfuncs.c	Fri Oct 16 13:30:50 1998
***************
*** 259,264 ****
--- 259,268 ----
  	appendStringInfo(str, (node->hasSubLinks ? "true" : "false"));
  	appendStringInfo(str, " :unionClause ");
  	_outNode(str, node->unionClause);
+ 	appendStringInfo(str, " :limitOffset ");
+ 	_outNode(str, node->limitOffset);
+ 	appendStringInfo(str, " :limitCount ");
+ 	_outNode(str, node->limitCount);
  }
  static void
diff -cr src.orig/backend/nodes/readfuncs.c src/backend/nodes/readfuncs.c
*** src.orig/backend/nodes/readfuncs.c	Fri Oct 16 11:53:40 1998
--- src/backend/nodes/readfuncs.c	Fri Oct 16 13:31:43 1998
***************
*** 163,168 ****
--- 163,174 ----
  	token = lsptok(NULL, &length);		/* skip :unionClause */
  	local_node->unionClause = nodeRead(true);
+ 	token = lsptok(NULL, &length);		/* skip :limitOffset */
+ 	local_node->limitOffset = nodeRead(true);
+ 
+ 	token = lsptok(NULL, &length);		/* skip :limitCount */
+ 	local_node->limitCount = nodeRead(true);
+ 
  	return local_node;
  }
diff -cr src.orig/include/nodes/parsenodes.h src/include/nodes/parsenodes.h
*** src.orig/include/nodes/parsenodes.h	Fri Oct 16 11:53:58 1998
--- src/include/nodes/parsenodes.h	Fri Oct 16 13:35:32 1998
***************
*** 60,65 ****
--- 60,67 ----
  	List	   *unionClause;	/* unions are linked under the previous
  								 * query */
+ 	Node	   *limitOffset;	/* # of result tuples to skip */
+ 	Node	   *limitCount;		/* # of result tuples to return */
  	/* internal to planner */
  	List	   *base_rel_list;	/* base relation list */
***************
*** 639,644 ****
--- 641,648 ----
  	char	   *portalname;		/* the portal (cursor) to create */
  	bool		binary;			/* a binary (internal) portal? */
  	bool		unionall;		/* union without unique sort */
+ 	Node	   *limitOffset;	/* # of result tuples to skip */
+ 	Node	   *limitCount;		/* # of result tuples to return */
  } SelectStmt;
#71Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#70)
Re: [HACKERS] What about LIMIT in SELECT ?

Jan, we found that I am having to require an initdb for the INET/CIDR
type, so if you want stuff to change the views/rules for the limit
addition post 6.4, please send them in and I will apply them.

You clearly have the syntax down, so I think you should go ahead.

This is the part that will enable post 6.4 add of the
LIMIT stuff without initdb.

Regression tested.

Applied.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#72Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Noname (#26)
elephant picture

Jan, any luck in getting the older elephant picture where the elephant's
color was more natural and not so gray?

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#73Vince Vielhaber
vev@michvhf.com
In reply to: Bruce Momjian (#72)
RE: [DOCS] elephant picture

On 14-Jun-99 Bruce Momjian wrote:

Jan, any luck in getting the older elephant picture where the elephant's
color was more natural and not so gray?

I wouldn't worry about it. The new one has a blue tint to it :)

Vince.
--
==========================================================================
Vince Vielhaber -- KA8CSH email: vev@michvhf.com flame-mail: /dev/null
# include <std/disclaimers.h> TEAM-OS2
Online Campground Directory http://www.camping-usa.com
Online Giftshop Superstore http://www.cloudninegifts.com
==========================================================================

#74Noname
wieck@debis.com
In reply to: Vince Vielhaber (#73)
Re: [DOCS] elephant picture

On 14-Jun-99 Bruce Momjian wrote:

Jan, any luck in getting the older elephant picture where the elephant's
color was more natural and not so gray?

I wouldn't worry about it. The new one has a blue tint to it :)

Like an HP49G :-)

I like it too.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#========================================= wieck@debis.com (Jan Wieck) #