Why doesn't count(*) use an index?
I use PostgreSQL 9.0.3, and I know that PostgreSQL tries to use the fields
in indexes instead of the original table when possible.
But when I run
SELECT COUNT(id) FROM tab
or
SELECT COUNT(*) FROM tab
where "id" is the PRIMARY KEY, and there are other indexes on the table, I
get an execution plan that doesn't use any indexes but sequentially scans
the original table:
"Aggregate (cost=38685.98..38685.99 rows=1 width=0)"
" -> Seq Scan on tab (cost=0.00..36372.38 rows=925438 width=0)"
Why is it so?
---
Paul
On 03/03/2011 05:29 AM, obamabarak@e1.ru wrote:
I use PostgreSQL 9.0.3, and I know that PostgreSQL tries to use the fields in
indexes instead of the original table when possible. But when I run
SELECT COUNT(id) FROM tab
or
SELECT COUNT(*) FROM tab
where "id" is the PRIMARY KEY, and there are other indexes on the table, I get an
execution plan that doesn't use any indexes but sequentially scans the
original table:
"Aggregate (cost=38685.98..38685.99 rows=1 width=0)"
" -> Seq Scan on tab (cost=0.00..36372.38 rows=925438 width=0)"
Why is it so?
See here:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
---
Paul
--
Adrian Klaver
adrian.klaver@gmail.com
On 03/03/2011 13:29, obamabarak@e1.ru wrote:
I use PostgreSQL 9.0.3, and I know that PostgreSQL tries to use the fields in
indexes instead of the original table when possible. But when I run
SELECT COUNT(id) FROM tab
or
SELECT COUNT(*) FROM tab
where "id" is the PRIMARY KEY, and there are other indexes on the table, I get an
execution plan that doesn't use any indexes but sequentially scans the
original table.
Because when you do SELECT COUNT(*) without any WHERE clause,
PostgreSQL has to scan through *all* the rows in the table in order to
count them.
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
rod@iol.ie
On Sat, Mar 5, 2011 at 8:02 PM, Raymond O'Donnell <rod@iol.ie> wrote:
On 03/03/2011 13:29, obamabarak@e1.ru wrote:
I use PostgreSQL 9.0.3, and I know that PostgreSQL tries to use the fields in
indexes instead of the original table when possible. But when I run
SELECT COUNT(id) FROM tab
or
SELECT COUNT(*) FROM tab
where "id" is the PRIMARY KEY, and there are other indexes on the table, I get an
execution plan that doesn't use any indexes but sequentially scans the
original table.
Because when you do SELECT COUNT(*) without any WHERE clause,
PostgreSQL has to scan through *all* the rows in the table in order to count
them.
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
rod@iol.ie
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Is it possible, in theory, to efficiently perform the count against the
primary or unique indexes' underlying data structures, regardless of
whether there is a WHERE clause filtering on values from such an
index?
Allan.
On 03/05/11 2:05 PM, Allan Kamau wrote:
Is it possible, in theory, to efficiently perform the count against the
primary or unique indexes' underlying data structures, regardless of
whether there is a WHERE clause filtering on values from such an
index?
Indexes are not exact, due to possibly constant changes in the current
number of visible elements in the relation.
Other well-known DBMSs do have this capability, because they place deleted
or updated records in a separate table or file (please correct me if I'm
wrong). But this has other, greater performance disadvantages. The count(*)
problem is a bit of a publicity problem rather than a real performance
problem (I've been told). People are aware of the fact that count(*) is
faster in other DBMSs, but "we" don't want superficial things like
optimizing count(*) for publicity to ruin other, more important things for us,
performance-wise.
On Sat, Mar 5, 2011 at 11:46 PM, John R Pierce <pierce@hogranch.com> wrote:
On 03/05/11 2:05 PM, Allan Kamau wrote:
Is it possible, in theory, to efficiently perform the count against the
primary or unique indexes' underlying data structures, regardless of
whether there is a WHERE clause filtering on values from such an
index?
Indexes are not exact, due to possibly constant changes in the current
number of visible elements in the relation.
--
"Patriotism is the conviction that your country is superior to all others
because you were born in it." -- George Bernard Shaw
On Sun, Mar 6, 2011 at 1:46 AM, John R Pierce <pierce@hogranch.com> wrote:
On 03/05/11 2:05 PM, Allan Kamau wrote:
Is it possible, in theory, to efficiently perform the count against the
primary or unique indexes' underlying data structures, regardless of
whether there is a WHERE clause filtering on values from such an
index?
Indexes are not exact, due to possibly constant changes in the current
number of visible elements in the relation.
I would assume the primary key or unique indexes are the cornerstone
of each insertion and deletion.
During the start of an insert of a tuple, the primary and unique (not
null) indexes are first consulted. The same goes for a delete, as the
relation should still allow insertion of a tuple whose value in the
primary index matches the value of a just-deleted tuple.
If this is true, it seems that the primary key, and perhaps other unique
indexes, do indeed contain exact details of the uniqueness of the
persisted tuples of a given relation at any given time. Even though
some other field values of the relation are being updated, the number
of tuples may not change without the involvement of the primary or
unique indexes.
Or am I missing a crucial point?
Allan.
On Sun, Mar 06, 2011 at 11:03:23AM +0300, Allan Kamau wrote:
I would assume the primary key or unique indexes are the cornerstone
of each insertion and deletion.
<snip>
Or am I missing a crucial point?
The real issue is that you can have four programs all doing count(*)
and all getting different answers. How? Because what you see depends
on which snapshot of the database you're looking at. And the
information about which snapshot can see which tuple is stored in the
table. An index does not have enough information to work this out.
The DBs that don't have this issue are usually like MyISAM: no
transactions, so no issues about different snapshots. And crappy
concurrency. As soon as you go to more advanced systems, the easy option
falls away. For example:
http://www.mysqlperformanceblog.com/2006/12/01/count-for-innodb-tables/
If it's really, really important, there are ways you can use trigger
tables and summary views to achieve the results you want. Except it's
expensive, and when people are told that, all of a sudden the count(*)
performance isn't so important any more. :)
The other option is visibility data in the index. That doubles the size of
your indexes, though.
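For concreteness, the trigger-table approach can be sketched roughly as below. This is an illustrative sketch, not something from the thread: the table, function, and trigger names are invented, and because it keeps a single counter row, concurrent writers will serialize on it.

```sql
-- Invented summary table holding the current row count of "tab".
CREATE TABLE tab_rowcount (n bigint NOT NULL);
INSERT INTO tab_rowcount VALUES ((SELECT count(*) FROM tab));

-- Trigger function: bump the counter on insert, decrement on delete.
CREATE FUNCTION tab_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE tab_rowcount SET n = n + 1;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE tab_rowcount SET n = n - 1;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tab_count
    AFTER INSERT OR DELETE ON tab
    FOR EACH ROW EXECUTE PROCEDURE tab_count_trig();

-- Now "SELECT n FROM tab_rowcount" stands in for SELECT count(*) FROM tab.
```

This is exactly the "expensive" trade-off mentioned above: every insert and delete now pays for an extra UPDATE, and all writers contend on the one counter row.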
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Patriotism is when love of your own people comes first; nationalism,
when hate for people other than your own comes first.
- Charles de Gaulle
On 6 Mar 2011, at 9:03, Allan Kamau wrote:
If this is true it seems that the primary key and perhaps other unique
indexes do indeed contain exact details of the uniqueness of the
persisted tuples of a given relation at any given time.
That is true within a single transaction, but indexes contain information about ALL active transactions. Because of that the contents of the indexes are not guaranteed to be unique and it's possible they contain references to rows that are not visible to the current transaction.
Or am I missing a crucial point.
Yup, you're missing the effects of concurrency.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.
On Sun, Mar 6, 2011 at 5:41 AM, Martijn van Oosterhout <kleptog@svana.org> wrote:
If it's really, really important, there are ways you can use trigger
tables and summary views to achieve the results you want. Except it's
expensive, and when people are told that, all of a sudden the count(*)
performance isn't so important any more. :)
That's often perfectly fine, with read-heavy, single-writer workloads.
I definitely wish there was a way to create indexes to track counters on
various types of queries, even if it eliminates write concurrency on
affected writes. Doing it by hand is a pain.
--
Glenn Maynard
On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Sun, Mar 6, 2011 at 5:41 AM, Martijn van Oosterhout <kleptog@svana.org> wrote:
If it's really, really important, there are ways you can use trigger
tables and summary views to achieve the results you want. Except it's
expensive, and when people are told that, all of a sudden the count(*)
performance isn't so important any more. :)
That's often perfectly fine, with read-heavy, single-writer workloads.
I definitely wish there was a way to create indexes to track counters on
various types of queries, even if it eliminates write concurrency on
affected writes. Doing it by hand is a pain.
Beyond what the stats system does, you mean?
If you aren't interested in a high-concurrency count, it really isn't all
that difficult -- just push table-modifying queries into a procedure
and grab the rows affected. A row-level trigger can also do it, but
performance will suck unless you are already doing all row-by-row
processing (in which case your performance already sucks).
The way to do this with high concurrency is like the above, but
insert (not update) the rows affected into a table-modification log that
is rolled up on a time interval or on user demand, so you don't serialize
access with every statement. Or you dispense with all the fuss and grab
a free'n'easy approximate count from the stats system, which is really
what people want 99% of the time.
In the old days this was a much more complicated problem, because to eke
every bit of OLTP performance out of the server you had to disable the
stats collector. Today you don't, so let it do your work for you.
merlin
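For reference, the approximate count from the stats system that Merlin mentions can be read with ordinary catalog queries; 'tab' here stands in for your table name:

```sql
-- Planner's row estimate for the table, refreshed by ANALYZE/autovacuum:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'tab';

-- Stats collector's running count of live tuples:
SELECT n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'tab';
```

Both are estimates, not transactionally exact counts, but they return instantly regardless of table size.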
On Sun, Mar 6, 2011 at 3:41 AM, Martijn van Oosterhout
<kleptog@svana.org> wrote:
The other option is visibility data in the index. Doubles the size of
your indexes though.
It also requires that both the table and the index be locked while you
update both, so you don't get race conditions; so it has a real performance
impact there as well.
On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard <glenn@zewt.org> wrote:
That's often perfectly fine, with read-heavy, single-writer workloads.
I definitely wish there was a way to create indexes to track counters on
various types of queries, even if it eliminates write concurrency on
affected writes. Doing it by hand is a pain.
Beyond what the stats system does, you mean?
The stats system only helps for the most basic case--counting the number of
rows in a table. In my experience that's not very common; most of the time
it's counting total results from some more interesting query, eg. for
pagination. In my particular case, I'm caching results for SELECT COUNT(*),
expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
expressions).
If you aren't interested in high concurrency count it really isn't all
that difficult -- just push table modifying queries into a procedure
and grab rows affected. Row level trigger can also do it but
performance will suck unless you are already doing all row by row
processing (in which case your performance already sucks).
Row triggers are fast enough for my case--it's a read-heavy workload, so
it's okay to take a bit more time inserting new data. It's easier to ensure
consistency with row triggers, since they can be tested independently of
anything modifying the table.
--
Glenn Maynard
On Mon, Mar 7, 2011 at 3:16 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard <glenn@zewt.org> wrote:
That's often perfectly fine, with read-heavy, single-writer workloads.
I definitely wish there was a way to create indexes to track counters on
various types of queries, even if it eliminates write concurrency on
affected writes. Doing it by hand is a pain.
Beyond what the stats system does, you mean?
The stats system only helps for the most basic case--counting the number of
rows in a table. In my experience that's not very common; most of the time
it's counting total results from some more interesting query, eg. for
pagination. In my particular case, I'm caching results for SELECT COUNT(*),
expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
expressions).
SELECT COUNT(*) FROM table WHERE expr;
will use an index (assuming expr is optimizable and worthwhile to
optimize). Your case might be interesting for cache purposes if expr2
is expensive, but it has nothing to do with postgres index usage via
count(*). mysql/myisam needs to scan as well in this case -- it
can't magically 'look up' the value as it can for the unfiltered
(very special) case... it only differs from pg in that it can skip
the heap visibility check because all records are known good (and pg is
moving towards optimizing this case in mostly read-only workloads!)
merlin
2011/3/8 Merlin Moncure <mmoncure@gmail.com>
On Mon, Mar 7, 2011 at 3:16 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure <mmoncure@gmail.com>
wrote:
On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard <glenn@zewt.org> wrote:
That's often perfectly fine, with read-heavy, single-writer workloads.
I definitely wish there was a way to create indexes to track counters
on various types of queries, even if it eliminates write concurrency on
affected writes. Doing it by hand is a pain.
Beyond what the stats system does, you mean?
The stats system only helps for the most basic case--counting the number
of rows in a table. In my experience that's not very common; most of the
time it's counting total results from some more interesting query, eg. for
pagination. In my particular case, I'm caching results for SELECT COUNT(*),
expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
expressions).
SELECT COUNT(*) FROM table WHERE expr;
will use an index (assuming expr is optimizable and worthwhile to
optimize). Your case might be interesting for cache purposes if expr2
is expensive, but it has nothing to do with postgres index usage via
count(*). mysql/myisam needs to scan as well in this case -- it
can't magically 'look up' the value as it can for the unfiltered
(very special) case...
Exactly!
it only differs from pg in that it can skip
the heap visibility check because all records are known good (and pg is
moving towards optimizing this case in mostly read-only workloads!)
merlin
--
// Dmitriy.
On Mon, Mar 7, 2011 at 4:35 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
SELECT COUNT(*) FROM table WHERE expr;
will use an index (assuming expr is optimizable and worthwhile to
optimize). Your case might be interesting for cache purposes if expr2
is expensive, but it has nothing to do with postgres index usage via
count(*). mysql/myisam needs to scan as well in this case -- it
can't magically 'look up' the value as it can for the unfiltered
(very special) case... it only differs from pg in that it can skip
the heap visibility check because all records are known good (and pg is
moving towards optimizing this case in mostly read-only workloads!)
It'll do an index scan, but it's still a scan--linear time over the size of
the set. That's too expensive for many cases.
My particular case is something like this:
SELECT COUNT(*), event_time::date FROM events
WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
AND user=50
GROUP BY event_time::date;
An index on "events(user, event_time::date)" could optimize this, eg.
effectively maintaining a count of matching rows for each (user, day)
tuple--which is ultimately what I'm doing manually with triggers. Of
course, it would have a significant cost, in some combination of complexity,
index size and write concurrency, and couldn't be the default behavior for
an index.
--
Glenn Maynard
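A rough sketch of the "doing it manually with triggers" approach for per-(user, day) counts might look like the following. All names are invented; the user column is called "usr" because "user" is a reserved word in PostgreSQL; only INSERT is handled; and under concurrent inserts of a brand-new (user, day) pair the UPDATE-then-INSERT needs a retry loop or locking, which is omitted here:

```sql
-- Invented summary table: one counter per (user, day).
CREATE TABLE event_counts (
    usr       integer NOT NULL,
    event_day date    NOT NULL,
    n         bigint  NOT NULL DEFAULT 0,
    PRIMARY KEY (usr, event_day)
);

CREATE FUNCTION events_count_trig() RETURNS trigger AS $$
BEGIN
    -- 9.0 has no built-in upsert, so try UPDATE first, then INSERT.
    UPDATE event_counts SET n = n + 1
        WHERE usr = NEW.usr AND event_day = NEW.event_time::date;
    IF NOT FOUND THEN
        INSERT INTO event_counts (usr, event_day, n)
            VALUES (NEW.usr, NEW.event_time::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_count
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_count_trig();
```

With this in place, the monthly report becomes a cheap range scan over the small summary table instead of counting the events themselves.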
On Mon, Mar 7, 2011 at 4:26 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Mon, Mar 7, 2011 at 4:35 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
SELECT COUNT(*) FROM table WHERE expr;
will use an index (assuming expr is optimizable and worthwhile to
optimize). Your case might be interesting for cache purposes if expr2
is expensive, but it has nothing to do with postgres index usage via
count(*). mysql/myisam needs to scan as well in this case -- it
can't magically 'look up' the value as it can for the unfiltered
(very special) case... it only differs from pg in that it can skip
the heap visibility check because all records are known good (and pg is
moving towards optimizing this case in mostly read-only workloads!)
It'll do an index scan, but it's still a scan--linear time over the size of
the set. That's too expensive for many cases.
My particular case is something like this:
SELECT COUNT(*), event_time::date FROM events
WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
AND user=50
GROUP BY event_time::date;
An index on "events(user, event_time::date)" could optimize this, eg.
effectively maintaining a count of matching rows for each (user, day)
tuple--which is ultimately what I'm doing manually with triggers. Of
course, it would have a significant cost, in some combination of complexity,
index size and write concurrency, and couldn't be the default behavior for
an index.
create index on events(user, (event_time::date));
select count(*) from events
where
(user, event_time::date) >= (50, '2011-01-01')
and (user, event_time::date) < (50, '2011-02-01')
group by event_time::date;
Note that the create index above will only work if event_time is of type
timestamp (not timestamptz), because of the time-zone dependency. Any ad
hoc caching would also have the same problem: if users from different
time zones were hitting the cache, they could get the wrong answer.
merlin
On Mon, Mar 7, 2011 at 5:58 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
SELECT COUNT(*), event_time::date FROM events
WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
AND user=50
GROUP BY event_time::date;
select count(*) from events
where
(user, event_time::date) >= (50, '2011-01-01')
and (user, event_time::date) < (50, '2011-02-01')
group by event_time::date;
PostgreSQL is smart enough to know "x = 1 and y = 2" is the same as "(x, y)
= (1, 2)". Either way you get an index scan at best--better than a seq
scan, to be sure, but still expensive when you have a lot of data per (user,
month) and you're doing a lot of these queries.
Note the create index will only work above if event_time is of
timestamp (not timestamptz) because of time zone dependency. Any ad
hoc caching would also have the same problem, if users from different
time zones were hitting the cache -- they could get the wrong answer.
It's designed with this in mind.
--
Glenn Maynard
On 7 Mar 2011, at 22:16, Glenn Maynard wrote:
The stats system only helps for the most basic case--counting the number of rows in a table. In my experience that's not very common; most of the time it's counting total results from some more interesting query, eg. for pagination. In my particular case, I'm caching results for SELECT COUNT(*), expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of expressions).
It's not uncommon to track your own statistics on your data. Often this doesn't go beyond tracking COUNT(*) on various combinations of conditions.
If your data approaches a normal distribution though, I think you can go a step further by tracking the distribution of values in one column for a given value in another.
I'm not a mathematician, but I'm pretty sure you could do something like this (with the example given down-thread) to describe the distributions of values in your main table:
CREATE TABLE user_event_time (
user integer UNIQUE REFERENCES events (user),
count integer,
min date,
max date,
avg date,
stddev date
);
CREATE TABLE event_time_user (
event_time date UNIQUE REFERENCES events (event_time),
count integer,
min integer,
max integer,
avg integer,
stddev integer
);
Now, given a user ID, the first table gives you the chance of a specific event_time occurring - which with a normal distribution should be very close to the percentage of the total number of rows that match the set. Say you have 1000 rows and there's 23% chance that there's an event involving user 50 at '2011-01-01', then that means 230 rows match those conditions.
You can do the same query the other way around, based on the event time and the distribution of users at that date.
Combining both will give you better accuracy.
Whether this is practical to do is another question entirely, I just thought of this while reading this thread ;)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.
-----Original Message-----
From: Glenn Maynard [mailto:glenn@zewt.org]
Sent: Monday, March 07, 2011 5:27 PM
To: pgsql-general@postgresql.org
Subject: Re: Why doesn't count(*) use an index?
An index on "events(user, event_time::date)" could optimize
this, eg. effectively maintaining a count of matching rows
for each (user, day) tuple--which is ultimately what I'm
doing manually with triggers. Of course, it would have a
significant cost, in some combination of complexity, index
size and write concurrency, and couldn't be the default
behavior for an index.
--
Glenn Maynard
Indexes don't "maintain counts"; indexes maintain pointers to the table
records.
What you need is a "materialized view" storing aggregates.
And it looks like you already have it with your triggers.
Regards,
Igor Neyman