Slow COUNT

Started by Poul Møller Hansen · over 20 years ago · 15 messages · general
#1 Poul Møller Hansen
freebsd@pbnet.dk

I can see it has been discussed before why COUNT(*) tends to be slow on
PostgreSQL compared with, e.g., MySQL.
As far as I understood, it has something to do with missing numbering of
the rows in the indexes, and that there are plenty of reasons not to
implement that in PostgreSQL, though I haven't found an explanation.
However, I can imagine it would have an impact on inserts.

My question is: which statements can I use to count the rows faster?
32 secs compared to 10 ms!

Thanks,
Poul

db=# explain analyze select count(*) from my.table;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=60008.28..60008.28 rows=1 width=0) (actual
time=32028.469..32028.474 rows=1 loops=1)
-> Seq Scan on table (cost=0.00..54962.62 rows=2018262 width=0)
(actual time=14.492..19592.014 rows=2018252 loops=1)
Total runtime: 32028.750 ms
(3 rows)

db=# explain analyze select count(*) from my.table where node =
'1234567890';

QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=50023.14..50023.14 rows=1 width=0) (actual
time=1790.967..1790.971 rows=1 loops=1)
-> Index Scan using idx_node_date_id on table (cost=0.00..49968.76
rows=21753 width=0) (actual time=80.218..1570.747 rows=34648 loops=1)
Index Cond: ((node)::text = '1234567890'::text)
Total runtime: 1792.084 ms
(4 rows)

mysql>select count(*) from table;
+----------+
| count(*) |
+----------+
| 2018160 |
+----------+
1 row in set (0.01 sec)

mysql>select count(*) from table where node = '1234567890';
+----------+
| count(*) |
+----------+
| 34648 |
+----------+
1 row in set (0.23 sec)

#2 Rodrigo Gonzalez
rjgonzale@gmail.com
In reply to: Poul Møller Hansen (#1)
Re: Slow COUNT

Poul,

Two things... first, why do you think it will have an impact on inserts?

And second... use InnoDB tables in MySQL and you will see the same
behavior as with PostgreSQL; it's because of MVCC.

Best regards

Rodrigo

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend

#3 Bruce Momjian
bruce@momjian.us
In reply to: Poul Møller Hansen (#1)
Re: Slow COUNT

Poul Møller Hansen <freebsd@pbnet.dk> writes:

My question is: which statements can I use to count the rows faster?
32 secs compared to 10 ms!

Try starting a transaction in one window updating one of those records and see
how long it takes your count(*) to complete in the other window while that
update is still pending.

--
greg
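Greg's experiment translates to MyISAM roughly as follows. MyISAM has no transactions, so a held table lock stands in for the pending update; the table name is hypothetical. This sketches why MyISAM's instant count(*) depends on table-level locking:

```sql
-- Session 1 (MySQL, MyISAM table): hold a write lock, as a
-- long-running UPDATE would.
LOCK TABLES mytable WRITE;
-- ... leave the lock held for a while ...

-- Session 2: this blocks until session 1 releases the lock,
-- because MyISAM's cached row count is only valid under
-- table-level locking.
SELECT COUNT(*) FROM mytable;

-- Session 1: release the lock; session 2 now returns.
UNLOCK TABLES;
```

In PostgreSQL the equivalent pending update would not block the count at all; MVCC lets readers proceed, which is exactly why the count has to scan.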

#4 Andrew Schmidt
andrew@lifescale.com
In reply to: Rodrigo Gonzalez (#2)
Re: Slow COUNT

And InnoDB is actually quite a bit worse than PostgreSQL (ignoring the
slight difference in row numbers)

InnoDB:

olp_live> select count(*) from team_players;
+----------+
| count(*) |
+----------+
| 465004 |
+----------+
1 row in set (1.54 sec)

PostgreSQL:

olp_live=# select count(*) from team_players;
+--------+
| count |
+--------+
| 464747 |
+--------+
(1 row)

Time: 934.935 ms


---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings


--
_____________________________________________
G l o b a l D i a g n o s t i c s I n c.
Andrew Schmidt t.416-304-0049 x206
aschmidt@lifescale.com f.866-697-8726
_____________________________________________

#5 Rodrigo Gonzalez
rjgonzale@gmail.com
In reply to: Poul Møller Hansen (#1)
Re: Slow COUNT

I am answering here so everyone can help you.

I cannot understand what you mean... the index is numbered? Are you
talking about what MySQL calls "autoincrement"? Use sequences, please.

Best regards

Rodrigo

Poul Møller Hansen wrote:


If the index is numbered, I assume, or rather guess, that the index
entries must be renumbered on inserts.

I was not trying to favour MySQL over PostgreSQL; I prefer PostgreSQL
to MySQL at any time. I was just wondering why it was much faster in
MySQL, and how I can achieve the same result in PostgreSQL.

Regards, Poul

#6 Poul Møller Hansen
freebsd@pbnet.dk
In reply to: Rodrigo Gonzalez (#5)
Re: Slow COUNT

Rodrigo Gonzalez wrote:

I am answering here so everyone can help you.

I cannot understand what you mean... the index is numbered? Are you
talking about what MySQL calls "autoincrement"? Use sequences, please.

Sorry, I forgot that your email address was in the reply-to field.

I was seeking a solution for how to make a faster count on a lot of rows,
and I was wondering about the difference between PostgreSQL's and MySQL's
(MyISAM's) handling of counts.

I understand the advantages of MVCC compared to row/table locking.
And as far as I have understood, PostgreSQL counts the rows by looping
through all of them, and that's why it takes so long when there are many
rows.

But how is MySQL (MyISAM) doing it, and why doesn't that way work in
the MVCC model?

Thanks,
Poul

#7 Rodrigo Gonzalez
rjgonzale@gmail.com
In reply to: Poul Møller Hansen (#6)
Re: Slow COUNT

No problem at all.

MyISAM stores the row count in its table header (the .MYI file).

You can emulate this with a table that holds, for example, (tablename,
rowcount) pairs.

On the other hand, MySQL with MyISAM does not have row locking, so this
problem does not arise there.

Maybe if you tell us what you want, and why you need to know the exact
row count, someone can help you.


#8 Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Poul Møller Hansen (#6)
Re: Slow COUNT


That's because MyISAM isn't concerned with transactions and
visibility stuff... it simply stores the count and increments it...

In Postgres you have to know whether the row is visible to the
transaction that is counting, whether the row was deleted by a
concurrent transaction... etc, etc... it's not as easy as insert,
increment...

So the way to do it is to create a trigger that records the number of
rows in a table.
Postgres has no such mechanism built in for all tables because it would
be a penalty on both inserts and deletes in all tables, and the fact is
that there are few tables, if any, where you want to know exact counts.

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

#9 Jan Wieck
JanWieck@Yahoo.com
In reply to: Jaime Casanova (#8)
Re: Slow COUNT

On 12/2/2005 2:02 PM, Jaime Casanova wrote:

so the way to do it is create a trigger that record in a table the
number of rows...

That won't really help either, because now you have to update one single
row in that new table, which causes a row exclusive lock. That is worse
than an exclusive lock on the original table, because it has the same
serialization of writers plus the additional work of updating the count
table as well as vacuuming it.

What you need is a separate table where your trigger will insert delta
rows with +1 or -1 for insert and delete. A view will sum() over that
and tell you the true number of rows. Periodically you condense the
table by replacing all current rows with one that represents the sum().

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
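Jan's scheme can be sketched in SQL roughly like this; the table, trigger, and function names are made up for illustration, and the trigger uses the EXECUTE PROCEDURE form current at the time:

```sql
-- One delta row per insert (+1) or delete (-1) on the target table.
CREATE TABLE mytable_count_deltas (delta integer NOT NULL);

CREATE FUNCTION mytable_count_delta() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO mytable_count_deltas VALUES (+1);
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO mytable_count_deltas VALUES (-1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count
    AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_count_delta();

-- Seed the delta table once with the current row count when the
-- trigger is installed, e.g.:
--   INSERT INTO mytable_count_deltas SELECT count(*) FROM mytable;

-- The true row count is the sum over all deltas.
CREATE VIEW mytable_rowcount AS
    SELECT sum(delta) AS rowcount FROM mytable_count_deltas;

-- Periodic condense: replace all delta rows with one summary row
-- so the delta table stays small.
CREATE FUNCTION mytable_count_condense() RETURNS void AS $$
DECLARE
    total bigint;
BEGIN
    -- Briefly block concurrent delta inserts while condensing.
    LOCK TABLE mytable_count_deltas IN EXCLUSIVE MODE;
    SELECT sum(delta) INTO total FROM mytable_count_deltas;
    DELETE FROM mytable_count_deltas;
    INSERT INTO mytable_count_deltas VALUES (coalesce(total, 0));
END;
$$ LANGUAGE plpgsql;
```

Because writers only ever append delta rows, they never contend on a single counter row, which is exactly the problem Jan points out with the one-row approach.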

#10 Guido Neitzer
guido.neitzer@pharmaline.de
In reply to: Jaime Casanova (#8)
Re: Slow COUNT

On 02.12.2005 at 20:02, Jaime Casanova wrote:

so the way to do it is create a trigger that record in a table the
number of rows...

As there are SO MANY questions about the "count(*)" issue, I wonder
whether it makes sense to add a mechanism which does exactly the
method mentioned above to a default PostgreSQL installation (perhaps
switched off by default because of the performance impact)?!

cug

--
PharmaLine, Essen, GERMANY
Software and Database Development

#11 Tino Wildenhain
tino@wildenhain.de
In reply to: Guido Neitzer (#10)
Re: Slow COUNT


I don't think this would match Postgres style: including
a kludge for a rarely useful special case. I may be wrong,
but personally I have never needed an unqualified count(*) on a
table to be very fast.

Doing something to enable aggregates in general to use
an existing index would be a nice idea imho.
(With all the visibility hinting in place.)

Just my 0.02Ct.

++Tino

#12 Bruno Wolff III
bruno@wolff.to
In reply to: Tino Wildenhain (#11)
Re: Slow COUNT

On Sun, Dec 04, 2005 at 14:40:49 +0100,
Tino Wildenhain <tino@wildenhain.de> wrote:

Doing something to enable aggregates in general to use
an existing index would be a nice idea imho.
(With all the visibility hinting in place)

Assuming you are referring to max and min, this has already been done
and is in 8.1.

#13 Tino Wildenhain
tino@wildenhain.de
In reply to: Bruno Wolff III (#12)
Re: Slow COUNT

On Sunday, 04.12.2005 at 09:56 -0600, Bruno Wolff III wrote:

On Sun, Dec 04, 2005 at 14:40:49 +0100,
Tino Wildenhain <tino@wildenhain.de> wrote:

Doing something to enable aggregates in general to use
an existing index would be a nice idea imho.
(With all the visibility hinting in place)

Assuming you are referring to max and min, this has already been done
and is in 8.1.

I also mean sum, avg, ... and last but not least count :-)

Thx for info though.

++Tino

#14 Bruce Momjian
bruce@momjian.us
In reply to: Tino Wildenhain (#13)
Re: Slow COUNT

Tino Wildenhain <tino@wildenhain.de> writes:

On Sunday, 04.12.2005 at 09:56 -0600, Bruno Wolff III wrote:

On Sun, Dec 04, 2005 at 14:40:49 +0100,

Assuming you are referring to max and min, this has already been done
and is in 8.1.

I also mean sum, avg, ... and last not least count :-)

The naive implementation would mean serializing all table updates. In
other words, only one person could update, insert, or delete at a time;
until that user commits, everybody else would be locked out of the
table. You may as well be using something like MySQL if that's
acceptable.

The more sophisticated implementation would require customization to get
right. It requires a second table keeping track of deltas and a periodic job
aggregating those deltas. Which aggregates to put in it, how often to
aggregate them, and when to consult them instead of consulting the main table
would all be things that would require human intervention to get right.

It would be cool if there were a shrinkwrapped package, perhaps in
contrib, to do this with knobs for the user to play with instead of
having to roll your own. But nobody's done a good enough version yet to
consider it.

--
greg

#15 Bruno Wolff III
bruno@wolff.to
In reply to: Tino Wildenhain (#13)
Re: Slow COUNT

On Sun, Dec 04, 2005 at 18:28:53 +0100,
Tino Wildenhain <tino@wildenhain.de> wrote:

On Sunday, 04.12.2005 at 09:56 -0600, Bruno Wolff III wrote:

On Sun, Dec 04, 2005 at 14:40:49 +0100,
Tino Wildenhain <tino@wildenhain.de> wrote:

Doing something to enable aggregates in general to use
an existing index would be a nice idea imho.
(With all the visibility hinting in place)

Assuming you are referring to max and min, this has already been done
and is in 8.1.

I also mean sum, avg, ... and last but not least count :-)

Your comment about indexes threw me there. Indexes are not the problem.
If you use a WHERE clause with enough selectivity and there is an
appropriate index, an index scan will be used. There is a related issue:
when Postgres does an index scan, it also needs to visit the heap to
check visibility. The issue there is that maintaining visibility
information in the index has costs that are currently believed to
outweigh the benefits of not having to check visibility in the heap.
(Though recently there have been some new suggestions in this area.)

What you are looking for seems to be caching values for the case where
the full table is selected. That has problems, as described in the other
response and in more detail in the archives. This isn't something you
want turned on by default, but it would be nice if there were something
packaged to make doing this easier for people who want it for selected
tables.
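As a footnote to the full-table case: when an estimate is acceptable, the planner's cached statistics can be read directly instead of counting. This workaround is not mentioned in the thread itself; note that the value is only as fresh as the last VACUUM or ANALYZE:

```sql
-- Approximate row count from the planner's statistics;
-- updated by VACUUM / ANALYZE, so it can lag reality.
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```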