Minor Optimization?

From: Mario Weilguni <mweilguni@sime.com>

I had to fix an old, badly written web application that occasionally had performance
problems. I found out that some of its links were broken:
http://...../?id=123456778995934853

In fact the link should have been:
http://...../?id=1234567&78995934853

where the part &78995934853 was random and meant to force a reload. With the ampersand
lost, the two numbers were concatenated, and the resulting value still passes the only
check the application performs on the id, a regular expression /^\d+$/ (bad bad bad).

Unfortunately, with PostgreSQL 7.4 this leads to a sequential scan and thus to a
performance problem (~300,000 records).

This is the normal, indexed case (both plans are from a test system with far fewer
records):
EXPLAIN ANALYZE SELECT * from beitraege where id=10000;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Index Scan using beitraege_pkey on beitraege (cost=0.00..5.86 rows=2 width=411) (actual time=0.045..0.045 rows=0 loops=1)
   Index Cond: (id = 10000)

And this is with one of the overlong numbers:
cms_dev=# EXPLAIN ANALYZE SELECT * from beitraege where id=100009999999999999999;
QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Seq Scan on beitraege (cost=0.00..329.26 rows=30 width=411) (actual time=15.689..15.689 rows=0 loops=1)
   Filter: ((id)::numeric = 100009999999999999999::numeric)
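
The Filter line shows why the index is not used: the literal does not fit into an
integer, so both sides get promoted to numeric, and (id)::numeric is not what the
primary key index covers. The boundary itself is easy to check (just an illustration,
the exact error wording may differ between versions):

-- 2147483647 is the largest value an int4 column such as id can hold
SELECT 2147483647::int4;             -- works, the int4 maximum
SELECT 100009999999999999999::int4;  -- fails with an out-of-range error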

In that case it should be possible to rewrite the condition
"id=100009999999999999999" to constant false, because an "integer" column can never
hold such a large value.
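
Spelled out, the idea is simply this (a sketch of the intended rewrite, not a claim
about how or where the planner would do it):

-- 100009999999999999999 lies outside the int4 range (-2147483648 .. 2147483647),
-- so the equality can never be true and could be folded to a constant false:
SELECT * from beitraege where id=100009999999999999999;
-- would effectively become
SELECT * from beitraege where false;
-- which returns no rows without ever comparing id as numeric.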

Of course the bug must be fixed at the application level (a possible stop-gap on the
query side is sketched below), but this might still be a cheap optimization. I have no
clue where to start looking in the source; maybe someone can tell me where to begin.
Thanks!
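
As a stop-gap until the application check is tightened, an explicit cast in the query
text would at least turn the slow path into an immediate error (illustrative only, not
how the application currently builds its queries; exact error wording may differ):

-- the bad id now fails right away instead of degrading into a numeric
-- comparison and a sequential scan:
SELECT * from beitraege where id='100009999999999999999'::int4;  -- out-of-range error
SELECT * from beitraege where id='1234567'::int4;                -- ordinary index scan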

Regards,
Mario Weilguni