Re: Join query on 1M row table slow
On Tue, 10 Feb 2004, CSN wrote:
I have a pretty simple select query that joins a table
(p) with 125K rows with another table (pc) with almost
one million rows:

select p.*
from product_categories pc
inner join products p
on pc.product_id = p.id
where pc.category_id = $category_id
order by p.title
limit 25
offset $offset

The query usually takes about five seconds to execute
(all other PG queries perform fast enough). I have
indexes on everything needed, and EXPLAIN shows
they're being used. Is there anything else I can do to
improve performance - such as tweaking some settings
in the config?

Redhat 9, PG 7.4.1.
Could you give more info on the hardware?
You did not mention how often you do your vacuum analyze or how often data
is updated/deleted. The more info you provide, the more we can try to
help.
How about your buffer and other settings?
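
For reference, the statistics the planner relies on are refreshed with a plain

vacuum analyze;

or per table, using the two tables from the query above:

vacuum analyze products;
vacuum analyze product_categories;

run from psql; how often that needs to happen depends on how often the data changes.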
--- lists@natserv.com wrote:
On Tue, 10 Feb 2004, CSN wrote:
I have a pretty simple select query that joins a table
(p) with 125K rows with another table (pc) with almost
one million rows:

select p.*
from product_categories pc
inner join products p
on pc.product_id = p.id
where pc.category_id = $category_id
order by p.title
limit 25
offset $offset

The query usually takes about five seconds to execute
(all other PG queries perform fast enough). I have
indexes on everything needed, and EXPLAIN shows
they're being used. Is there anything else I can do to
improve performance - such as tweaking some settings
in the config?

Redhat 9, PG 7.4.1.
Could you give more info on the hardware?
Intel(R) Celeron(R) CPU 1.70GHz
1 GB RAM
You did not mention how often you do your vacuum analyze
or how often data is updated/deleted.
I've done both vacuum and vacuum analyze on the
database. Vacuum full takes forever (I haven't let it
run its full course yet). The data is completely
static - no updates/deletes, just selects.
How about your buffer and other settings?
shared_buffers = 1000
That's about the only setting I changed in
postgresql.conf
TIA,
CSN
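
For what it's worth, shared_buffers is counted in 8 KB buffers, so 1000 is
only about 8 MB of cache. With 1 GB of RAM and a read-only workload, the
relevant postgresql.conf lines might look something like the sketch below
(illustrative numbers only, not tuned values for this box):

shared_buffers = 8192          # 8 KB buffers each, roughly 64 MB
sort_mem = 8192                # KB per sort, used by the ORDER BY here
effective_cache_size = 65536   # 8 KB pages the planner assumes the OS caches (~512 MB)

shared_buffers only takes effect after a restart.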
Here's the EXPLAIN:
Limit  (cost=9595.99..9596.05 rows=25 width=290)
  ->  Sort  (cost=9595.99..9598.45 rows=986 width=290)
        Sort Key: p.title
        ->  Nested Loop  (cost=0.00..9546.96 rows=986 width=290)
              ->  Index Scan using idx_pc_category_id on product_categories pc  (cost=0.00..3607.28 rows=986 width=4)
                    Index Cond: (category_id = 1016)
              ->  Index Scan using pkey_products_id on products p  (cost=0.00..6.01 rows=1 width=290)
                    Index Cond: ("outer".product_id = p.id)
(8 rows)
Is the "cost" high?
CSN
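
The cost figures are in the planner's arbitrary units, so on their own they
don't say much; the usual check is EXPLAIN ANALYZE, which actually runs the
query and prints real row counts and times next to the estimates. A sketch,
with 1016 taken from the plan above and 0 standing in for $offset:

explain analyze
select p.*
from product_categories pc
inner join products p on pc.product_id = p.id
where pc.category_id = 1016
order by p.title
limit 25 offset 0;

Note that EXPLAIN ANALYZE executes the statement for real, so the timing it
reports includes the full five-second run if the query really is that slow.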