Can I add an index to make a query faster that involves a join on unnest?
The query below basically returns the result while maintaining the order of the sizes in the list:
explain analyze
select
  "price_levels"."name",
  "price_levels"."size"
from
  "price_levels"
  join unnest(array['M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL', 'S'])
    with ordinality t(size, ord)
    using (size)
order by
  t.size
I have a B-tree index on the size column.
The EXPLAIN output is:
Merge Join (cost=4.61..5165.38 rows=60000 width=46) (actual time=0.157..57.872 rows=60000 loops=1)
Merge Cond: ((price_levels.size)::text = t.size)
-> Index Scan using price_levels_size_idx on price_levels (cost=0.29..4111.05 rows=60000 width=14) (actual time=0.044..25.941 rows=60000 loops=1)
-> Sort (cost=4.32..4.57 rows=100 width=32) (actual time=0.108..3.946 rows=53289 loops=1)
Sort Key: t.size
Sort Method: quicksort Memory: 25kB
-> Function Scan on unnest t (cost=0.00..1.00 rows=100 width=32) (actual time=0.030..0.033 rows=9 loops=1)
Planning time: 0.667 ms
Execution time: 62.846 ms
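Incidentally, if the goal is the array's order rather than alphabetical order, ordering by the ordinality column expresses that directly (a sketch of the same query, assuming the same table):

```sql
-- Sketch: order by the ordinality column so rows follow the array's
-- order (M, L, XL, ...) instead of the alphabetical order of the labels.
select
  "price_levels"."name",
  "price_levels"."size"
from
  "price_levels"
  join unnest(array['M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL', 'S'])
    with ordinality t(size, ord)
    using (size)
order by
  t.ord;
```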
Thanks,
Arup Rakshit
ar@zeit.io
On Sep 13, 2018, at 12:17 PM, Arup Rakshit <ar@zeit.io> wrote:
There are not enough distinct values of size for it to be a worthwhile key.
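A quick way to check that cardinality (a sketch, assuming the same price_levels table):

```sql
-- Sketch: count rows per size. With only ~9 distinct values across
-- 60000 rows, each key matches thousands of rows, so a B-tree lookup
-- on size is barely more selective than a sequential scan.
select "size", count(*)
from "price_levels"
group by "size"
order by count(*) desc;
```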
Hello Rob,
So how do I improve this query's speed?
Thanks,
Arup Rakshit
ar@zeit.io
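One possible rewrite (a sketch, assuming PostgreSQL 9.5 or later for array_position, and untested against this schema) avoids the join entirely: filter with = any(...) and sort by each value's position in the array, so the planner never has to sort the unnested list:

```sql
-- Sketch: no join against unnest. Filter on the size list, then sort
-- by the value's position in the same array literal, which reproduces
-- the list order (M, L, XL, ...).
select
  "name",
  "size"
from "price_levels"
where "size" = any (array['M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL', 'S'])
order by
  array_position(array['M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL', 'S'],
                 "size");
```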
On 14-Sep-2018, at 12:27 AM, Rob Sargent <robjsargent@gmail.com> wrote:
There are not enough distinct values of size for it to be a worthwhile key.