Cube Index Size

Started by Nick Raj · over 14 years ago · 10 messages
#1 Nick Raj
nickrajjain@gmail.com

Hi,

I am using the cube code provided in the postgres contrib folder. It uses the NDBOX structure.
On creating an index, its size increases at a high rate.

The behaviour after inserting some tuples and creating an index is shown below.

1. When there is only one tuple
select pg_size_pretty(pg_relation_size('cubtest')); -- Table size without index
pg_size_pretty
----------------
8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); -- Table size with index
pg_size_pretty
----------------
16 kB
(1 row)

i.e. index size is nearly 8 kB

2. When tuples are 20,000

Table Size without index - 1.6 MB
Table Size with index - 11 MB
i.e. index size is nearly 9.4 MB

3. When tuples are 5 lakh

Table Size without index - 40 MB
Table Size with index - 2117 MB
i.e. index size is nearly 2077 MB ~ 2 GB.
It takes nearly 20-25 min to create the index for 5 lakh tuples.
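
The index size above is just the total relation size minus the table size; it
can also be checked directly on the index itself, e.g. (assuming the index is
named t):

select pg_size_pretty(pg_relation_size('t')); -- index size only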

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Thanks
Nick

#2 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Nick Raj (#1)
Re: Cube Index Size

On 30.05.2011 21:51, Nick Raj wrote:

Hi,

I am using the cube code provided in the postgres contrib folder. It uses the NDBOX structure.
On creating an index, its size increases at a high rate.

The behaviour after inserting some tuples and creating an index is shown below.

1. When there is only one tuple
select pg_size_pretty(pg_relation_size('cubtest')); -- Table size without index
pg_size_pretty
----------------
8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); -- Table size with index
pg_size_pretty
----------------
16 kB
(1 row)

i.e. index size is nearly 8 kB

2. When tuples are 20,000

Table Size without index - 1.6 MB
Table Size with index - 11 MB
i.e. index size is nearly 9.4 MB

3. When tuples are 5 lakh

Table Size without index - 40 MB
Table Size with index - 2117 MB
i.e. index size is nearly 2077 MB ~ 2 GB.
It takes nearly 20-25 min to create the index for 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Which version of PostgreSQL are you using? I wonder if this could be due
to the bug in cube's picksplit algorithm that was fixed a while ago:

http://archives.postgresql.org/message-id/AANLkTimC8W6guHpWJeWdjQA6WGoVH-7qG9Ar4pem2N2V@mail.gmail.com

If not, please post a self-contained test case to create and populate
the table, so that others can easily try to reproduce it.
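
For example, something along these lines would do (just a sketch, with
contrib/cube installed; the table and index names are placeholders, and the
random data only stands in for your real data):

CREATE TABLE cubtest (c cube);
INSERT INTO cubtest SELECT cube(random(), random()) FROM generate_series(1, 20000);
CREATE INDEX t ON cubtest USING gist (c);
SELECT pg_size_pretty(pg_relation_size('t'));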

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#3 Nicolas Barbier
nicolas.barbier@gmail.com
In reply to: Nick Raj (#1)
Re: Cube Index Size

2011/5/30, Nick Raj <nickrajjain@gmail.com>:

3. When tuples are 5 lakh

For the benefit of the others: "5 lakh" seems to mean 500,000.

<URL:http://en.wikipedia.org/wiki/Lakh>

Nicolas

--
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?

#4 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Nick Raj (#1)
Re: Cube Index Size

On 01.06.2011 10:48, Nick Raj wrote:

On Tue, May 31, 2011 at 12:46 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:

If not, please post a self-contained test case to create and populate the
table, so that others can easily try to reproduce it.

I have attached a .sql file containing 20,000 tuples.
Table creation - create table cubtest(c cube);
Index creation - create index t on cubtest using gist(c);

Ok, I can reproduce the issue with that. The index is only 4 MB in size
when I populate it with random data (vs. 15 MB with your data). The
command I used is:

INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1,20000);
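
The sizes can then be compared directly (assuming the index name t from your
creation command):

SELECT pg_size_pretty(pg_relation_size('t'));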

My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#5 Alexander Korotkov
aekorotkov@gmail.com
In reply to: Heikki Linnakangas (#4)
Re: Cube Index Size

On Wed, Jun 1, 2011 at 3:37 PM, Heikki Linnakangas <
heikki.linnakangas@enterprisedb.com> wrote:

My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.

The current cube picksplit function has no storage utilization guarantee,
while Guttman's original picksplit does (once one group reaches some
threshold size, all remaining entries go to the other group). Also, the
current picksplit is a mix of Guttman's linear and quadratic algorithms: it
picks seeds quadratically, but distributes entries linearly.
I see the following ways of solving the picksplit problem for cube:
1) Add storage utilization guarantees to the current picksplit. It may
increase overlaps, but should decrease index size.
2) Add storage utilization guarantees to the current picksplit and replace
the entry distribution algorithm with the quadratic one. Picksplit will take
more time, but it should give a more stable and predictable result.
3) I have experimented with my own picksplit algorithm, which showed pretty
good results in the tests I've run. But the current implementation is dirty
and requires more testing.

------
With best regards,
Alexander Korotkov.

#6 Teodor Sigaev
teodor@sigaev.ru
In reply to: Heikki Linnakangas (#4)
Re: Cube Index Size

Ok, I can reproduce the issue with that. The index is only 4 MB in size
when I populate it with random data (vs. 15 MB with your data). The
command I used is:

INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1,20000);

My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.

One idea is to add sorting of the Datums to be split by their insertion cost;
this is implemented in the intarray/tsearch GiST indexes.

I'm not sure that it will help here, but our research on Guttman's picksplit
algorithm showed significant improvements.
--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/

#7 Alexander Korotkov
aekorotkov@gmail.com
In reply to: Teodor Sigaev (#6)
Re: Cube Index Size

2011/6/1 Teodor Sigaev <teodor@sigaev.ru>

One idea is to add sorting of the Datums to be split by their insertion cost;
this is implemented in the intarray/tsearch GiST indexes.

Yes, it's a good compromise between the linear and quadratic entry
distribution algorithms. In the quadratic algorithm, the entry with the
maximal difference in insertion cost is inserted each time. The quadratic
algorithm runs more slowly than the sorting one, but in my tests it showed
slightly better results.

------
With best regards,
Alexander Korotkov.

#8 Nick Raj
nickrajjain@gmail.com
In reply to: Alexander Korotkov (#7)
Re: Cube Index Size

2011/6/1 Alexander Korotkov <aekorotkov@gmail.com>

2011/6/1 Teodor Sigaev <teodor@sigaev.ru>

One idea is to add sorting of the Datums to be split by their insertion cost;
this is implemented in the intarray/tsearch GiST indexes.

Yes, it's a good compromise between the linear and quadratic entry
distribution algorithms. In the quadratic algorithm, the entry with the
maximal difference in insertion cost is inserted each time. The quadratic
algorithm runs more slowly than the sorting one, but in my tests it showed
slightly better results.

Can we figure out some information about the index, i.e. what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?

Regards,
Nick

#9 Teodor Sigaev
teodor@sigaev.ru
In reply to: Nick Raj (#8)
Re: Cube Index Size

Can we figure out some information about the index, i.e. what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?

http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/
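
For example, after installing gevel (a sketch, assuming your index is named t):

select gist_stat('t'); -- overall index statistics (levels, tuples, pages)
select gist_tree('t'); -- page-by-page view of the tree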
--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/

#10 Nick Raj
nickrajjain@gmail.com
In reply to: Teodor Sigaev (#9)
Re: Cube Index Size

2011/6/2 Teodor Sigaev <teodor@sigaev.ru>

Can we figure out some information about the index, i.e. what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?

http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/

To improve space utilization: when a node is split, we have to assign the
entries to two groups. Once one group reaches some threshold (m), the
remaining entries are inserted into the other group.

Can you suggest some way to choose 'm' (because cube is stored as NDBOX,
which has variable length), or provide some guidance with code?

Thanks
