Row size
Hi,
I have a table:
create table triples(
att0 varchar(1000),
att1 int4,
att2 varchar(20),
att3 varchar(1000)
)
My table has 990 rows.
The (possibly wrong) way in which I compute the size of the table is:
att0: 1000 * 1 Byte + 4 = 1004 Bytes
att2: 20 * 1 Byte + 4 = 24 Bytes
att3: 1000 * 1 Byte + 4 = 1004 Bytes
2032 Bytes + 40 (for oid) = 2072 Bytes
990 * 2072 = 2,051,280 Bytes
BUT after clustering triples according to an index on att1:
select relname, relpages from pg_class ;
relname | relpages
---------------------------------+----------
triples | 142 (8KB/buffer)
142 * 8 * 1024 = 1,163,264 Bytes
Is there any compression or what?
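The same page arithmetic can be expressed directly in SQL (a sketch, assuming the default 8 kB block size; relpages is measured in blocks):

select relname, relpages * 8192 as bytes
from pg_class
where relname = 'triples';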
varchar means 'character varying'. What varies is the length. So a
varchar(1000) with 'foo' in it only takes a few bytes ('foo' plus length
info) instead of 1000 bytes.
If you really want a fixed-length field, nchar or char should do what you
want.
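A quick way to see the difference, as a sketch with a throwaway table (octet_length counts the stored characters, not the full on-disk footprint):

create table width_test (v varchar(1000), c char(1000));
insert into width_test values ('foo', 'foo');
select octet_length(v) as varchar_bytes, -- 3: only 'foo' is stored
       octet_length(c) as char_bytes     -- 1000: char pads with spaces
from width_test;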
On Thu, 10 Mar 2005 20:07:13 +0200 (EET), Ioannis Theoharis
<theohari@ics.forth.gr> wrote:
Hi,
i have a table:
create table triples(
att0 varchar(1000),
att1 int4,
att2 varchar(20),
att3 varchar(1000)
)
My table has 990 rows.
The (possibly wrong) way in which I compute the size of the table is:
att0: 1000 * 1 Byte + 4 = 1004 Bytes
I don't know what the varchar size is in bytes, but I think it is not 1 per
character. IIRC, it varies across different encodings.
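For example, in a multibyte encoding such as UTF8 a single character can occupy several bytes (a sketch, assuming a UTF8 database):

select char_length('päev') as chars,  -- 4 characters
       octet_length('päev') as bytes; -- 5 bytes: 'ä' takes two bytes in UTF8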
regards,
Jaime Casanova
BUT after clustering triples according to an index on att1:
select relname, relpages from pg_class ;
relname | relpages
---------------------------------+----------
triples | 142 (8KB/buffer)
142 * 8 * 1024 = 1,163,264 Bytes
Is there any compression or what?
Yes, there is:
http://www.postgresql.org/docs/8.0/interactive/storage-toast.html
Chris
varchar means 'character varying'. What varies is the length. So a
varchar(1000) with 'foo' in it only takes a few bytes ('foo' plus length
info) instead of 1000 bytes.
Yes, I know that, but I forgot to mention that all the values of
this attribute really are 1000 characters long.
If you really want a fixed-length field, nchar or char should do what you
want.
Is there any compression or what?
Yes, there is:
http://www.postgresql.org/docs/8.0/interactive/storage-toast.html
Thanks. Is there any way to increase the limit above which the TOAST strategy
is selected? By default it is block_size/4, about 2000 bytes.
Dunno, but you can alter the column and go 'set storage' to control the
storage strategy for TOAST on the column...
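For example (a sketch; as far as I know the ~2k threshold itself is compiled in, so the per-column setting is the practical knob, and it only affects tuples stored after the change):

-- EXTENDED (the varchar default) allows compression and out-of-line storage,
-- EXTERNAL allows out-of-line storage but no compression,
-- MAIN allows compression but keeps values in the main table,
-- PLAIN allows neither.
alter table triples alter column att0 set storage external;
alter table triples alter column att3 set storage external;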
Chris
On a fine day (Thursday, 10 March 2005, 20:07 +0200), Ioannis Theoharis
wrote:
Hi,
i have a table:
create table triples(
att0 varchar(1000),
att1 int4,
att2 varchar(20),
att3 varchar(1000)
)
My table has 990 rows.
The (possibly wrong) way in which I compute the size of the table is:
Varchar fields (actually most *char and text fields) use only the actual
length in bytes, plus some overhead for the tuple header and page header, so
unless you fill all the varchar(1000) fields with exactly 1000-byte strings,
you should use less space than that.
Is there any compression or what?
Compression is not used for tuples under 2k, so there _may_ be
compression, depending on your exact data and TOAST settings.
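A quick sanity check with the numbers from the original post points the same way (assuming the default 8 kB block size):

select (142 * 8192) / 990 as bytes_per_row; -- about 1175 bytes

That is well below the ~2072-byte estimate, so the 1000-character values are evidently being compressed.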
--
Hannu Krosing <hannu@tm.ee>