Splitting one big table into smaller ones
I've got a big table of measures like this:
CREATE TABLE measures (
    stationCode varchar(8),
    "when" datetime,  -- "when" is a reserved word, so it must be quoted
    value float,
    quality char,
    PRIMARY KEY (stationCode, "when")
);
Each station (identified by stationCode) could have up to 10**6 measures, so I
want to split it into smaller tables to improve performance:
CREATE TABLE measures<1st stationCode> (
    "when" datetime,
    value float,
    quality char,
    PRIMARY KEY ("when")
);
CREATE TABLE measures<2nd stationCode> (
... and so on.
Is there a way to handle that using SQL and PL/pgSQL?
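For reference, a sketch of the kind of dynamic DDL being asked about, assuming a PL/pgSQL version whose EXECUTE statement accepts utility commands (the function name, quote_ident(), and dollar-quoting are later PostgreSQL idioms, not from the original post; "datetime" follows the column types above, where modern PostgreSQL would use timestamp):

```sql
-- Sketch: create one measures_<code> table for a given station.
-- Assumes PL/pgSQL's EXECUTE supports DDL (true in modern PostgreSQL).
CREATE OR REPLACE FUNCTION create_station_table(p_code varchar) RETURNS void AS $$
BEGIN
    EXECUTE 'CREATE TABLE ' || quote_ident('measures_' || p_code)
         || ' ("when" datetime,'
         || '  value float,'
         || '  quality char,'
         || '  PRIMARY KEY ("when"))';
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT create_station_table('ABC12345');
```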
Thanks,
Guillaume Perréal
Stagiaire MIAG
Cemagref - URH
France
Guillaume Perréal wrote:
> Is there a way to handle that using SQL and PL/pgSQL?
Pretty straightforward to do that with perl/DBI, or even bash & friends. But I recall that PL/pgSQL has major problems with CREATE TABLE and other DDL statements.
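The bash route can be sketched as a small generator script that emits one CREATE TABLE statement per station code and pipes the result to psql (the emit_ddl name and the station codes are illustrative; in practice the codes would come from a SELECT DISTINCT over the measures table):

```shell
# Sketch: emit per-station DDL outside the database, then feed it to psql.
# Real codes would come from something like:
#   psql -Atc 'SELECT DISTINCT stationCode FROM measures' mydb
emit_ddl() {
    for code in "$@"; do
        printf 'CREATE TABLE measures_%s (\n' "$code"
        printf '    "when" datetime,\n'
        printf '    value float,\n'
        printf '    quality char,\n'
        printf '    PRIMARY KEY ("when")\n);\n'
    done
}

emit_ddl ABC12345 DEF67890
# e.g.: emit_ddl ABC12345 DEF67890 | psql mydb
```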
BTW, I suspect your easiest significant performance improvement would come from using an integer primary key for stationCode rather than a varchar key, e.g.,
CREATE TABLE stations (
    id serial,          -- or just integer
    code varchar(8),
    ...
    PRIMARY KEY (id)
);
CREATE TABLE measures (
    stationId integer,
    "when" datetime,
    value float,
    quality char,
    PRIMARY KEY (stationId, "when")
);
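With that layout, queries join back to stations whenever the original code is needed; a sketch using the tables above (the station code 'ABC12345' is just an example):

```sql
-- Fetch one station's measures by its varchar code, via the integer join key.
SELECT m."when", m.value, m.quality
  FROM measures m
  JOIN stations s ON s.id = m.stationId
 WHERE s.code = 'ABC12345';
```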
If you try this, I'd be curious to hear the results. Your approach is surely faster, though by how much I don't know. It just looks like a bit of a headache to manage, that's all.
Regards,
Ed Loehr