Time to insert
Hi everyone,
Could someone explain why insertion time becomes so long as the table grows?
For the same kind of insertion:
When the table is empty : 0.03 s/row
When the table has 3663 rows : 2.07 s/row
Is that normal?
Sincerely, Renaud THONNART
Have you performed a VACUUM [ANALYZE]? If you are inserting data in larger
batches, consider using COPY or dropping indexes. What version of pgsql are
you using? There used to be (if I remember correctly) a bug in the 7.1 betas
that made inserts slow down.
- Einar Karttunen
It is a little difficult to perform VACUUM ANALYZE because I'm writing an
application in C++ using libpq++.
I use version 7.0.3.
I tried COPY too, but the result was about the same.
Renaud.
You can run it directly through libpq++ (PgConnection::ExecCommandOk):
data.ExecCommandOk("VACUUM ANALYZE");
If you can upgrade to 7.1, it will probably help with the performance
problem. COPY is good only if you perform very many inserts at the
same time. I use a cron job that performs the vacuuming, so I don't
have to worry about it in client apps. If you have multiple inserts,
but not so many that you should use COPY, try wrapping them in a
single transaction.
- Einar Karttunen
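The single-transaction suggestion above looks like this in plain SQL (an illustrative sketch only; the table name and values are made-up placeholders):

```sql
BEGIN;
INSERT INTO mytable (id, name) VALUES (1, 'a');
INSERT INTO mytable (id, name) VALUES (2, 'b');
-- ... the rest of the batch ...
COMMIT;
```

Without the BEGIN/COMMIT pair, each INSERT commits on its own, forcing the backend to flush to disk once per row instead of once per batch.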
Thank you Einar and Colin for your help.
I have a few more questions:
- How do I interpret a VACUUM ANALYZE?
- I'm going to install version 7.1: what is the difference between 7.1 and
7.1rc4?
Renaud THONNART
7.1rc4 is the fourth release candidate; 7.1 is the final release, so you
should use 7.1.
You just send the "VACUUM ANALYZE" command to the backend. VACUUM ANALYZE
reclaims rows that are marked as unused and rebuilds the statistics
about the table. Running it regularly (once every 2-48 hours, or after
large modifications) keeps the database running smoothly. You use VACUUM
ANALYZE like any other command, e.g. (using libpq++):
PgDatabase data("dbname=mydb");
data.ExecTuplesOk("MISC SQL QUERY");
// when you need it
data.ExecCommandOk("VACUUM ANALYZE");
I think it is better to do the vacuuming server-side if the
application doesn't change a large amount of data.
- Einar Karttunen
OK, I will try to do a VACUUM ANALYZE in my program.
I suppose it is better to do it too often than not often enough...
Many thanks for your help Einar.
Renaud
The fastest way to add many records is to drop the
indexes, then use COPY, then rebuild the indexes.
Yes, I know it's not pretty.
You can in fact use VACUUM from C/C++: use fork/exec
to start a shell that runs vacuum. It takes about six or
eight lines of code.