Drop in performance for each INSERT/DELETE combo

Started by Turbo Fredriksson · almost 24 years ago · 7 messages
#1 Turbo Fredriksson
turbo@bayour.com

We're using a function to insert some information into the database.
This information is later (within seconds) retrieved from a program,
that does the actual processing of the information. It is then
deleted from the database when we're done with it.

We see a MAJOR performance loss the longer the system runs. It starts
out at around 28 'data chunks' per second (inserts into a couple of
tables), and drops down to below 10/s...

Doing a 'VACUUM ANALYZE' every 20 minutes improves the performance,
with the expected drop while the VACUUM runs, but in general the
performance is steady...

Investigation has shown that it's the actual DELETE that's slow.
Any idea how to find WHERE (and hopefully WHY :) this is so?
--
attack Serbian radar PLO ammonium toluene Legion of Doom congress DES
pits Ft. Bragg KGB Honduras kibo World Trade Center
[See http://www.aclu.org/echelonwatch/index.html for more about this]

#2 Jean-Paul ARGUDO
jean-paul.argudo@idealx.com
In reply to: Turbo Fredriksson (#1)
Re: Drop in performance for each INSERT/DELETE combo

Could you send the schema of the table, the definition of the indexes on it,
and the SQL query?

It is hard to help you without this info :-/

Cheers,

--
Jean-Paul ARGUDO

#3 Dalibor Andzakovic
dalibor.andzakovic@swerve.co.nz
In reply to: Jean-Paul ARGUDO (#2)
Re: Drop in performance for each INSERT/DELETE combo

Did you try this using temporary tables?
I've noticed better performance in one of our apps that used to do
just that (insert some records into and delete some records from a
scratch table).

We recoded it to basically create a temp table, insert the records, do
whatever with them, and then drop the temp table.

This is easily achieved with CREATE TEMP TABLE
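
A minimal sketch of that pattern (the table and column names here are
made up for illustration; PostgreSQL of this vintage does not support
multi-row VALUES lists, hence one INSERT per row):

```sql
-- Session-local scratch table; it vanishes when the session ends,
-- so no dead rows accumulate in a permanent table.
CREATE TEMP TABLE work_queue (
    id      serial PRIMARY KEY,
    payload text
);

INSERT INTO work_queue (payload) VALUES ('chunk 1');
INSERT INTO work_queue (payload) VALUES ('chunk 2');

-- ... process the rows ...

DROP TABLE work_queue;  -- discards the data wholesale, nothing to VACUUM
```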

Hope this helps

dali

#4 Turbo Fredriksson
turbo@bayour.com
In reply to: Dalibor Andzakovic (#3)
Re: Drop in performance for each INSERT/DELETE combo

[let's keep this thread on the list please]

"Nikolay" == Nikolay Mihaylov <pg@nmmm.nu> writes:

Nikolay> Why do you not use UPDATE instead of DELETE? (e.g. a flag
Nikolay> for whether the operation is finished)

That was my first response when the test crew said that 'they found
that the problem seemed to be in the DELETE, not the INSERT' (their
exact words :).

My idea was that this would decrease the fragmentation of the database...

The difference was minor (yet again, according to the test crew)...
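
A sketch of the flag-based variant (table and column names are
illustrative):

```sql
-- Instead of deleting processed rows, mark them...
UPDATE work_queue SET done = true WHERE id = 42;

-- ...and have readers skip them:
SELECT * FROM work_queue WHERE NOT done;
```

Note that under PostgreSQL's MVCC an UPDATE writes a new row version
and marks the old one dead, so it leaves about as much garbage behind
as a DELETE would, which may be why the difference was minor.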

Nikolay> We had similar problems, but a VACUUM once per 2-3 mo,
Nikolay> helps us (the database is not so big ~ 20 - 30MB).

Is this database constantly changing? Or is it more or less static?

The database won't be bigger than 10 MB at any time (and that's an
exaggeration). The real issue seems to be the constant churn of
the content...

#5 Turbo Fredriksson
turbo@bayour.com
In reply to: Jean-Paul ARGUDO (#2)
Re: Drop in performance for each INSERT/DELETE combo

"Jean-Paul" == Jean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:

Jean-Paul> Could you send the schema of the table , the definition
Jean-Paul> of the index on it and the SQL query?

I can't do that at the moment; it's a closed-source (i.e. commercial)
product, and I'll need official approval first :)

Jean-Paul> It is hard to help you without this info :-/

I know; that's why I formulated the mail as a question on how to
proceed, not how _YOU_ (i.e. the mailing list) could solve my problem :)

Thanx anyway.

#6 Lincoln Yeoh
lyeoh@pop.jaring.my
In reply to: Turbo Fredriksson (#1)
Re: Drop in performance for each INSERT/DELETE combo

IANAD (I am not a developer), but deleted rows are not removed until
vacuuming occurs; they are just marked as deleted.

Are you deleting specific rows? If so, you have to keep vacuuming
to keep things going at about 30/sec. This became more viable with 7.2.
PostgreSQL often has to scan past the relevant dead rows in order to find
the valid ones.

If you want to delete everything, truncating might be faster. Unfortunately
truncating can't work in a transaction block.
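
The two variants, sketched with an illustrative table name:

```sql
-- DELETE marks each row dead; VACUUM must later reclaim the space:
DELETE FROM work_queue;

-- TRUNCATE discards the table contents wholesale and is much faster,
-- but (in PostgreSQL of this era) cannot run inside a transaction block:
TRUNCATE TABLE work_queue;
```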

Link.

At 09:19 AM 06-03-2002 +0100, Turbo Fredriksson wrote:


#7 Hannu Krosing
hannu@itmeedia.ee
In reply to: Turbo Fredriksson (#1)
Re: Drop in performance for each INSERT/DELETE combo

----- Original Message -----
From: "Turbo Fredriksson" <turbo@bayour.com>
To: <pgsql-hackers@postgresql.org>
Sent: Wednesday, March 06, 2002 10:19 AM
Subject: [HACKERS] Drop in performance for each INSERT/DELETE combo

> We see a MAJOR performance loss the longer the system runs. It starts
> out at around 28 'data chunks' per second (inserts into a couple of
> tables), and drops down to below 10/s...
>
> Doing a 'VACUUM ANALYZE' every 20 minutes improves the performance,
> with the expected drop while the VACUUM runs, but in general the
> performance is steady...

What version of PG are you running?

On PG 7.2, VACUUM itself does not incur a very big performance hit. And you
don't need to run VACUUM ANALYZE that often; plain VACUUM will do
nicely.

You can also restrict vacuuming to that one table by doing VACUUM TABLENAME.

If the total size of your table is small, I'd recommend running VACUUM
TABLENAME even more often, up to every few seconds.
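
For example (the table name is illustrative):

```sql
-- Plain VACUUM of just the hot table; on 7.2 this is cheap enough to
-- run very frequently, e.g. from cron or from the application itself:
VACUUM work_queue;

-- VACUUM ANALYZE additionally refreshes planner statistics and is
-- needed far less often:
VACUUM ANALYZE work_queue;
```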

> Investigation has shown that it's the actual DELETE that's slow.

Do you have any foreign keys on that table?

Or even an ON DELETE trigger?

> Any idea how to find WHERE (and hopefully WHY :) this is so?

Nope :)

-------------
Hannu