Postgresql "FIFO" Tables, How-To ?
Hello,
We are currently working on a project where we need to limit the number of
records in a table to a certain number. As soon as the number has been
reached, for each new row the oldest row should be deleted (kinda FIFO),
thus keeping the total number of rows at a predefined number.
The actual limits would be anywhere from 250k to 10mil rows per table.
It would be great if this could be achieved by the RDBMS engine itself.
Does Postgres support this kind of table? And if not, what would be the most
elegant solution to achieve our goal, in your opinion?
Regards
Kirill
--
Kirill Ponazdyr
Technical Director
Codeangels Solutions GmbH
Tel: +41 (0)43 844 90 10
Fax: +41 (0)43 844 90 12
Web: www.codeangels.com
An after insert trigger springs to mind.
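Such an after-insert trigger might look like this (a minimal sketch; the table, columns, and the cap of 250000 are illustrative, not from the thread):

```sql
-- Illustrative log table; serial gives a monotonically increasing id.
CREATE TABLE log (
    id  serial PRIMARY KEY,
    msg text NOT NULL
);

-- After each insert, drop any rows older than the newest 250000.
CREATE FUNCTION log_trim() RETURNS trigger AS '
BEGIN
    DELETE FROM log WHERE id <= NEW.id - 250000;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER log_trim_trg
    AFTER INSERT ON log
    FOR EACH ROW EXECUTE PROCEDURE log_trim();
```

This assumes ids are never reused; under concurrency the count is only approximate, a point raised elsewhere in the thread.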
--
Nigel Andrews
On Wed, 16 Jul 2003, Shridhar Daithankar wrote:
Frankly, I would like to know first why you want to do this. Unless there are
good practical reasons, I would not recommend this approach at all. Can
you tell us why you want to do this?
If he only needs an approximate number of rows (i.e. having max +/- 100
rows is ok...) then maybe just use a sequence and delete any rows whose id
is below current_max_seq - max_records?
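That sequence-based trim could be as simple as the following sketch (the table, sequence, and column names are made up for illustration):

```sql
-- Delete rows that have fallen out of the most recent 250000-row window.
-- last_value reads the sequence's current position without advancing it.
DELETE FROM bigtable
 WHERE id < (SELECT last_value FROM bigtable_id_seq) - 250000;
```

Run periodically (e.g. from cron), this keeps the table near the cap without any per-insert work.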
----- Original Message -----
From: "Nigel J. Andrews" <nandrews@investsystems.co.uk>
To: "Kirill Ponazdyr" <softlist@codeangels.com>
Cc: "pg_general" <pgsql-general@postgresql.org>
Sent: Wednesday, July 16, 2003 7:06 PM
Subject: Re: [GENERAL] Postgresql "FIFO" Tables, How-To ?
An after insert trigger springs to mind.
I see that the tables are quite big, and I think a procedure launched by cron
at certain times to truncate the tables is a better solution.
If the server runs well with the trigger, then create the trigger;
otherwise...
Use a PL/pgSQL function. Just do a count(*) to find out how many rows there are, calculate how many need to be deleted, and put a timestamp field in the table. Now, selecting the correct ones to delete is probably done by:
DELETE FROM table_in_question
WHERE some_primary_key_id IN
(SELECT some_primary_key_id
FROM table_in_question
ORDER BY the_time_stamp_field
LIMIT the_qty_to_be_deleted);
More than likely, in a concurrent environment, you will oscillate between:
(the maximum number you want)
and
(the maximum number you want - the maximum current connections).
Unless you do some kind of table locking.
It is practically impossible due to concurrency limitations, unless you
explicitly serialize everything, which might be a bad idea.
I think it is doable. Create a normal table 't' and write a before insert
trigger. Create another table 'control' which contains the limit value and the
oid of the last row deleted. In the before insert trigger, do a SELECT FOR
UPDATE on table 'control' so that no other transaction can update it. Proceed
to the insertion/deletion in table 't'.
It would be a bad idea to update the control table itself. You need to release
the lock at transaction commit (I hope it gets released with the commit). If
you update the control table, you would generate a dead row for every insertion
into the main table, which could be a major performance penalty at the sizes
you are talking about.
Frankly, I would like to know first why you want to do this. Unless there are
good practical reasons, I would not recommend this approach at all. Can
you tell us why you want to do this?
Bye
Shridhar
--
Lieberman's Law: Everybody lies, but it doesn't matter since nobody listens.
An after insert trigger springs to mind.
Ouch, this means that for every insert we would have to trigger a
procedure which will:
COUNT
IF > Limit
DELETE OLDEST
This would be pretty damn resource-intensive on a table with millions
of records, would it not?
Regards
--
Kirill Ponazdyr
Technical Director
Codeangels Solutions GmbH
Tel: +41 (0)43 844 90 10
Fax: +41 (0)43 844 90 12
Web: www.codeangels.com
Or, you could make a view, or have all of your queries include the phrase
'ORDER BY the_time_stamp_field
LIMIT the_qty_to_be_deleted'
in them. That'd be the standard way.
---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Two possibilities I might try:
1) a view:
create view foo as select * from x order by y limit WHATEVER;
Then, periodically do a physical cleaning of the table.
2) Fake records
Have an extra boolean field called "fake", and a date_inserted field.
Initialize the table with however many fake rows you need, and simply remove
the oldest record for every insert. Then create a view which filters out the
fakes.
Jon
On 16 Jul 2003 at 10:13, scott.marlowe wrote:
If he only needs an approximate number of rows (i.e. having max +/- 100
rows is ok...) then maybe just use a sequence and delete any rows whose id
is below current_max_seq - max_records?
Surely there is more than one way to do it, depending upon how strict the OP
wants to be. This seems to be a much better solution, along with a periodic
vacuum analyze if required.
Bye
Shridhar
--
"On a normal ascii line, the only safe condition to detect is a 'BREAK'-
everything else having been assigned functions by Gnu EMACS."(By Tarl
Neustaedter)
On Wed, Jul 16, 2003 at 18:18:09 +0200,
Kirill Ponazdyr <softlist@codeangels.com> wrote:
Ouch, this means that for every insert we would have to trigger a
procedure which will:
COUNT
IF > Limit
DELETE OLDEST
This would be pretty damn resource-intensive on a table with millions
of records, would it not?
If you preload the database with the required number of records, then you
don't need to count. You can just delete the oldest record. You can use
a sequence or timestamp to order the records for this purpose. If you
don't want to see the dummy records, you can flag them somehow. But if you reach
the limit in a short time you might not really care about that enough
to add the extra overhead.
Frankly, I would like to know first why you want to do this. Unless there are
good practical reasons, I would not recommend this approach at all. Can
you tell us why you want to do this?
It is for an advanced syslog server product we are currently developing.
The very basic idea is to feed all syslog messages into a DB and allow
easy monitoring and even correlation. We use Postgres as our DB backend;
in big environments the machine would be hit with dozens of syslog
messages a second, and the messaging tables could grow out of control
pretty soon (we are talking up to 10mil messages daily).
We do have "cleansing" logic built in which runs at certain times and
could delete the records over the limit. So FIFO is not a requirement; it was
actually rather a theoretical question.
Regards
Kirill
--
Kirill Ponazdyr
Technical Director
Codeangels Solutions GmbH
Tel: +41 (0)43 844 90 10
Fax: +41 (0)43 844 90 12
Web: www.codeangels.com
Ouch, this means that for every insert we would have to trigger a
procedure which will:
COUNT
IF > Limit
DELETE OLDEST
This would be pretty damn resource-intensive on a table with millions
of records, would it not?
You can keep the count in a table on the side, and have it updated by
the same trigger (after insert or delete)...
Dima
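A sketch of that side table and trigger (table and function names are illustrative); note that it still updates one row per insert, so the dead-row concern raised earlier in the thread applies to the count table as well:

```sql
-- Single-row side table holding the current row count.
CREATE TABLE log_count (n bigint NOT NULL);
INSERT INTO log_count VALUES (0);

-- Keep the count in step with inserts and deletes on the main table.
CREATE FUNCTION log_count_upd() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE log_count SET n = n + 1;
    ELSE
        UPDATE log_count SET n = n - 1;
    END IF;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER log_count_trg
    AFTER INSERT OR DELETE ON log
    FOR EACH ROW EXECUTE PROCEDURE log_count_upd();
```

Reading the limit check then becomes a single-row SELECT on log_count instead of a count(*) over the whole table.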
OUCH!!! Do a COUNT(*) on a 10M row table???? Forget it.
Here is a simple solution.
Add a SERIAL field to the table.
Set the maximum value for that sequence to the number of records you
want to keep.
Use a before insert trigger to replace the insert with an update if the
key already exists.
No need for a cron.
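A sketch of that before-insert trigger (names and the cap of 250000 are illustrative, not from the thread):

```sql
-- Cycling sequence capped at the desired row limit.
CREATE SEQUENCE log_seq MAXVALUE 250000 CYCLE;

-- Replace the insert with an update once the sequence has wrapped.
CREATE FUNCTION log_recycle() RETURNS trigger AS '
BEGIN
    NEW.id := nextval(''log_seq'');
    UPDATE log SET msg = NEW.msg WHERE id = NEW.id;
    IF FOUND THEN
        RETURN NULL;  -- suppress the insert; the update replaced the oldest row
    END IF;
    RETURN NEW;       -- id not yet used: let the insert proceed
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER log_recycle_trg
    BEFORE INSERT ON log
    FOR EACH ROW EXECUTE PROCEDURE log_recycle();
```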
Good idea!
All,
There's a database format called "rrd", or round robin database, which was
specifically designed for wrap-around data storage. Since you are trying to
store 10mil+ syslog messages this might not be the right tool itself; I'm just
mentioning it because perhaps the way the rrd keeps track of wrap-around
might be a good way to implement this in postgres.
Sincerely,
Leon Oosterwijk
On Wed, Jul 16, 2003 at 01:33:35PM -0500, Leon Oosterwijk wrote:
store 10mil+ syslog messages this might not be the right tool. I'm just
mentioning it because perhaps the way the rrd keeps track of wrap-around
might be a good way to implement this in postgres.
Hmm. Using the cycling feature of a sequence, couldn't you create a
trigger which either inserts (if, e.g., the value from the sequence is
not there) or updates (if the value is there)? I'm
not sure how to do it efficiently, but I haven't thought about it
very much.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
It would be great if this could be achieved by the RDBMS engine itself.
Does Postgres support this kind of table? And if not, what would be the most
elegant solution to achieve our goal, in your opinion?
First question: no. :)
Second question: two ideas.
I. First idea:
Assuming that it is okay for the tables to run over a little bit, the best
way I can think of is to run a periodic cronjob. Assuming you have an
indexed column "id" which increments on new inserts:
DELETE FROM bigtable WHERE id <
(SELECT id FROM bigtable ORDER BY id DESC LIMIT 1 OFFSET 100000);
If you don't have a unique incrementing field but have a timestamp:
DELETE FROM bigtable WHERE oid <
(SELECT oid FROM bigtable ORDER BY in_time DESC LIMIT 1 OFFSET 100000);
This assumes your oid is not about to roll over, and that you have an
index on the oid column, of course.
Running a VACUUM immediately after the DELETE is highly advised.
Even running this job every 10 minutes would probably not be too bad - if
the table is not "full", no harm is done. Deleting and vacuuming for the
rows that have built up in the previous 10 minutes will also not
be too expensive. If it is, decrease the time. Even a cronjob running once
a minute is probably better than a trigger that fires upon every insert,
if the table is very active and getting hundreds or thousands of inserts
per minute.
II. Second idea:
Prepopulate the table, use a wrapping sequence, and a timestamp.
CREATE SEQUENCE mmlog_seq MINVALUE 1 MAXVALUE 500 CYCLE;
CREATE TABLE mmlog (
id INTEGER NOT NULL,
message VARCHAR,
ctime TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE INDEX mmlog_id ON mmlog(id);
CREATE INDEX mmlog_ctime ON mmlog(ctime);
BEGIN;
-- Run this next command exactly 500 times:
INSERT INTO mmlog (id,message,ctime) VALUES (nextval('mmlog_seq'),'',now());
COMMIT;
REVOKE INSERT ON mmlog FROM PUBLIC;
To add a new row, run this:
UPDATE mmlog SET message=?, ctime=now() WHERE id = (SELECT nextval('mmlog_seq'));
Voila! A self-limiting table: INSERTING is not allowed, and the oldest record
is always overwritten. Remember to vacuum of course. And ORDER BY on the
ctime field to keep things in the proper order.
--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200307161435
store 10mil+ syslog messages this might not be the right tool. I'm
just mentioning it because perhaps the way the rrd keeps track
of wrap-around might be a good way to implement this in postgres.
Hmm. Using the cycling feature of a sequence, couldn't you create a
trigger which either inserts (if, e.g., the value from the sequence is
not there) or updates (if the value is there)? I'm
not sure how to do it efficiently, but I haven't thought about it
very much.
I use this very approach.
CREATE SEQUENCE syslog_id_seq INCREMENT 1 MINVALUE 1 MAXVALUE 250000 CYCLE;
CREATE TABLE syslog (
id INT NOT NULL,
msg TEXT NOT NULL
);
CREATE UNIQUE INDEX syslog_id_udx ON syslog(id);
CREATE FUNCTION syslog_ins(TEXT)
RETURNS INT
EXTERNAL SECURITY DEFINER
AS '
DECLARE
a_msg ALIAS FOR $1;
v_id syslog.id%TYPE;
BEGIN
v_id := NEXTVAL(''syslog_id_seq''::TEXT);
PERFORM TRUE FROM syslog WHERE id = v_id;
IF FOUND THEN
UPDATE syslog SET msg = a_msg WHERE id = v_id;
ELSE
INSERT INTO syslog (id,msg) VALUES (v_id, a_msg);
END IF;
RETURN v_id;
END;
' LANGUAGE 'plpgsql';
Though this is the inefficient way of doing it. If you wanted to be
really slick about it and incur some upfront disk space, populate the
table with 250000 rows of bogus data (empty strings), then use the
following instead to save yourself a SELECT (the SELECT is only of use
for the first 250000 syslog msgs; after the sequence wraps, the row's
existence is a given):
CREATE FUNCTION syslog_ins(TEXT)
RETURNS INT
EXTERNAL SECURITY DEFINER
AS '
DECLARE
a_msg ALIAS FOR $1;
v_id syslog.id%TYPE;
BEGIN
v_id := NEXTVAL(''syslog_id_seq''::TEXT);
UPDATE syslog SET msg = a_msg WHERE id = v_id;
RETURN v_id;
END;
' LANGUAGE 'plpgsql';
You may want to add a time component to the table/function, but I'll
leave that as an exercise to the reader. Just make sure you're
VACUUMing on a regular basis. :) -sc
--
Sean Chittenden