triggers, views and rules (not instead)

Started by Zeugswetter Andreas SARZ, almost 28 years ago. 9 messages.
#1 Zeugswetter Andreas SARZ
Andreas.Zeugswetter@telecom.at

Jan wrote:

The only things not working for copy are rewrite rules. But I
think we should restrict rules to the view handling in the
future and move forward by implementing a pure and really
powerful procedural language.

Hm, it looks like you are not really a fan of the rewrite system,
even though you seem to have the most insight into these matters. I wonder why?

Why I like the rewrite system is:
1. select rewrite -- select trigger would be no good (optimizer)
2. The client can be really dumb, like MS Access or some other
standard ODBC tool
which does not know anything about funcs procs and the like
(even without using passthrough)
3. it is a lot more powerful than views
4. it allows the optimizer to get involved (this is where triggers
fail per definition)
5. once understood it is very easy to use
easier than a trigger with a C stored procedure, at least

I guess if triggers could also trigger simple select statements, I could do
most of what I want using triggers except of course the select stuff.
But as I said I like the rules system very much, especially after your
recent
fixes Jan :-) So please stick to supporting all 3: triggers, views and
rules. Wow :-)

Andreas

#2 Jan Wieck
jwieck@debis.com
In reply to: Zeugswetter Andreas SARZ (#1)
Re: [HACKERS] triggers, views and rules (not instead)

Andreas wrote:

Jan wrote:

The only things not working for copy are rewrite rules. But I
think we should restrict rules to the view handling in the
future and move forward by implementing a pure and really
powerful procedural language.

Hm, it looks like you are not really a fan of the rewrite system,
even though you seem to have the most insight into these matters. I wonder why?

Confusing, eh? Well, I know a lot about the internals of the
Postgres rule system and due to this I know where the limits
are. Especially in the case of qualifications it sometimes
gets totally confused about what to compare against what. Try
to add a delete rule on a view that is simply a select * from
another table and then delete some tuples :-)
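
For illustration, the kind of setup meant here might look like this
(names invented, and written with the OLD keyword that later replaced
CURRENT):

CREATE VIEW v AS SELECT * FROM base_tab;

CREATE RULE v_del AS ON DELETE TO v
    DO INSTEAD DELETE FROM base_tab WHERE id = OLD.id;

DELETE FROM v WHERE id = 42;  -- the rewriter of that era got the quals wrong here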

Why I like the rewrite system is:
1. select rewrite -- select trigger would be no good (optimizer)

That's exactly what is done when you create a view. Postgres
creates a regular table (look at pg_class and into the
database directory) and then sets up a relation-level INSTEAD
rewrite rule on select.
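
To make that concrete, a minimal sketch (view and table names invented):

-- Creating a view ...
CREATE VIEW active_accounts AS
    SELECT * FROM accounts WHERE active;

-- ... is internally just an empty relation plus a relation-level INSTEAD
-- rewrite rule on SELECT, roughly:
--
--     CREATE RULE "_RETURN" AS ON SELECT TO active_accounts
--         DO INSTEAD SELECT * FROM accounts WHERE active;
--
-- so every SELECT against the view is rewritten before the optimizer runs,
-- which is why the optimizer can still do its job.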

2. The client can be really dumb, like MS Access or some other
standard ODBC tool
which does not know anything about funcs procs and the like
(even without using passthrough)

Yupp - the client doesn't have to know why and how and where
the data goes and comes from. But that's true in any case - a
trigger for each row on insert can do anything different and
push the data wherever it wants.

3. it is a lot more powerful than views

As said - views are only one special rule case in Postgres.

4. it allows the optimizer to get involved (this is where triggers
fail per definition)
5. once understood it is very easy to use
easier than a trigger with a C stored procedure, at least

Optimizing again and again. If the rules aren't instead, the
querytree gets additional queries appended for every rule.
Have a table field that references an entry in another table
and this entry should keep a refcount. So on update you must
decrease the refcount for the old ref and increase it for the
new one. You create two rules, so the UPDATE will result in 1
scan and 2 nestloops with scans inside - really optimized if
the referenced value doesn't change. And don't think that a
rule qual of NEW != CURRENT might help - that will result in
2 mergejoins where the scanned tuples are compared.

BTW, this sample doesn't work currently because the rules'
queries are appended at the end of the querytree, thus the
decrement scan, having the same qual, will not find the old
tuple at all because it's already outdated
(command_counter_increment between processing the queries).
Referencing CURRENT in a rule is not what most people think
it is.
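
Spelled out as rules, the refcount example might look like this (schema
and names are purely hypothetical, and OLD is used where the CURRENT
keyword stood back then):

-- item(ref) points at lookup(id); lookup.refcount should count the references.
CREATE RULE item_ref_dec AS ON UPDATE TO item
    DO UPDATE lookup SET refcount = refcount - 1 WHERE id = OLD.ref;

CREATE RULE item_ref_inc AS ON UPDATE TO item
    DO UPDATE lookup SET refcount = refcount + 1 WHERE id = NEW.ref;

-- Neither rule is INSTEAD, so both actions are appended to the querytree
-- of every UPDATE on item, and each one joins against item again - the
-- extra nestloop scans described above - even when ref doesn't change.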

The old 4.2 postgres had a second, instance-level rule system
(prs2 stubs) that fired the rules' actions when the old tuple
and the new projected tuple were actually at hand. There you
could also do things like 'UPDATE NEW SET a = 4' that really
modified the in-memory tuple in the executor's expression
context. Who the hell removed all that? It was so nice :-(

A really simple-to-write trigger can compare old != new and
only then send down the other two queries. This time they
won't be nestloops, they are simple scans. And the trigger
can arrange that the queries it uses are only parsed on its
very first call and store the generated execution plans
permanently for quick execution (look at SPI_prepare).
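
The same refcount maintenance as a trigger, sketched in the PL/pgSQL that
eventually grew out of these plans (it did not exist yet at the time; the
item/lookup schema is the hypothetical one from above):

CREATE FUNCTION item_ref_trig() RETURNS trigger AS $$
BEGIN
    -- only touch the refcounts when the referenced value really changed
    IF NEW.ref IS DISTINCT FROM OLD.ref THEN
        UPDATE lookup SET refcount = refcount - 1 WHERE id = OLD.ref;
        UPDATE lookup SET refcount = refcount + 1 WHERE id = NEW.ref;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_ref_utr
    BEFORE UPDATE ON item
    FOR EACH ROW EXECUTE PROCEDURE item_ref_trig();

The two updates are plain single-table statements, and their plans are
kept around between calls - the same effect SPI_prepare gives a C trigger.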

For the stored C procedures you're totally right. I don't
like the C functions because they require postgres superuser
rights to develop, and thus I created PL/Tcl, where Joe User
can hack around without having complete access to the whole
database (look at src/pl/tcl). And someday after the 6.3
release I'll really start on a plain PL/pgSQL implementation
that would give a normal user the opportunity to create
functions and triggers on a high level. There is light at the
end of the tunnel - hope that it isn't the coming train :-)

I guess if triggers could also trigger simple select statements, I could do
most of what I want using triggers except of course the select stuff.
But as I said I like the rules system very much, especially after your
recent
fixes Jan :-) So please stick to supporting all 3: triggers, views and
rules. Wow :-)

Well - a trigger cannot build a view. The relation underlying
the view doesn't contain any tuples, and a select trigger will
never be fired - at least as long as there is no possibility
to return tuple sets from non-SQL functions. But a trigger can
do things like the pg_hide_passwd stuff much more powerfully.
You could define the trigger so that it checks if the user is
a superuser and overwrites the passwd value only in the case
where he/she isn't. If fired at the right place it would also
work for things like the copy command etc.

We must stay with all 3 features. And I will take a look at
the INSERT ... SELECT view problem really soon as it is a
rule system problem that breaks views. But this is only the
SELECT rewriting part of the rule system which I really like
(optimizable). The other areas (insert, update, delete) of
the rule system are dangerous and I really think a powerful
PL/pgSQL language could make them obsolete.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#3 Zeugswetter Andreas SARZ
Andreas.Zeugswetter@telecom.at
In reply to: Noname (#2)
AW: [HACKERS] triggers, views and rules (not instead)

Since we have so little documentation on the rules, I think we should save
every little word describing them, so could you simply put the following
into a rules.readme? (Undigested is still better than not adding it.)

Why I like the rewrite system is:
1. select rewrite -- select trigger would be no good (optimizer)

That's exactly what is done when you create a view. Postgres
creates a regular table (look at pg_class and into the
database directory) and then sets up a relation-level INSTEAD
rewrite rule on select.

2. The client can be really dumb, like MS Access or some other
standard ODBC tool
which does not know anything about funcs procs and the like
(even without using passthrough)

Yupp - the client doesn't have to know why and how and where
the data goes and comes from. But that's true in any case - a
trigger for each row on insert can do anything different and
push the data wherever it wants.

3. it is a lot more powerful than views

As said - views are only one special rule case in Postgres.

4. it allows the optimizer to get involved (this is where triggers
fail per definition)
5. once understood it is very easy to use
easier than a trigger with a C stored procedure, at least

Optimizing again and again. If the rules aren't instead, the
querytree gets additional queries appended for every rule.
Have a table field that references an entry in another table
and this entry should keep a refcount. So on update you must
decrease the refcount for the old ref and increase it for the
new one. You create two rules, so the UPDATE will result in 1
scan and 2 nestloops with scans inside - really optimized if
the referenced value doesn't change. And don't think that a
rule qual of NEW != CURRENT might help - that will result in
2 mergejoins where the scanned tuples are compared.

I fought that like a windmill; I guess it would be better to kill the
CURRENT keyword with this meaning altogether, since it only has the same
meaning as the tablename itself. I have already crossed it out of my mind
and don't miss anything. I think there should instead be an OLD and NEW
keyword like in triggers:
referencing old as <oldname> new as <newname>
that only reference the tuples in memory.

BTW, this sample doesn't work currently because the rules'
queries are appended at the end of the querytree, thus the
decrement scan, having the same qual, will not find the old
tuple at all because it's already outdated
(command_counter_increment between processing the queries).
Referencing CURRENT in a rule is not what most people think
it is.

The old 4.2 postgres had a second, instance-level rule system
(prs2 stubs) that fired the rules' actions when the old tuple
and the new projected tuple were actually at hand. There you
could also do things like 'UPDATE NEW SET a = 4' that really
modified the in-memory tuple in the executor's expression
context. Who the hell removed all that? It was so nice :-(

Absolutely! I did cry out when that was done, but nobody responded :-(
Well, to be honest, Vadim did respond with the trigger code, which made
me feel comfortable again.

A really simple-to-write trigger can compare old != new and
only then send down the other two queries. This time they
won't be nestloops, they are simple scans. And the trigger
can arrange that the queries it uses are only parsed on its
very first call and store the generated execution plans
permanently for quick execution (look at SPI_prepare).

For the stored C procedures you're totally right. I don't
like the C functions because they require postgres superuser
rights to develop, and thus I created PL/Tcl, where Joe User
can hack around without having complete access to the whole
database (look at src/pl/tcl). And someday after the 6.3
release I'll really start on a plain PL/pgSQL implementation
that would give a normal user the opportunity to create
functions and triggers on a high level. There is light at the
end of the tunnel - hope that it isn't the coming train :-)

I guess if triggers could also trigger simple select statements, I could do
most of what I want using triggers except of course the select stuff.
But as I said I like the rules system very much, especially after your
recent fixes Jan :-) So please stick to supporting all 3: triggers, views
and rules. Wow :-)

Well - a trigger cannot build a view. The relation underlying
the view doesn't contain any tuples, and a select trigger will
never be fired - at least as long as there is no possibility
to return tuple sets from non-SQL functions. But a trigger can
do things like the pg_hide_passwd stuff much more powerfully.
You could define the trigger so that it checks if the user is
a superuser and overwrites the passwd value only in the case
where he/she isn't. If fired at the right place it would also
work for things like the copy command etc.

We must stay with all 3 features. And I will take a look at
the INSERT ... SELECT view problem really soon as it is a
rule system problem that breaks views. But this is only the
SELECT rewriting part of the rule system which I really like
(optimizable). The other areas (insert, update, delete) of
the rule system are dangerous and I really think a powerful
PL/pgSQL language could make them obsolete.

Jan

Ok, to sum it up:
1. We need and want the select part of the rewrite rules.
2. For the insert/update/delete rules the old instance rules system
   was much more appropriate. TODO: dig up the old code
   and merge it with the current trigger implementation;
   it must be pretty much the wanted functionality (it supported SQL).
3. The CURRENT keyword in the i/u/d rewrite rules is stupid
   and should be disabled, destroyed and burned in hell.
4. To stick to the mainstream we should enhance the trigger syntax
   and forget the rule stuff for i/u/d:

create trigger passwd_utr
..........
referencing old as o new as n
for each row (statement, statement, statement, procedure,
...... all PL/pgSQL syntax allowed );
-- with a syntax to modify the new tuple in memory

Andreas

#4 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Zeugswetter Andreas SARZ (#3)
Re: AW: [HACKERS] triggers, views and rules (not instead)

Ok, to sum it up:
1. We need and want the select part of the rewrite rules.
2. For the insert/update/delete rules the old instance rules system
   was much more appropriate. TODO: dig up the old code
   and merge it with the current trigger implementation;
   it must be pretty much the wanted functionality (it supported SQL).
3. The CURRENT keyword in the i/u/d rewrite rules is stupid
   and should be disabled, destroyed and burned in hell.
4. To stick to the mainstream we should enhance the trigger syntax
   and forget the rule stuff for i/u/d:

create trigger passwd_utr
..........
referencing old as o new as n
for each row (statement, statement, statement, procedure,
...... all PL/pgSQL syntax allowed );
-- with a syntax to modify the new tuple in memory

This all sounds good to me. Let's do it soon. I like the removal of
i/u/d rewrite so we can give people something that will work, and not
have all those gray areas of 'it works here, but not here.'

--
Bruce Momjian
maillist@candle.pha.pa.us

#5 Vadim B. Mikheev
vadim@sable.krasnoyarsk.su
In reply to: Zeugswetter Andreas SARZ (#3)
Re: AW: [HACKERS] triggers, views and rules (not instead)

Zeugswetter Andreas SARZ wrote:

Ok, to sum it up:
1. We need and want the select part of the rewrite rules.

Agreed.

2. For the insert/update/delete rules the old instance rules system
   was much more appropriate. TODO: dig up the old code
   and merge it with the current trigger implementation;
   it must be pretty much the wanted functionality (it supported SQL).

??? Old instance rules system was removed by Jolly & Andrew and so
it never supported SQL. I hope that Jan will give us PL/pgSQL soon
and it will be used for triggers, without changing current trigger
implementation...

3. the CURRENT keyword in the i/u/d rewrite rules is stupid
and should be disabled, destroyed and burned in hell

Agreed, if the standard doesn't have it. I know that OLD & NEW are in
the standard, for triggers at least.

4. To stick to the mainstream we should enhance the trigger
syntax, and forget the rule stuff for i/u/d

Yes. Statement level triggers give the same functionality as rewrite
i/u/d rules. We could let them return something special to skip the
user's i/u/d itself - isn't that the same as INSTEAD?

Vadim

#6 Brett McCormick
brett@work.chicken.org
In reply to: Vadim B. Mikheev (#5)
pl/{perl, pgsql} (was Re: AW: [HACKERS] triggers, views and rules (not instead))

On Sun, 22 February 1998, at 18:26:45, Vadim B. Mikheev wrote:

??? Old instance rules system was removed by Jolly & Andrew and so
it never supported SQL. I hope that Jan will give us PL/pgSQL soon
and it will be used for triggers, without changing current trigger
implementation...

Is development being done for PL/pgSQL? What are people's ideas for
this? I've never used a commercial db before, and the free ones don't
usually have a stored PL language. What sort of things are you guys
anticipating? In the writing of PL/perl I've been tempted to give
lots of access to the backend internals from perl (why not, it should
have all the facilities C function programmers have!) What do you think?

Also, as far as argument passing goes: strings & numbers get passed as
perl scalars, and most other types get passed as a Postgres::Type
object (with methods for conversion etc). Right now I've got a switch
block on the type oid and I have many case statements and a few bodies
for these conversions.

The conversions are hard-coded in the .c file (via the case
statements). The only reason any particular input type shows up in
perl any particular way (as a scalar (string/int) or Postgres::Type)
is because there's a hardcoded case statement for it. Of course, the
default is a Postgres::Type. Which means new integer types show up as
a Postgres::Type, which could be considered a bad thing. Right now
part of what I'm doing is checking typbyval and then passing that
type as an integer scalar (excluding selected types that don't make
sense as ints). Maybe the default case should be the type's output
function?

I hope I make sense...

--brett

#7 Jan Wieck
jwieck@debis.com
In reply to: Vadim B. Mikheev (#5)
Re: AW: [HACKERS] triggers, views and rules (not instead)

Vadim wrote:

??? Old instance rules system was removed by Jolly & Andrew and so
it never supported SQL. I hope that Jan will give us PL/pgSQL soon
and it will be used for triggers, without changing current trigger
implementation...

I'll start on PL/pgSQL as soon as the view-aggregate, insert-
select-view and the pg_user-freed-cc_tupdesc problems are
fixed. But due to time I think PL/pgSQL will be an add-on
after 6.3 and I'm very sure it will not require any more
changes to the trigger implementation.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#8 Jan Wieck
jwieck@debis.com
In reply to: Brett McCormick (#6)
Re: pl/{perl, pgsql} (was Re: AW: [HACKERS] triggers, views and rules (not instead))

Brett wrote:

On Sun, 22 February 1998, at 18:26:45, Vadim B. Mikheev wrote:

??? Old instance rules system was removed by Jolly & Andrew and so
it never supported SQL. I hope that Jan will give us PL/pgSQL soon
and it will be used for triggers, without changing current trigger
implementation...

Is development being done for PL/pgSQL? What are people's ideas for
this? I've never used a commercial db before, and the free ones don't
usually have a stored PL language. What sort of things are you guys
anticipating? In the writing of PL/perl I've been tempted to give
lots of access to the backend internals from perl (why not, it should
have all the facilities C function programmers have!) What do you think?

No actual development - I just have something in mind about
how I would implement it. I'll get into details after the
6.3 release. PL/pgSQL will have at least the following
capabilities:

- local variables
- local records
- access to the database over SPI
- control structures (if/else/while/loop)
- elog messages
- triggers can modify the new tuple
- triggers can skip the operation
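
As a purely hypothetical sketch of the last two points, in the PL/pgSQL
syntax that eventually came out of this (table and column names invented):

CREATE FUNCTION guard_accounts() RETURNS trigger AS $$
BEGIN
    IF NEW.name IS NULL OR NEW.name = '' THEN
        RAISE NOTICE 'skipping row without a name';  -- elog-style message
        RETURN NULL;                                 -- skip the operation
    END IF;
    NEW.name := lower(NEW.name);                     -- modify the new tuple
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_itr
    BEFORE INSERT ON accounts
    FOR EACH ROW EXECUTE PROCEDURE guard_accounts();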

Why not hand many backend internals through to a PL? Just to
let ordinary users use the language without breaking
security. The implementation of PL/Tcl uses a safe Tcl
interpreter for the evaluation of the Tcl functions/trigger
procedures. A safe Tcl interpreter has a very limited command
set: no access to the filesystem, no access to networking, no
loading of other packages - so nothing dangerous.

If perl doesn't have such a restricted interpreter facility,
then perl might never become a TRUSTED procedural language
like Tcl is. Remember, if the PL/perl implementation gives
an ordinary user the right just to open a file for writing, a
function, independent of who created it, could damage database
files directly. This MUST be restricted to superusers and
this is the reason why the C language isn't TRUSTED.

Also, as far as argument passing goes: strings & numbers get passed as
perl scalars, and most other types get passed as a Postgres::Type
object (with methods for conversion etc). Right now I've got a switch
block on the type oid and I have many case statements and a few bodies
for these conversions.

The conversions are hard-coded in the .c file (via the case
statements). The only reason any particular input type shows up in
perl any particular way (as a scalar (string/int) or Postgres::Type)
is because there's a hardcoded case statement for it. Of course, the
default is a Postgres::Type. Which means new integer types show up as
a Postgres::Type, which could be considered a bad thing. Right now
part of what I'm doing is checking typbyval and then passing that
type as an integer scalar (excluding selected types that don't make
sense as ints). Maybe the default case should be the type's output
function?

Whatever perl scalars are - I don't know. Tcl is happy with
the string representation of anything on the evaluation level
and Tcl_Eval() leaves a string in the interpreter result. So
I used the registered input/output functions to convert
anything from/into strings to communicate with the
interpreter.

Whatever is given to or returned from a Tcl function in
PL/Tcl will be in the external representation. This is what
the user sees if he selects from a table in psql and what has
to be given on insert/update. So on the PL/Tcl level,
everything is in the external representation as defined by
the type's registered input/output functions. This is the
value format humans deal with best.

I know that the way I chose isn't the performance-optimized
one. But blowing up the function's code by handling binary
data wouldn't be either.

When implementing perl or Tcl as a procedural language, we
must satisfy the needs of perl/Tcl programmers. These
programmers need not have any C skills, but they sure will
know how to read/create a string in the external
representation. For some types (especially for user-created
types) it might be hard to figure out what each bit in the
binary Datum is for if you aren't familiar with C.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #

#9 Brett McCormick
brett@work.chicken.org
In reply to: Noname (#8)
Re: pl/{perl, pgsql} (was Re: AW: [HACKERS] triggers, views and rules (not instead))

Please don't CC the root users (or the mailing list twice) on this.
I've removed them from the headers. Sorry, I don't know how the
addresses got in there.

Brett wrote:

Is development being done for PL/pgSQL? What are people's ideas for
this? I've never used a commercial db before, and the free ones don't
usually have a stored PL language. What sort of things are you guys
anticipating? In the writing of PL/perl I've been tempted to give
lots of access to the backend internals from perl (why not, it should
have all the facilities C function programmers have!) What do you think?