Event Triggers: adding information
Hi,
Please find attached v3 of the patch to add information to event
triggers, including command string. This work has mostly been part of
the original goal and its design is the result of much reviewing here,
in particular from Robert and Tom (rewriting a command before it is done is
called magic, not code, and that's why we have ddl_command_trace now).
Many thanks.
I hope that Robert will be in a position to continue reviewing and
taking care of that patch series. He's clearly interested but really
busy these days, so if another committer wants to see for himself, please
don't be shy!
The current patch provides the following features:
- New events: ddl_command_end and ddl_command_trace
- New information in the TG_VARIABLE:
object name, schema name, object id, object kind,
operation (CREATE|ALTER|DROP), context (GENERATED|TOPLEVEL|…)
normalized command string
- New event filtering on CONTEXT (see after sig for an example)
+ create event trigger regress_event_trigger_end on ddl_command_end
+ when context in ('toplevel', 'generated', 'query', 'subcommand')
+ execute procedure test_event_trigger();
- Documentation of listed features
- pg_dump support of the new features (context support)
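To make the feature list concrete, here is a minimal sketch of a trigger consuming that information, written against this patch's proposed syntax. Caveat: the tg_context, tg_tag and tg_operation variables and the WHEN context filter are specific to this proposal and may well change before anything is committed.

```sql
-- Sketch against this patch proposal only: the tg_* variables and the
-- WHEN context filter come from the patch, not from any released version.
CREATE OR REPLACE FUNCTION snitch() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'event: %, context: %, tag: %, operation: %',
               tg_event, tg_context, tg_tag, tg_operation;
END;
$$;

CREATE EVENT TRIGGER snitch_end ON ddl_command_end
   WHEN context IN ('toplevel', 'generated')
EXECUTE PROCEDURE snitch();
```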
The command string normalization is a necessary feature and accounts for
most of this patch's footprint. The main use case of this feature is in-core
logical replication support, the main non-core users will be the usual
replication suspects (Slony, Londiste, Bucardo) and the audit systems.
We might be able to also support some advanced Extension capabilities if
we get "command diverting" support in the next commit fest, and we could
already deprecate the extension whitelisting code I had to do for 9.1
and 9.2.
As we need to support all of the CREATE, ALTER and DROP cases, I can't see
another way than to work from a very loose notion of the parse tree. Up
to now I've seen 3 cases worth reporting:
- simple commands, where you can rely on the parsetree as is, and that
need no transform stage (create extension, drop object, alter
sequence, text search *, most commands really)
- medium complex commands, which transform the parsetree in some ways
but for which it still is easy to use it directly if you take care
of only using the transformed parse tree (create table, create
domain, create view — oh, the work was already done)
- complex commands where you need to hack them to return the actual
parse-tree-like structure they prepared out of the transformed parse
tree, so that you can get to know what they've just done; that's
only ALTER TABLE really, up to now
So my guess is that it will get much easier from here on, and I will add
some more de-parsing support, with stats, to give us some idea of the effort involved.
I've been told that the maintenance cost is going to be so great as to
limit our ability to continue adding features to existing DDLs and
coming up with new ones. I have two answers: first, I don't suppose
we're going to include logical replication without DDL support in core
and I don't see another realistic way to implement it, second, let's
have some data to think about the case at hand.
For example, I added CREATE SCHEMA objectid and command normalisation
and here's the footprint:
git diff --stat
src/backend/commands/schemacmds.c | 6 ++++--
src/backend/tcop/utility.c | 4 ++--
src/backend/utils/adt/ddl_rewrite.c | 31 +++++++++++++++++++++++++++++++
src/include/commands/schemacmds.h | 2 +-
4 files changed, 38 insertions(+), 5 deletions(-)
For the whole DefineStmt command set (CREATE AGGREGATE, OPERATOR, TYPE,
TEXT SEARCH *, COLLATION), we're talking about:
11 files changed, 186 insertions(+), 70 deletions(-)
95 lines of those in ddl_rewrite.c
For the normalisation of the CREATE CONVERSION command string, this time
it's
4 files changed, 40 insertions(+), 5 deletions(-)
And CREATE DOMAIN, which is more involved as you need to take care of
rewriting DEFAULT and CHECK constraints, we have
4 files changed, 163 insertions(+), 3 deletions(-)
The other commands I need to be working on if we want more data are
ALTER kind of commands, and I've already worked on the most complex one
of those, namely ALTER TABLE. I don't expect most DROP commands to offer
any difficulty here, as I already have support for DropStmt.
The question for DropStmt is what to do about object-specific
information when a single command can target any number of objects.
The current patch fills in information for the "main" target, which
would be the first object in the sequence, but that's a very weak
position and I would easily agree to just leave the fields NULL in that
case.
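To illustrate the DropStmt question, a single statement can name several objects at once (the table names here are made up):

```sql
-- One command, three targets: which one is "the" object to report?
DROP TABLE foo, bar, baz CASCADE;
```

With the current patch the event trigger would see foo as the main target; leaving the object fields NULL would at least avoid implying that the command touched only one object.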
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
A mildly interesting example that you can have now:
create schema baz
authorization dim
create table distributors
(did serial primary key,
name varchar(40),
f2 text check (upper(f2) = f2),
unique(name) with (fillfactor=70)
)
with (fillfactor=70);
I added the blank lines to help the reader in that console paste:
NOTICE: snitch event: ddl_command_end, context: GENERATED, tag: CREATE SEQUENCE, operation: CREATE, type: SEQUENCE
NOTICE: oid: 41633, schema: baz, name: distributors_did_seq
NOTICE: command: CREATE SEQUENCE baz.distributors_did_seq;
NOTICE: snitch event: ddl_command_end, context: SUBCOMMAND, tag: CREATE TABLE, operation: CREATE, type: TABLE
NOTICE: oid: 41635, schema: baz, name: distributors
NOTICE: command: CREATE TABLE baz.distributors (did integer, name pg_catalog.varchar, f2 text, CHECK ((upper(f2) = f2))) WITH (fillfactor=70);
NOTICE: snitch event: ddl_command_end, context: GENERATED, tag: CREATE INDEX, operation: CREATE, type: INDEX
NOTICE: oid: 41643, schema: baz, name: distributors_pkey
NOTICE: command: CREATE UNIQUE INDEX distributors_pkey ON baz.distributors USING btree (did);
NOTICE: snitch event: ddl_command_end, context: GENERATED, tag: CREATE INDEX, operation: CREATE, type: INDEX
NOTICE: oid: 41645, schema: baz, name: distributors_name_key
NOTICE: command: CREATE UNIQUE INDEX distributors_name_key ON baz.distributors USING btree (name) WITH (fillfactor=70);
NOTICE: snitch event: ddl_command_end, context: GENERATED, tag: ALTER SEQUENCE, operation: ALTER, type: SEQUENCE
NOTICE: oid: 41633, schema: baz, name: distributors_did_seq
NOTICE: command: ALTER SEQUENCE baz.distributors_did_seq OWNED BY baz.distributors.did;
NOTICE: snitch event: ddl_command_end, context: TOPLEVEL, tag: CREATE SCHEMA, operation: CREATE, type: SCHEMA
NOTICE: oid: 41632, schema: <NULL>, name: baz
NOTICE: command: CREATE SCHEMA baz AUTHORIZATION dim;
CREATE SCHEMA
Hi again,
The previously attached patch already needs a rebase since Tom fixed the
single user mode where you're not supposed to access potentially
corrupted system indexes. Please find attached a new version of the
patch that should apply cleanly to master (I came to trust git).
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
The current patch provides the following features:
- New events: ddl_command_end and ddl_command_trace
- New information in the TG_VARIABLE:
object name, schema name, object id, object kind,
operation (CREATE|ALTER|DROP), context (GENERATED|TOPLEVEL|…)
normalized command string
- New event filtering on CONTEXT (see after sig for an example)
+ create event trigger regress_event_trigger_end on ddl_command_end
+ when context in ('toplevel', 'generated', 'query', 'subcommand')
+ execute procedure test_event_trigger();
- Documentation of listed features
- pg_dump support of the new features (context support)
Still the same, plus some more command rewriting support so that I have
some more data points to offer about the code maintenance burden we all
want to limit on that feature.
For example, I added CREATE SCHEMA objectid and command normalisation
and here's the footprint:
4 files changed, 38 insertions(+), 5 deletions(-)
For the whole DefineStmt command set (CREATE AGGREGATE, OPERATOR, TYPE,
TEXT SEARCH *, COLLATION), we're talking about:
11 files changed, 186 insertions(+), 70 deletions(-)
95 lines of those in ddl_rewrite.c
For the normalisation of the CREATE CONVERSION command string, this time it's
4 files changed, 40 insertions(+), 5 deletions(-)
And CREATE DOMAIN, which is more involved as you need to take care of
rewriting DEFAULT and CHECK constraints, we have
4 files changed, 163 insertions(+), 3 deletions(-)
The whole set of RenameStmt commands:
32 files changed, 382 insertions(+), 119 deletions(-)
Then after that, sharing a lot of the previous code, support for both
AlterOwnerStmt and AlterObjectSchemaStmt rewriting:
1 file changed, 102 insertions(+), 17 deletions(-)
As you can guess I separated away the EventTriggerTargetOid tracking:
20 files changed, 96 insertions(+), 57 deletions(-)
Again, those data points are offered without comment; they're only meant
to give a better idea of what we are talking about as the cost of
maintaining the new command normalisation feature that's brewing in this
patch.
I would like us to commit to that idea and its current implementation,
and to include the set of commands I already had time to prepare in the
current submission, knowing that I will continue adding command support
up to the next commitfest with the goal of having full coverage.
I will also be working on more features (complete PL support, some kind
of command diverting, a "table_rewrite" event, some ready-to-use event
trigger functions).
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
event_trigger_infos.4.patch.gz (application/octet-stream)
On Wed, Dec 12, 2012 at 4:47 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
The previously attached patch already needs a rebase since Tom fixed the
single user mode where you're not supposed to access potentially
corrupted system indexes. Please find attached a new version of the
patch that should apply cleanly to master (I came to trust git).
Sorry for the delay - I'm looking at this now.
My first observation (a.k.a. gripe) is this patch touches an awful lot
of code all over the backend. We just did a big old refactoring to
try to get all the rename and alter owner commands to go through the
same code path, but if this patch is any indication it has not done us
a whole lot of good. The whole idea of all that work is that when
people wanted to make systematic changes to those operations (like
involving sepgsql, or returning the OID) there would not be 47 places
to change. Apparently, we aren't there yet. Maybe we need some more
refactoring of that stuff before tackling this?
Does anyone object to the idea of making every command that creates a
new SQL object return the OID of an object, and similarly for RENAME /
ALTER TO? If so, maybe we ought to go through and do those things
first, as separate patches, and then rebase this over those changes
for simplicity. I can probably do that real soon now, based on this
patch, if there are no objections, but I don't want to do the work and
then have someone say, ack, what have you done?
I'm basically implacably opposed to the ddl_rewrite.c part of this
patch. I think that the burden of maintaining reverse-parsing code
for all the DDL we support is an unacceptable level of maintenance
overhead for this feature. It essentially means that every future
change to gram.y that affects any of the supported commands will
require a compensating change to ddl_rewrite.c. That is a whole lot
of work and it is almost guaranteed that future patch authors will
sometimes fail to do it correctly. At an absolute bare minimum I
think we would need some kind of comprehensive test suite for this
functionality, as opposed to the less than 60 lines of new test cases
this patch adds which completely fail to test this code in any way at
all, but really I think we should just not have it. It WILL get
broken (if it isn't already) and it WILL slow down future development
of SQL features. It also WILL have edge cases where it doesn't work
properly, such as where the decision of the actual index name to use
is only decided at execution time and cannot be inferred from the
parse tree. I know that you feel that all of these are tolerable
costs, but I 100% disagree. I predict that if we commit this, it will
become a never-ending and irrecoverable font of trouble.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Hi,
Robert Haas <robertmhaas@gmail.com> writes:
Sorry for the delay - I'm looking at this now.
Thanks very much!
My first observation (a.k.a. gripe) is this patch touches an awful lot
of code all over the backend. We just did a big old refactoring to
try to get all the rename and alter owner commands to go through the
same code path, but if this patch is any indication it has not done us
a whole lot of good. The whole idea of all that work is that when
people wanted to make systematic changes to those operations (like
involving sepgsql, or returning the OID) there would not be 47 places
to change. Apparently, we aren't there yet. Maybe we need some more
refactoring of that stuff before tackling this?
It's hard to devise exactly what kind of refactoring we need to do
before trying to write a patch that benefits from it, in my experience.
In some cases the refactoring will make things more complex, not less.
Does anyone object to the idea of making every command that creates a
new SQL object return the OID of an object, and similarly for RENAME /
ALTER TO? If so, maybe we ought to go through and do those things
first, as separate patches, and then rebase this over those changes
for simplicity. I can probably do that real soon now, based on this
patch, if there are no objections, but I don't want to do the work and
then have someone say, ack, what have you done?
That's a good idea. I only started quite late in that patch to work on
the ObjectID piece of information, that's why I didn't split it before
doing so.
We might want to commit the other parts of the new infrastructure and
information first then get back on ObjectID and command string, what do
you think? Do you want me to prepare a patch that way, or would you
rather hack your way around the ObjectID APIs then I would rebase the
current patch?
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I'm basically implacably opposed to the ddl_rewrite.c part of this
patch. I think that the burden of maintaining reverse-parsing code
for all the DDL we support is an unacceptable level of maintenance
overhead for this feature. It essentially means that every future
I hear you. That feature is still required for any automatic consumption
of the command string because you want to resolve schema names, index
and constraint names, type name lookups and some more.
I also think that this feature is not optional if in-core logical
replication is to support DDL at all. What alternative do you propose?
change to gram.y that affects any of the supported commands will
require a compensating change to ddl_rewrite.c. That is a whole lot
of work and it is almost guaranteed that future patch authors will
sometimes fail to do it correctly. At an absolute bare minimum I
think we would need some kind of comprehensive test suite for this
functionality, as opposed to the less than 60 lines of new test cases
this patch adds which completely fail to test this code in any way at
all, but really I think we should just not have it. It WILL get
I had a more complete test suite in earlier rounds and you opposed it
for good reasons. I'm ok to revisit that now, and I think the test case
should look like this:
- install an auditing command trigger that will insert any DDL command
string into a table with a sequence ordering
- select * from audit
- \d… of all objects created
- drop schema cascade
- DO $$ loop for sql in select command from audit do execute sql …
- \d… of all objects created again
Then any whacking of the grammar should alter the output here and any
case that's not supported will fail too.
We might be able to have a better way of testing the feature here, but I
tried to stay within the realm of what I know pg_regress is able to do.
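A sketch of that test case, under the patch's proposed API. The audit table, the function names, and the tg_command variable are all assumptions on my part, not anything committed:

```sql
-- Record every normalized command string, in order.
CREATE TABLE audit(id serial PRIMARY KEY, command text);

CREATE OR REPLACE FUNCTION log_command() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
  INSERT INTO audit(command) VALUES (tg_command);
END;
$$;

CREATE EVENT TRIGGER audit_ddl ON ddl_command_trace
EXECUTE PROCEDURE log_command();

-- ... run the DDL under test, \d the results, DROP SCHEMA ... CASCADE,
-- then replay from the audit table and \d everything again:
DO $$
DECLARE sql text;
BEGIN
  FOR sql IN SELECT command FROM audit ORDER BY id LOOP
    EXECUTE sql;
  END LOOP;
END;
$$;
```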
broken (if it isn't already) and it WILL slow down future development
of SQL features. It also WILL have edge cases where it doesn't work
properly, such as where the decision of the actual index name to use
is only decided at execution time and cannot be inferred from the
parse tree. I know that you feel that all of these are tolerable
The way to solve that, I think, as Tom said, is to only rewrite the
command string when the objects exist in the catalogs. That's why we now
have ddl_command_trace which is an alias to either start or end
depending on whether we're doing a DROP or a CREATE or ALTER operation.
costs, but I 100% disgaree. I predict that if we commit this, it will
becomes a never-ending and irrecoverable font of trouble.
I'm not saying the cost of doing things that way are easy to swallow,
I'm saying that I don't see any other way to get that feature reliably
enough for in-core logical replication to just work with DDLs. Do you?
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Hi,
On 2012-12-21 09:48:22 -0500, Robert Haas wrote:
On Wed, Dec 12, 2012 at 4:47 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
The previously attached patch already needs a rebase since Tom fixed the
single user mode where you're not supposed to access potentially
corrupted system indexes. Please find attached a new version of the
patch that should apply cleanly to master (I came to trust git).
Sorry for the delay - I'm looking at this now.
My first observation (a.k.a. gripe) is this patch touches an awful lot
of code all over the backend. We just did a big old refactoring to
try to get all the rename and alter owner commands to go through the
same code path, but if this patch is any indication it has not done us
a whole lot of good. The whole idea of all that work is that when
people wanted to make systematic changes to those operations (like
involving sepgsql, or returning the OID) there would not be 47 places
to change. Apparently, we aren't there yet. Maybe we need some more
refactoring of that stuff before tackling this?
It's hard to do such refactorings without very concrete use cases, so I
think it's not unlikely that this patch points out some things that can
be committed independently.
ISTM that a good part of the "return oid;" changes actually fall into
that category.
Does anyone object to the idea of making every command that creates a
new SQL object return the OID of an object, and similarly for RENAME /
ALTER TO? If so, maybe we ought to go through and do those things
first, as separate patches, and then rebase this over those changes
for simplicity. I can probably do that real soon now, based on this
patch, if there are no objections, but I don't want to do the work and
then have someone say, ack, what have you done?
FWIW I am all for it.
I'm basically implacably opposed to the ddl_rewrite.c part of this
patch. I think that the burden of maintaining reverse-parsing code
for all the DDL we support is an unacceptable level of maintenance
overhead for this feature. It essentially means that every future
change to gram.y that affects any of the supported commands will
require a compensating change to ddl_rewrite.c. That is a whole lot
of work and it is almost guaranteed that future patch authors will
sometimes fail to do it correctly.
That's - unsurprisingly - where I have a different opinion. ruleutils.c
does that for normal SQL, and while DDL is a bit bigger in the grammar
than the DQL support, I would say the data structures of the latter
are far more complex than those for DDL.
If you look at what changes were made to ruleutils.c in the last years
the rate of bugs isn't, as far as I can see, higher than in other areas
of the code.
The ruleutils.c precedent also means that patch authors *have* to be
aware of it already. Not too many features get introduced that only
change DDL.
At an absolute bare minimum I
think we would need some kind of comprehensive test suite for this
functionality, as opposed to the less than 60 lines of new test cases
this patch adds which completely fail to test this code in any way at
all, but really I think we should just not have it.
Absolutely agreed.
It WILL get
broken (if it isn't already) and it WILL slow down future development
of SQL features. It also WILL have edge cases where it doesn't work
properly, such as where the decision of the actual index name to use
is only decided at execution time and cannot be inferred from the
parse tree.
That obviously needs to be solved, but I think the idea of invoking this
command trigger whenever it's appropriate (which I think was actually
your idea?) should take care of most of the problems around this.
I know that you feel that all of these are tolerable costs, but I 100%
disgaree.
I think that missing DDL support is one of the major weaknesses of all
potential "logical" replication solutions, be it Slony, Londiste, BDR,
or even something that some day may be in core.
I just don't see any more realistic way to fix that besides supporting
something like this.
I predict that if we commit this, it will
becomes a never-ending and irrecoverable font of trouble.
Why? It's really not that different from what's done currently.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Dec 21, 2012 at 11:35 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
It's hard to devise exactly what kind of refactoring we need to do
before trying to write a patch that benefits from it, in my experience.
In some cases the refactoring will make things more complex, not less.
Perhaps. I have a feeling there's some more simplification that can
be done here, somehow.
That's a good idea. I only started quite late in that patch to work on
the ObjectID piece of information, that's why I didn't split it before
doing so.
That's fine. I've committed that part. I think the remainder of the
patch is really several different features that ought to be split
apart and considered independently. We may want some but not others,
or some may be ready to go in but not others.
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I shall rely on you to provide those bricks which are still missing.
change to gram.y that affects any of the supported commands will
require a compensating change to ddl_rewrite.c. That is a whole lot
of work and it is almost guaranteed that future patch authors will
sometimes fail to do it correctly. At an absolute bare minimum I
think we would need some kind of comprehensive test suite for this
functionality, as opposed to the less than 60 lines of new test cases
this patch adds which completely fail to test this code in any way at
all, but really I think we should just not have it. It WILL get
I had a more complete test suite in earlier rounds and you opposed it
for good reasons. I'm ok to revisit that now, and I think the test case
should look like this:
- install an auditing command trigger that will insert any DDL command
string into a table with a sequence ordering
- select * from audit
- \d… of all objects created
- drop schema cascade
- DO $$ loop for sql in select command from audit do execute sql …
- \d… of all objects created again
Then any whacking of the grammar should alter the output here and any
case that's not supported will fail too.
We might be able to have a better way of testing the feature here, but I
tried to stay within the realm of what I know pg_regress is able to do.
I was thinking that we might need to go beyond what pg_regress can do
in this case. For example, one idea would be to install an audit
trigger that records all the DDL that happens during the regression
tests. Then, afterwards, replay that DDL into a new database. Then
do a schema-only dump of the old and new databases and diff the dump
files. That's a little wacky by current community standards but FWIW
EDB has a bunch of internal tests that we run to check our proprietary
stuff; some of them work along these lines and it's pretty effective
at shaking out bugs.
In this particular case, it would significantly reduce the maintenance
burden of putting a proper test framework in place, because we can
hope that the regression tests create (and leave around) at least one
object of any given type, and that any new DDL features will be
accompanied by similar regression tests. We already rely on the
regression database for pg_upgrade testing, so if it's not complete,
it impacts pg_upgrade test coverage as well.
broken (if it isn't already) and it WILL slow down future development
of SQL features. It also WILL have edge cases where it doesn't work
properly, such as where the decision of the actual index name to use
is only decided at execution time and cannot be inferred from the
parse tree. I know that you feel that all of these are tolerable
The way to solve that, I think, as Tom said, is to only rewrite the
command string when the objects exist in the catalogs. That's why we now
have ddl_command_trace which is an alias to either start or end
depending on whether we're doing a DROP or a CREATE or ALTER operation.
Hmm. I have to study that more, maybe. I certainly agree that if you
can look at the catalogs, you should be able to reliably reconstruct
what happened - which isn't possible just looking at the parse tree.
However, it feels weird to me to do something that's partly based on
the parse tree and partly based on the catalog contents. Moreover,
the current pre-create hook isn't the right place to gather
information from the catalogs anyway as it happens before locks have
been taken.
Now, there's another thing that is bothering me here, too. How
complete is the support you've implemented in this patch? Does it
cover ALL CREATE/ALTER/DROP commands, including all options to and all
variants of those commands? Because while I'm not very sanguine about
doing this at all, it somehow feels like doing it only halfway would
be even worse.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
That's a good idea. I only started quite late in that patch to work on
the ObjectID piece of information, that's why I didn't split it before
doing so.
That's fine. I've committed that part. I think the remainder of the
patch is really several different features that ought to be split
apart and considered independently. We may want some but not others,
or some may be ready to go in but not others.
Thank you for this partial commit, and to Simon and Andres for filling
in the gaps. I should mention that the missing header parts were all in
my patch, and that hacking on headers is proving surprisingly tricky.
I've been having several rounds of problems with the gcc tool chain when
modifying headers. The worst case has been a successful compile followed
by random errors. Then gdb was not giving any clue (botched memory when
returned from a function, palloc'ed memory not freed yet). After
spending more time on it than I care to admit, I did a `make
maintainer-clean`, built again, and the problem disappeared all by
itself.
The gcc tool chain is not my favorite developer environment.
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I shall rely on you to provide those bricks which are still missing.
Cool, will prepare a separate patch implementing that API now, and base
the following patches on top of that. My thinking is to proceed as follows:
- API patch to get Oid from all CREATE/ALTER commands
- API patch for getting the Oid(s) in (before) DROP commands
- Event Trigger Infos patch, depending on the previous ones
- Command String Rewrite and its publishing in Event Triggers
Of course for the DROP parts, we'd better have the Oid and use it before
the command runs to completion, or it will not be usable from the
event trigger (the target is ddl_command_start in that case).
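As a sketch of that constraint, a drop-auditing trigger has to fire on ddl_command_start, while the target still exists in the catalogs. The variable names again follow this patch proposal and are assumptions, not committed API:

```sql
-- At ddl_command_start the object is still present, so its OID and name
-- can be reported; at ddl_command_end it would already be gone.
CREATE OR REPLACE FUNCTION before_drop() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'about to drop % (oid %)', tg_objectname, tg_objectid;
END;
$$;

CREATE EVENT TRIGGER trace_drops ON ddl_command_start
EXECUTE PROCEDURE before_drop();
```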
We might be able to have a better way of testing the feature here, but I
tried to stay within the realm of what I know pg_regress is able to do.
I was thinking that we might need to go beyond what pg_regress can do
in this case. For example, one idea would be to install an audit
trigger that records all the DDL that happens during the regression
tests. Then, afterwards, replay that DDL into a new database. Then
do a schema-only dump of the old and new databases and diff the dump
files. That's a little wacky by current community standards but FWIW
EDB has a bunch of internal tests that we run to check our proprietary
stuff; some of them work along these lines and it's pretty effective
at shaking out bugs.
Are you in a position to contribute those parts to the community?
Implementing them again and then having to support two different
variants of them does not look like the best use of our time.
Hmm. I have to study that more, maybe. I certainly agree that if you
can look at the catalogs, you should be able to reliably reconstruct
what happened - which isn't possible just looking at the parse tree.
Well in fact we're looking at the parse tree once edited to host the
exact data that goes into the catalogs. In most cases that data is
easily available for further consumption, or it's possible to use the
transform API without touching the catalogs again.
In some places I'm doing the same processing twice, which I don't like
very much, but it's only happening if an Event Trigger is going to fire
so I thought we could optimize that later. The aim here has been to limit
the changes to review, it looks like we have enough of them already.
Concerned parts are dealing with raw and cooked expressions for CHECK
and DEFAULT and WHERE clauses (create table, create index, alter table,
create constraint, create domain).
However, it feels weird to me to do something that's partly based on
the parse tree and partly based on the catalog contents. Moreover,
the current pre-create hook isn't the right place to gather
information from the catalogs anyway as it happens before locks have
been taken.
So all the information is derived from the parse tree. The trick is that
this information is only valid once it has hit the catalogs (think of a
CHECK constraint in a CREATE TABLE statement, referencing some column
attribute, same with a DEFAULT expression depending on another col).
The cases where we should certainly prefer looking at the catalog caches
are when we want to know the actual schema where the object has been
created. The current patch is deriving that information mostly from the
parse tree, using the first entry of the search_path when the schema is
not given in the command. It's ok because we are still in the same
transaction and no command has been able to run in between the user
command and our lookup, but I could easily get convinced to look up the
catalogs instead, even more so once we have the OID of the object easily
available in all places.
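The schema-derivation rule described above can be sketched as follows. This is an illustrative model in Python, not the patch's actual C code, and the function name is invented:

```python
def resolve_creation_schema(explicit_schema, search_path):
    """Model of how the patch derives the target schema for a CREATE
    command: use the schema written in the command when present,
    otherwise fall back to the first entry of search_path."""
    if explicit_schema is not None:
        return explicit_schema
    if not search_path:
        raise ValueError("no schema in search_path to create the object in")
    return search_path[0]
```

As the text notes, this is only safe because no other command can run between the user command and this lookup within the same transaction; resolving from the catalogs once the OID is available would sidestep the question entirely.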
Now, there's another thing that is bothering me here, too. How
complete is the support you've implemented in this patch? Does it
cover ALL CREATE/ALTER/DROP commands, including all options to and all
variants of those commands? Because while I'm not very sanguine about
doing this at all, it somehow feels like doing it only halfway would
be even worse.
This patch is implementing complete support for all commands for most of
the information, and partial support for some information that comes
from the parse tree analysis. The goal for this commit fest is to agree
on how to do it, then finish the whole command set for the next one.
As decided at the hackers meeting, now is the time to address anything
controversial, once done we can "return with feedback". So that's my
goal for this commit fest, concerning the Normalized Command String.
Given your review, I think the way forward is pretty clear now, and my
plan is as follows:
- commit Oid related API changes in this commit fest
- commit Event Trigger Information changes in this commit fest
- agree on how to tackle exposing the normalized command string
Merry Christmas!
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I shall rely on you to provide those bricks which are still missing.
Please find attached a patch to change most functions called from
standard_ProcessUtility() to return an Oid (passes `make
maintainer-clean; configure; make install check`). Most of them only,
because it only makes sense for functions touching an object that exists
in the catalogs and has a distinct Oid.
That completes ALTER and CREATE ObjectID support, I did nothing about
the DROP case in the attached. The way I intend to solve that problem is
using get_object_address() and do an extra lookup from within the Event
Trigger code path.
Yes, that's an extra lookup; the only alternative I see would be
another giant refactoring so that each and every DROP support function is
split into a lookup-and-lock part first, then the actual DROP. I don't see
that as a particular win either, as the advantage of doing a separate
(extra) lookup in the ddl_command_start Event Trigger is that the actual
DROP code then will check that the target still exists.
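The trade-off described here, one extra name lookup for the event trigger while the DROP code still performs its own existence check, can be modeled like this. It is a Python sketch over a toy catalog, not the backend code, and all names are invented:

```python
def drop_with_start_trigger(catalog, name, fire_trigger):
    """Toy model: resolve the name once for the ddl_command_start
    trigger, then let the actual DROP do its own lookup, so a target
    that vanished in between raises an error rather than misbehaving."""
    oid = catalog.get(name)            # the extra lookup
    if oid is not None:
        fire_trigger(name, oid)        # user trigger sees name and OID
    if name not in catalog:            # the DROP's own existence check
        raise KeyError('object "%s" does not exist' % name)
    del catalog[name]
```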
The attached patch is known to miss support for ExecCreateTableAs
because I wanted to keep the ball rolling and couldn't figure out where
to find the target relation id in time. If you feel like filling that
gap that would be awesome, if not I will take care of that later, either
in a version v1 of that patch, or in a follow-up patch, or embedded into
the next patch to come, the cleaned up Event Trigger Info patch without
Normalized Command String support. As you see fit.
Merry Christmas All,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
Hi,
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
Cool, will prepare a separate patch implementing that API now, and base
the following patches on top of that. My thinking is to proceed as follows:
- API patch to get Oid from all CREATE/ALTER commands
- API patch for getting the Oid(s) in (before) DROP commands
- Event Trigger Infos patch, depending on the previous ones
- Command String Rewrite and its publishing in Event Triggers
Of course for the DROP parts, we'd better have the Oid and use it before
the command runs through completion or it will not be usable from the
event trigger (the target is ddl_command_start in that case).
Please find attached two versions of the same patch implementing added
information for Event Triggers, without any support for command strings
normalizing.
- event_trigger_infos.5.patch.gz
applies on top of the current master branch
- event_trigger_infos_partial.5.patch.gz
applies on top of oid-api.0.patch.gz applied to the current master
branch
I guess you're going to use the partial one, but if you want to run a
quick test the aggregated patch might make sense.
The way I've been adding schema and object names is to do extra lookups
in the system caches (or the catalogs when we don't have caches), so
that the information is known to be correct in all cases. That means I
had to add a CommandCounterIncrement(), and that the information is only
available when the object actually exists (use ddl_command_trace).
Also, in the DROP case, the extra lookup that happens before the DROP
command is done with a ShareUpdateExclusiveLock, so that the object
doesn't disappear from concurrent activity while we run the User Defined
Trigger. Should we be concerned about deadlock possibilities here when
later the transaction will need to upgrade its Lock?
Given lock upgrade concerns, I guess there's not so much we can do apart
from taking an AccessExclusiveLock from the get go in the case of a DROP
Event Trigger at ddl_command_start, right?
Are you in a position to contribute those parts to the community?
Implementing them again then having to support two different variants of
them does not look like the best use of both our times.
I didn't include new testing in that patch version, waiting for your
answer.
As decided at the hackers meeting, now is the time to address anything
controversial, once done we can "return with feedback". So that's my
goal for this commit fest, concerning the Normalized Command String.
Given your review, I think the way forward is pretty clear now, and my
plan is as follows:
- commit Oid related API changes in this commit fest
- commit Event Trigger Information changes in this commit fest
- agree on how to tackle exposing the normalized command string
Including this email now, the first two items are getting addressed
(pending review then commit, at least that's the goal here). Let's not
forget the third point: we need to reach an agreement on how to address
the Command String Normalisation, too.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On 21 December 2012 14:48, Robert Haas <robertmhaas@gmail.com> wrote:
I predict that if we commit this, it will
become a never-ending and irrecoverable font of trouble.
Event triggers are a new feature with many powerful uses: replication,
online upgrade, data transport.
New features require extra code. To weigh the maintenance
burden, we need to look at whether we want the feature and, if we have
it, how much code it would take to maintain.
pg_dump needs to dump what is there, so dumping "current state"
information is required. That can be accessed from catalog, but
requires significant extra code to bring that all back together into
DDL text. pg_dump is useful for schema backups and pg_upgrade. This
code already exists in core.
Replication needs to see "delta" information, so we can track changes
as they happen. Trying to deduce exact deltas by diffing two "current
state" dumps is close to impossible. The only way is a "delta" stream.
This would be used by replication and on-line upgrade. This code can
be added to core by the patches under discussion.
The code is broadly comparable between "current state" and "delta".
It's basically just assembling the text of DDL statements from
available information. In one case from the catalog, in the other case
from the command parse/execution trees. It's not especially hard to
code, nor will it be fiddly or difficult to understand - just
developing normalised text versions of the change in progress.
Normalisation is important and useful; we spent time in 9.2 doing
things the right way for pg_stat_statements.
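The kind of text assembly being argued for here, producing a normalized DDL string from parse-tree-like information, can be pictured with a drastically simplified sketch. This is Python for illustration only; the actual proposal is C code in a ddl_rewrite module, and the structure of the input is invented:

```python
def rewrite_create_table(stmt):
    """Drastically simplified model of producing a normalized command
    string from parse-tree-like data: schema-qualify the name and lay
    out the column list in a canonical form."""
    cols = ", ".join("%s %s" % (c["name"], c["type"]) for c in stmt["columns"])
    return "CREATE TABLE %s.%s (%s);" % (stmt["schema"], stmt["name"], cols)
```

The real job is much larger, one such rebuild per command variant, which is exactly why the maintenance-burden question dominates this thread.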
In both cases, the code really should live in core. So we can tie the
code directly to each release. Realistically, doing things outside
core means there are often holes or problems that cannot be easily
solved. That seems almost certain to be the case here.
pg_dump maintenance is required but it's not a huge burden. pg_dump
hasn't slowed down the addition of new DDL features. It just requires
a little extra code, mostly very simple.
The PostgreSQL project needs rewrite support for DDL, so it will come
to exist inside, or outside core. Knowing it is there, being able to
rely on that, means we'll make faster progress towards the new
features we are building. The approach proposed here has been
discussed over many years, minuted in detail at cluster hackers
meetings: "DDL Triggers".
My thoughts are that the approach here has been well thought through
and is exactly what we need. It *is* extra code, but not complex code,
and it is code designed to bring us closer to our goals, not to hamper
the development of new DDL statements. With the right tests, the
required changes would be easy to spot and code for. It looks
to me that we already have a good suggestion on exactly how to do that
(from Robert).
So overall, it certainly is worth thinking hard about whether we
should include the rewrite code. Balancing the costs and the benefits,
I think we should include it.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Tue, Dec 25, 2012 at 10:34 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Thank you for this partial commit, and to Simon and Andres for filling
in the gaps. I should mention that the missing header parts were all in
my patch, and that hacking on headers is proving surprisingly tricky.
Apologies to all about the missed bits. I believe that I had them in
my version of the patch as well, but I think the changes ended up
unstaged in my repository rather than folded into the patch. I'm
usually smarter than that, but, obviously, not this time.
I was thinking that we might need to go beyond what pg_regress can do
in this case. For example, one idea would be to install an audit
trigger that records all the DDL that happens during the regression
tests. Then, afterwards, replay that DDL into a new database. Then
do a schema-only dump of the old and new databases and diff the dump
files. That's a little wacky by current community standards but FWIW
EDB has a bunch of internal tests that we run to check our proprietary
stuff; some of them work along these lines and it's pretty effective
at shaking out bugs.
Are you in a position to contribute those parts to the community?
Implementing them again then having to support two different variants of
them does not look like the best use of both our times.
If I thought there were some useful code, I would try to see if we
could contribute it, but I'm pretty sure there isn't. We have a bunch
of pg_dump tests, but their intended purpose is to verify that our
proprietary stuff dumps and reloads properly, which is not what we
need here. The reason I mentioned it was just to make the point that
it is possible to have a large suite of tests that goes beyond what is
possible with pg_regress, and that this can be a useful way of finding
out whether you've broken things. In our case, we actually cheat
pretty hard by using things like \! pg_dump in the .sql files, so it
actually ends up going through pg_regress after all, technically
anyway. I'm not quite sure what to think about that approach from a
community point of view. It certainly ends up reducing the number of
new things that you need to invent in order to do new kinds of
testing, but it's still not as flexible as I would like - for example,
I don't see a way to do the exact kind of testing we need here in a
cross-platform way using that infrastructure.
I'm tempted to propose that we define a Perl-based testing API for
"additional regression tests" that can't be handled using pg_regress.
Like: we put a bunch of Perl modules in a certain directory, and then
write a script that goes through and does:
    require $module;
    $module->regress;
...on each one, based on some schedule file. If a test passes or
fails, it calls some API that we provide to report the result. That
way, whatever tests we write for this effort can potentially become
the core of a larger test suite, and by basing it on Perl we add no
new tool dependencies and can (hopefully) run on both Linux and
Windows. I'm open to other ideas, of course.
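The proposed driver loop can be modeled as follows. This is a Python sketch of the idea only; the actual proposal is a Perl API, and the schedule format and reporting callback are invented here:

```python
def run_regression_schedule(schedule, report):
    """Model of the proposed test driver: walk a schedule of named
    test entry points, run each one, and report pass/fail through a
    provided callback (the real proposal would require Perl modules
    and call their regress() method)."""
    for name, regress in schedule:
        try:
            regress()
        except AssertionError:
            report(name, "fail")
        else:
            report(name, "ok")
```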
The case where we should certainly prefer looking at the catalog caches
are when we want to know the actual schema where the object has been
created. The current patch is deriving that information mostly from the
parse tree, using the first entry of the search_path when the schema is
not given in the command. It's ok because we are still in the same
transaction and no command has been able to run in between the user
command and our lookup, but I could easily get convinced to look up the
catalogs instead, even more so once we have the OID of the object easily
available in all places.
I haven't looked at the code, but it seems to me that there have got
to be edge cases where that will fail. For one thing, we only do
pretty minimal locking on namespaces, so I'm not at all sure that
someone couldn't (for example) rename the old namespace out of the
way, thus causing one search_path lookup to find the first schema in
the search_path and the other lookup to find the second schema in the
search_path. Frankly, I think we have some hazards of that type in
the DDL code as it stands, so it's not like nobody's ever made that
mistake before, but it would be nice not to propagate it.
Really, the only safe way to do these things is to have any given name
lookup done in one and only one place. We're some distance from there,
and I don't know that this patch should be required to carry
the burden of nailing down all the loose ends in that area, but I
think it's reasonable to insist that it shouldn't do anything which
makes it impossible for someone else to nail down those loose ends,
which is still fairly high on my personal to-do list.
The other danger here is - what exactly do you mean by "no command has
been able to run between the user command and our lookup"? Because
you can do stupid things with functions like set_config(). This would
only be safe if no user-provided expressions can possibly get
evaluated between point A and point B, and that strikes me as the sort
of thing that could easily be false unless this is all done VERY close
to the start of processing.
A belated Merry Christmas to you as well.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Dec 25, 2012 at 1:42 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I shall rely on you to provide those bricks which are still missing.
Please find attached a patch to change most functions called from
standard_ProcessUtility() to return an Oid (passes `make
maintainer-clean; configure; make install check`). Most of them only,
because it only makes sense for functions touching an object that exists
in the catalogs and has a distinct Oid.
OK, I committed this.
That completes ALTER and CREATE ObjectID support, I did nothing about
the DROP case in the attached. The way I intend to solve that problem is
using get_object_address() and do an extra lookup from within the Event
Trigger code path.
An extra lookup that occurs always, or only when event triggers are in
use? A re-resolution of the name, or some other kind of lookup?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
If I thought there were some useful code, I would try to see if we
could contribute it, but I'm pretty sure there isn't. We have a bunch
Oh. Too bad.
[... description of the tool ...]
I don't see a way to do the exact kind of testing we need here in a
cross-platform way using that infrastructure.
Well, do we have \! in psql on Windows at all? Does it work in a similar
way to the Unix one? I just don't know.
I'm tempted to propose that we define a Perl-based testing API for
"additional regression tests" that can't be handled using pg_regress.
I know that our build requirements are pretty stable for very good
reasons and I appreciate that. Now, I don't do perl. I'll happily use
whatever you come up with, though.
command and our lookup, but I could easily get convinced to look up the
catalogs instead, even more so once we have the OID of the object easily
available in all places.I haven't looked at the code, but it seems to me that there have got
to be edge cases where that will fail. For one thing, we only do
In between that email and the latest patch I've removed any processing
of the parse tree, because without the ddl_rewrite module it didn't make
much sense anyway, and I still want to be sure we have ObjectID, Name
and Schema as a base information set.
Really, the only safe way to do these things is to have any given name
lookup done in one and only one place. We're some distance from there,
and I don't know that this patch should be required to carry
the burden of nailing down all the loose ends in that area, but I
think it's reasonable to insist that it shouldn't do anything which
makes it impossible for someone else to nail down those loose ends,
which is still fairly high on my personal to-do list.
The current version of the patch now does its own lookup, and only does
so at a time when the object exists in the catalogs. So I think we're
now good on that front in the current Event Triggers patch, even if it's
not refactoring all and any DDL to be able to add hooks in between
lookups and locks, for example.
Also, several cases appear to need an extra lookup anyway, such as ALTER
OBJECT … RENAME TO … or ALTER OBJECT … SET SCHEMA … when you want to
feed information to a ddl_command_end Event Trigger…
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Robert Haas <robertmhaas@gmail.com> writes:
OK, I committed this.
Thanks. Please find attached a rebased patch, v6. I think it's looking
more like what you would expect now:
COLUMNS=70 git diff --stat postgres/master..
doc/src/sgml/event-trigger.sgml | 196 +++-
doc/src/sgml/plpgsql.sgml | 95 +-
doc/src/sgml/ref/create_event_trigger.sgml | 20 +-
src/backend/catalog/objectaddress.c | 22 +-
src/backend/commands/event_trigger.c | 870 ++++++++++++++++-
src/backend/commands/trigger.c | 39 +
src/backend/commands/typecmds.c | 2 +-
src/backend/tcop/utility.c | 385 +++++---
src/backend/utils/cache/evtcache.c | 15 +
src/bin/pg_dump/pg_dump.c | 21 +-
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 9 +-
src/include/catalog/objectaddress.h | 21 +
src/include/catalog/pg_event_trigger.h | 2 +
src/include/commands/event_trigger.h | 47 +-
src/include/commands/trigger.h | 1 +
src/include/utils/evtcache.h | 6 +-
src/pl/plpgsql/src/pl_comp.c | 48 +
src/pl/plpgsql/src/pl_exec.c | 57 +-
src/pl/plpgsql/src/plpgsql.h | 6 +
src/test/regress/expected/event_trigger.out | 53 +-
src/test/regress/sql/event_trigger.sql | 51 +-
22 files changed, 1700 insertions(+), 267 deletions(-)
That completes ALTER and CREATE ObjectID support, I did nothing about
the DROP case in the attached. The way I intend to solve that problem is
using get_object_address() and do an extra lookup from within the Event
Trigger code path.
An extra lookup that occurs always, or only when event triggers are in
use? A re-resolution of the name, or some other kind of lookup?
Only when event triggers are in use, and when the object is known to
exist in the catalogs (ddl_command_start for a DROP operation, or
ddl_command_end for a CREATE or ALTER operation). And it's only a name
resolution using the catcache when it exists, or a catalog index scan
when we have to. The key we have is the OID.
The OID is filled in from utility.c in all CREATE and ALTER operations
that we are interested in, and in the case of a DROP operation we do
either a RangeVarGetRelid for relations or get_object_address() for the
other object kinds.
Given the OID, we use the ObjectProperty[] structure to know which cache
or catalog to scan in order to get the name and namespace of the object.
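The ObjectProperty[]-driven lookup described here can be pictured with a small model. This is illustrative Python, not the objectaddress.c code; the table contents and field names are cut down and partly invented:

```python
# Cut-down model of the ObjectProperty[] idea: for each object kind,
# record which catalog to scan and which fields carry name and schema.
OBJECT_PROPERTY = {
    "table":    {"catalog": "pg_class", "name": "relname", "schema": "relnamespace"},
    "function": {"catalog": "pg_proc",  "name": "proname", "schema": "pronamespace"},
}

def name_and_schema(kind, oid, catalogs):
    """Given an object kind and OID, consult the property table to
    find the right catalog, then pull out name and namespace."""
    prop = OBJECT_PROPERTY[kind]
    row = catalogs[prop["catalog"]][oid]
    return row[prop["name"]], row[prop["schema"]]
```

Alvaro's later review point applies here: such a table already exists inside objectaddress.c, so the question is whether to expose it or to go through the existing interfaces.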
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
On 12/29/2012 08:41 AM, Dimitri Fontaine wrote:
Well, do we have \! in psql in windows at all? does it work in a similar
way to the unix one? I just don't know.
Of course we do. Why ever would you think we don't? The command must be
written in the Windows shell language, but the behaviour is the same. If
system() and popen() didn't work on Windows we'd be in serious trouble
having a Windows port at all.
cheers
andrew
Robert Haas <robertmhaas@gmail.com> writes:
The other danger here is - what exactly do you mean by "no command has
been able to run between the user command and our lookup"? Because
you can do stupid things with functions like set_config(). This would
only be safe if no user-provided expressions can possibly get
evaluated between point A and point B, and that strikes me as the sort
of thing that could easily be false unless this is all done VERY close
to the start of processing.
To me, the largest single risk of the whole event triggers feature is
precisely that it will result in random user-provided code getting
executed in fairly random places, thus breaking assumptions of this type
that may be hard or impossible to fix. But given that allowing that
is more or less exactly the point of the feature, I'm not sure why
you're blaming the patch for it. It should have been rejected on day
one if you're not willing to have that type of risk.
regards, tom lane
On Sat, 2012-12-29 at 08:11 -0500, Robert Haas wrote:
On Tue, Dec 25, 2012 at 1:42 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
Also, keep in mind we want the ObjectID in all CREATE, ALTER and DROP
statements, so my current patch is still some bricks shy of a load… (I
added ObjectID only in the commands I added rewrite support for, apart
from DROP).
I shall rely on you to provide those bricks which are still missing.
Please find attached a patch to change most functions called from
standard_ProcessUtility() to return an Oid (passes `make
maintainer-clean; configure; make install check`). Most of them only,
because it only makes sense for functions touching an object that exists
in the catalogs and has a distinct Oid.
OK, I committed this.
There is a new compiler warning coming from this, I believe:
copy.c: In function ‘DoCopy’:
copy.c:835:2: error: ‘relid’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
return relid;
^
copy.c:753:6: note: ‘relid’ was declared here
Oid relid;
^
Hi,
Peter Eisentraut <peter_e@gmx.net> writes:
There is a new compiler warning coming from this, I believe:
I don't get this warning here, at least not with -O2, and I did not
forget to make maintainer-clean and then rebuild with -O0 just in case
gcc is now complaining about something else. I wish this situation could
be fixed.
Now, this warning needs to be fixed, too, and here's a patch for that.
Thanks, regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
copy-warning-fix.patch (text/x-patch)
On Sun, 2012-12-30 at 13:18 +0100, Dimitri Fontaine wrote:
Peter Eisentraut <peter_e@gmx.net> writes:
There is a new compiler warning coming from this, I believe:
I don't get this warning here, at least not in -O2, and I did forget to
make maintainer-clean then rebuild with -O0 just in case gcc is now
complaining about something else. I wish this situation could be fixed.
I get this warning in Debian squeeze and wheezy with fairly standard
build options. I don't think -O0 will give you any more warnings.
Now, this warning needs to be fixed, too, and here's a patch for that.
Thanks, I fixed it slightly differently.
Dimitri Fontaine escribió:
Robert Haas <robertmhaas@gmail.com> writes:
OK, I committed this.
Thanks. Please find attached a rebased patch, v6. I think it's looking
more like what you would expect now:
I gave this patch a look. I'm not done with it; I mostly gave the
utility.c changes some love so that the end result is not as cluttered.
I did this by creating a couple of macros that call the Start() thing,
then set the OID, then call the End() thing. This made pretty obvious
the places that were missing to set the object OID; I changed the CREATE
TABLE AS code to return it, for instance. The patch I attach applies on
top of Dimitri's v6 patch. With these changes, it's much easier to
review the big standard_ProcessUtility() switch and verify cases that
are being handled correctly. (A few places cannot use the macros I
defined because they use more complex arrangements. This is fine, I
think, because they are few enough that they can be verified manually.)
I also noticed that ALTER DEFAULT PRIVILEGES was trying to fire event
triggers, even though we don't support GRANT and REVOKE; it seems to me
that the three things ought to be supported together. I'm told David
Fetter would call this "DCL support", and we're not supporting DCL at
this stage. So ALTER DEFAULT PRIVILEGES ought not to be supported.
I also observed that the SECURITY LABEL command does not fire event
triggers; I think it should. But then maybe this can be rightly called
DCL as well, so perhaps it's all right to leave it out.
DROP OWNED is not firing event triggers. There is a comment therein
that says "we don't fire them for shared objects", but this is wrong:
this command is dropping local objects, not shared, so it seems
necessary to me that triggers are fired.
I also noticed that some cases such as DROP <whatever> do not report the
OIDs of all the objects that are being dropped, only one of them. This
seems bogus to me; if you do "DROP TABLE foo,bar" then both tables
should be reported.
Given the OID, we use the ObjectProperty[] structure to know which cache
or catalog to scan in order to get the name and namespace of the object.
The changes to make ObjectPropertyType visible to code outside
objectaddress.c look bogus to me. I think we should make this new code
use the interfaces we already have.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services