Extension Facility
Hi,
The same mail as before, in a new thread per Robert's comment. I'm including
the body rather than an archive link for various reasons, among them
making it easy to comment here rather than there.
On Jul 22, 2009, at 02:56, Robert Haas wrote:
On Tue, Jul 21, 2009 at 7:25 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
Or maybe we should think about having two versions of hstore. This
is all tied up in the problem of having a decent module
infrastructure
(which I hope somebody is working on for 8.5).
I indeed still intend to provide a patch in the 8.5 cycle. While the
user design issue didn't receive any pushback, some big items remain to
be solved. So here's my current TODO for it:
- get more familiar with and involved in backend code by being one of
the RRR (round-robin reviewers)
- consider the proposed syntax OK for a first stab at it
- make the pg_catalog.pg_extension entry and the associated commands,
with version as text; one thing at a time, please
- bootstrap core components in pg_extension so that we can depend on
them (plpgsql, ...)
- implement a backend function pg_execute_commands_from_file('path/to/file.sql'),
restricted to superusers, with the file in the usual accepted places
- implement INSTALL EXTENSION on top of the previous function (a rough
sketch of the two commands follows this list)
- add a static backend-local variable installing_extension (oid)
- modify each SQL object CREATE statement to add entries in pg_depend
- add a specific type for handling version numbers, and operators to
compare them
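To give an idea of where this is heading, here is a rough sketch of the
two-step user-facing syntax as I currently picture it; the exact keywords
and options are placeholders only, the real proposal lives in the thread
linked below:

    -- step 1: create the extension metadata (name, version as text, ...)
    CREATE EXTENSION foo VERSION '1.0';

    -- step 2: run the extension's install script, which internally relies
    -- on pg_execute_commands_from_file()
    INSTALL EXTENSION foo;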
Here, from memory, are the problems we don't have a solution for yet:
- how to give users the ability to install the extension's objects in
a schema other than the default pg_extension one
- how to provide extension authors with a way to have major-PG-version-dependent
code without having to implement and maintain a specific
function in their install.sql file
Please go comment on that other thread if you think the syntax is
awful, or to help me through the big tickets:
http://archives.postgresql.org/pgsql-hackers/2009-06/msg01281.php
A decent module infrastructure is probably not going to fix this
problem unless it links with -ldwiw. There are really only two
options here:
I beg to differ. The way for a decent *extension* facility to handle
the case is by providing an upgrade function which accepts two
arguments: the old and new versions of the module. Then the module author
is able to run custom code from within the module upgrade transaction,
where migrating the on-disk data representation is entirely possible.
pg_depend would have to allow easily finding the columns of a given
datatype, I guess.
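As a sketch of what I mean, and nothing more: assuming a text-based
version scheme and a made-up hstore_upgrade() name, the author-provided
hook could look like this:

    -- hypothetical upgrade hook shipped by the extension author
    CREATE FUNCTION hstore_upgrade(old_version text, new_version text)
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
      -- naive text comparison; the version type and operators from the
      -- TODO above would replace this
      IF old_version < new_version THEN
        -- here the author migrates the on-disk representation, finding
        -- the affected columns through pg_depend / pg_attribute lookups
        RAISE NOTICE 'upgrading from % to %', old_version, new_version;
      END IF;
    END;
    $$;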
(I am also not aware that anyone is working on the module
infrastructure problem, though of course that doesn't mean that no-one
is; but the point is that's neither here nor there as respects the
present problem. The module infrastructure is just a management layer
around the same underlying issues.)
Of course, if anyone wants to join in, I'd appreciate it. Some have offered
help and I've been failing to provide them with my TODO list... but
getting a first patch in for the next commit fest is a goal.
Regards,
--
dim
On Jul 22, 2009, at 11:46 AM, Dimitri Fontaine wrote:
Here, from memory, are the problems we don't have a solution for yet:
- how to give users the ability to install the extension's objects in
a schema other than the default pg_extension one
Was that not a part of your original proposal, or the ensuing
discussion? Hrm, perhaps not. So I suggest that we take your proposed
syntax:
create extension foo ...
And just allow it to take a schema-qualified argument like any other
SQL command:
create extension myschema.foo ...
- how to provide extension authors with a way to have major-PG-version-dependent
code without having to implement and maintain a specific
function in their install.sql file
For a lot of extensions this may not be necessary. So I don't think
I'd hold up an initial implementation waiting for this to be figured
out. My $0.02.
Best,
David
Hi,
"David E. Wheeler" <david@kineticode.com> writes:
On Jul 22, 2009, at 11:46 AM, Dimitri Fontaine wrote:
- how to give users the ability to install the extension's objects in
a schema other than the default pg_extension one
And just allow it to take a schema-qualified argument like any other SQL
command:
create extension myschema.foo ...
The problem is to allow extension code to refer to other extension code
without security problems related to search_path: in short, as an
extension author you want to be able to schema-qualify your function
calls, or even the PROCEDURE attached to your operators.
Now, how do you refer to the extension's schema in the install.sql
file if the user is allowed to install it wherever he wants?
Easy answer for a first version: don't allow the user to install the
extension anywhere other than the place we think will suit him best, and
that's the new schema pg_extension, which always lies just before
pg_catalog in the search_path.
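To make this concrete, here is a minimal sketch of the kind of
hand-written qualification an author has to do today; the function and
operator names are made up, and it assumes the proposed pg_extension
schema already exists:

    -- hypothetical install.sql fragment: the PROCEDURE is schema-qualified
    -- so the operator keeps working whatever the caller's search_path is
    CREATE FUNCTION pg_extension.myext_eq(text, text)
      RETURNS boolean
      LANGUAGE sql AS $$ SELECT lower($1) = lower($2) $$;

    CREATE OPERATOR === (
      LEFTARG   = text,
      RIGHTARG  = text,
      PROCEDURE = pg_extension.myext_eq
    );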
- how to provide extension authors with a way to have major-PG-version-dependent
code without having to implement and maintain a specific function in
their install.sql file
For a lot of extensions this may not be necessary. So I don't think I'd hold
up an initial implementation waiting for this to be figured out. My $0.02.
Yes. I came up with the beginning of something (major-version-dependent
additional install.sql files), but then you need to control ordering, so
maybe pre- and post-install files with major-version-dependent
derivatives. "Over-engineered" is certainly the comment I'll hear about
it.
Regards,
--
dim
P.S.: the best way to help me with the extension stuff right now would be
to confirm that the syntax proposal (separating extension metadata creation
from the installation step) sounds right to you, and possibly to give hints
about the proposed completion plan up in this thread.
http://archives.postgresql.org/pgsql-hackers/2009-06/msg01281.php
http://archives.postgresql.org/pgsql-hackers/2009-07/msg01425.php
Tom, in particular, what do you think about implementing a general-purpose
backend function similar to psql's \i (except without support
for \commands and :variables):
SELECT pg_execute_commands_from_file('path/to/file.sql');
Your recent work on having a re-entrant parser should make it
possible to implement, by either "extending" or copy/pasting
postgres.c:exec_simple_query, right?
(The differences are about not overriding the current unnamed portal, maybe
forcing PortalRunMulti() usage, and the fact that there's already a started
transaction (but start_xact_command() is a no-op in this case).)
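To show the SQL-visible side I have in mind, here is a minimal stand-in;
the real thing would of course be implemented in C inside the backend,
this only illustrates the intended signature and superuser restriction:

    -- hypothetical stand-in for the proposed built-in function
    CREATE FUNCTION pg_execute_commands_from_file(filename text)
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
      IF NOT (SELECT rolsuper FROM pg_roles WHERE rolname = current_user) THEN
        RAISE EXCEPTION 'must be superuser to execute commands from a file';
      END IF;
      -- the C implementation would read the file from one of the usual
      -- accepted places and run its statements through code much like
      -- exec_simple_query()
      RAISE NOTICE 'would execute the commands found in %', filename;
    END;
    $$;

    -- INSTALL EXTENSION would then boil down to something like
    SELECT pg_execute_commands_from_file('path/to/file.sql');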
On Jul 23, 2009, at 1:08 AM, Dimitri Fontaine wrote:
Easy answer for a first version: don't allow the user to install the
extension anywhere other than the place we think will suit him best, and
that's the new schema pg_extension, which always lies just before
pg_catalog in the search_path.
Well, I think that it's reasonable to allow an extension to be in any
schema, with the default being pg_extension, but all of the objects in
a single extension should assume that they're all in the same schema,
at least to start. I mean, I can see the need for secondary schemas
(or sub-schemas?) for encapsulation, but do we really need to go there
in the first rev?
Yes. I came up with the beginning of something (major-version-dependent
additional install.sql files), but then you need to control ordering, so
maybe pre- and post-install files with major-version-dependent
derivatives. "Over-engineered" is certainly the comment I'll hear about
it.
Yeah, so omit it for now, I say. Start with what's widely agreed-upon
and relatively simple. We can iterate this pony over time.
Best,
David
"David E. Wheeler" <david@kineticode.com> writes:
On Jul 23, 2009, at 1:08 AM, Dimitri Fontaine wrote:
Easy answer for a first version: don't allow the user to install the
extension anywhere other than the place we think will suit him best, and
that's the new schema pg_extension, which always lies just before
pg_catalog in the search_path.
Well, I think that it's reasonable to allow an extension to be in any
schema, with the default being pg_extension, but all of the objects in a
single extension should assume that they're all in the same schema, at
least to start. I mean, I can see the need for secondary schemas (or
sub-schemas?) for encapsulation, but do we really need to go there in the
first rev?
Well, the problem with that is: if, for example, I define foo() and bar()
functions in my extension, and the user also has a foo() function in his
own stuff (possibly lying in public, say), then if my extension's bar()
calls foo(), how do I make sure I'm calling my extension's foo()?
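Here is a minimal illustration of the problem, assuming the extension got
installed in a schema named myext and that public comes first in the
user's search_path:

    -- what the extension's install.sql does
    CREATE SCHEMA myext;
    SET search_path = myext, public;
    CREATE FUNCTION foo() RETURNS text LANGUAGE sql AS $$ SELECT 'extension foo' $$;
    CREATE FUNCTION bar() RETURNS text LANGUAGE sql AS $$ SELECT foo() $$;

    -- what the user does later in his own session
    SET search_path = public, myext;
    CREATE FUNCTION public.foo() RETURNS text LANGUAGE sql AS $$ SELECT 'user foo' $$;
    SELECT bar();  -- finds myext.bar(), but inside it foo() resolves to public.foo()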
--
dim
On Jul 23, 2009, at 2:11, Dimitri Fontaine <dfontaine@hi-media.com> wrote:
Well, the problem with that is: if, for example, I define foo() and bar()
functions in my extension, and the user also has a foo() function in his
own stuff (possibly lying in public, say), then if my extension's bar()
calls foo(), how do I make sure I'm calling my extension's foo()?
Part of the behavior of CREATE EXTENSION would be to automatically
schema-qualify references to objects in the extension. Or perhaps
extension authors would need to use some sort of variable for the
schema that would be properly resolved when CREATE EXTENSION installed
an extension.
Those are the first ideas that come to mind for me, anyway.
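For the variable idea, here is a rough sketch of what an install.sql might
look like; @extschema@ is just a made-up spelling for a token that CREATE
EXTENSION would substitute with the target schema before executing the
script, so this is a template rather than directly runnable SQL:

    -- install.sql template as written by the extension author
    CREATE FUNCTION @extschema@.foo() RETURNS text
      LANGUAGE sql AS $$ SELECT 'extension foo' $$;

    CREATE FUNCTION @extschema@.bar() RETURNS text
      LANGUAGE sql AS $$ SELECT @extschema@.foo() $$;

    -- installing with "create extension myschema.foo ..." would then execute
    -- the same statements with myschema in place of @extschema@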
Best,
David
"David E. Wheeler" <david@kineticode.com> writes:
Part of the behavior of CREATE EXTENSION would be to automatically
schema-qualify references to objects in the extension. Or perhaps
extension authors would need to use some sort of variable for the schema
that would be properly resolved when CREATE EXTENSION installed an
extension.
What about embedded calls in, say, plperl functions?
--
dim
On Jul 23, 2009, at 8:09 AM, Dimitri Fontaine wrote:
What about embedded calls in, say, plperl functions?
Hence the variable suggestion. In fact, it might go back to the idea
of subschemas; perhaps the name of the extension should be part of the
qualification? I dunno, I'm just kind of throwing ideas out there, but
it's starting to remind me of packages or classes. Inside a class, a
call to a method without an invocant automatically delegates to the
method in the class. That sort of thing. But I'm wary of over-designing
here, so I'm not sure what the right thing to do is, unless it's to punt.
Best,
David