Inline Extension
Hi,
The extension mechanism we added in 9.1 is aimed at allowing fully
integrated contrib management, which was a big enough goal to preclude
doing anything else in its first release.
Now that we have it, we can think some more about what features we want
covered, and a pretty obvious one that's been left out is the ability to
define and update an extension without resorting to file system support,
for those extensions that do not need a shared object library. We could
have called these “SQL-only” extensions, but to simplify the grammar
support I used the “inline” keyword, so there we go.
Please find attached a WIP patch implementing that. Note that the main
core benefit of integrating this feature is the ability to easily add
regression tests for extension-related features, which is not done yet
in the attached patch.
I'm sending this quite soon because of the pg_dump support. When an
extension is inline, we want to dump its content, as we currently do in
the binary dump output. I had in mind that we could output a full
CREATE EXTENSION INLINE script in between some dollar quoting rather
than adding each of the extension's objects with an ALTER EXTENSION ... ADD
line, as the pg_upgrade compatibility code currently does.
It seems like much more work though, and I'd appreciate input about how
exactly to do that (it looks like making pg_dump reentrant, somehow).
Or some reason not to bother and just rename and share the binary
upgrade facility that Bruce already has put into pg_dump.
Here's a usage example that will certainly end up in the docs:
create extension pair inline version '1.0' schema pair not relocatable as
$pair$
CREATE TYPE pair AS ( k text, v text );
CREATE OR REPLACE FUNCTION pair(anyelement, text)
RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair';
CREATE OR REPLACE FUNCTION pair(text, anyelement)
RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair';
CREATE OR REPLACE FUNCTION pair(anyelement, anyelement)
RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair';
CREATE OR REPLACE FUNCTION pair(text, text)
RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair;';
CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = anyelement, PROCEDURE = pair);
CREATE OPERATOR ~> (LEFTARG = anyelement, RIGHTARG = text, PROCEDURE = pair);
CREATE OPERATOR ~> (LEFTARG = anyelement, RIGHTARG = anyelement, PROCEDURE = pair);
CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, PROCEDURE = pair);
$pair$;
alter extension pair update from '1.0' to '1.1' with
$pair$
CREATE OR REPLACE FUNCTION key(pair)
RETURNS text LANGUAGE SQL AS 'SELECT ($1).k;';
CREATE OR REPLACE FUNCTION value(pair)
RETURNS text LANGUAGE SQL AS 'SELECT ($1).v;';
CREATE OPERATOR %% (RIGHTARG = pair, PROCEDURE = key);
CREATE OPERATOR %# (RIGHTARG = pair, PROCEDURE = value);
$pair$;
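For the docs, it might help to follow the definitions with a quick usage sketch; this assumes the patch applies and both scripts above have run (the commented results are inferred from the function bodies, not tested):

```sql
-- build a pair with the ~> operator (explicit casts avoid ambiguity
-- between the four overloads)
SELECT 'a'::text ~> 'b'::text;

-- after the 1.0 to 1.1 update, the accessor operators are available
SELECT %% ('a'::text ~> 'b'::text);   -- key:   'a'
SELECT %# ('a'::text ~> 'b'::text);   -- value: 'b'

-- the catalog tracks the installed version
SELECT extname, extversion FROM pg_extension WHERE extname = 'pair';
```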
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
extension-inline.v0.patch (text/x-patch, +323 −22)
On 08.01.2012 22:36, Dimitri Fontaine wrote:
The extension mechanism we added in 9.1 is aimed at allowing fully
integrated contrib management, which was a big enough goal to preclude
doing anything else in its first release.
Hooray!
Now that we have it, we can think some more about what features we want
covered, and a pretty obvious one that's been left out is the ability to
define and update an extension without resorting to file system support,
for those extensions that do not need a shared object library. We could
have called these “SQL-only” extensions, but to simplify the grammar
support I used the “inline” keyword, so there we go.
Frankly I don't see the point of this. If the extension is an
independent piece of (SQL) code, developed separately from an
application, with its own lifecycle, a .sql file seems like the best way
to distribute it. If it's not, ie. if it's an integral part of the
database schema, then why package it as an extension in the first place?
Please find attached a WIP patch implementing that. Note that the main
core benefit of integrating this feature is the ability to easily add
regression tests for extension-related features, which is not done yet
in the attached patch.
I'm not sure I buy that argument. These inline extensions are
sufficiently different from regular extensions that I think you'd need
to have regression tests for both kinds, anyway.
I'm sending this quite soon because of the pg_dump support. When an
extension is inline, we want to dump its content, as we currently do in
the binary dump output. I had in mind that we could output a full
CREATE EXTENSION INLINE script in between some dollar quoting rather
than adding each of the extension's objects with an ALTER EXTENSION ... ADD
line, as the pg_upgrade compatibility code currently does.
I thought the main point of extensions is that that they're not included
in pg_dump. Again, if the extension is an integral part of the database,
then it probably shouldn't be an extension in the first place.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
Frankly I don't see the point of this. If the extension is an independent
piece of (SQL) code, developed separately from an application, with its own
lifecycle, a .sql file seems like the best way to distribute it. If it's
not, ie. if it's an integral part of the database schema, then why package
it as an extension in the first place?
It makes it easy to deploy an extension to N databases (my current use
case has 256 databases) and to know which version is installed on each
server. It's easier to QA your procedures and upgrades when they are
packaged as extensions, too.
Now, for the dependency on a SQL file hosting the content: it's easier
to just connect to the databases and send them the script in the SQL
command than to deploy a set of files, which means OS-level packaging,
either RPM or Debian or some other variant, or some other means of
easily deploying the files. An SQL connection is all you need if you're
not shipping a .so.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
Frankly I don't see the point of this. If the extension is an independent
piece of (SQL) code, developed separately from an application, with its own
lifecycle, a .sql file seems like the best way to distribute it. If it's
not, ie. if it's an integral part of the database schema, then why package
it as an extension in the first place?
It makes it easy to deploy an extension to N databases (my current use
case has 256 databases) and to know which version is installed on each
server. It's easier to QA your procedures and upgrades when they are
packaged as extensions, too.
Now, for the dependency on a SQL file hosting the content: it's easier
to just connect to the databases and send them the script in the SQL
command than to deploy a set of files, which means OS-level packaging,
either RPM or Debian or some other variant, or some other means of
easily deploying the files. An SQL connection is all you need if you're
not shipping a .so.
I'm with Heikki on not believing that this is a good idea. If you are
trying to do careful versioning of a set of object definitions, you want
to stick the things in a file, you don't want them just flying by in
submitted SQL. Also, a large part of the point of the extension
facility is to be able to do uninstall/reinstall and version
upgrades/downgrades, none of which are possible unless the extension
scripts are stored somewhere.
ISTM your distribution concern would be much better addressed by
installing contrib/adminpack and then just using pg_file_write()
to put the new extension script into the remote server's library.
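A minimal sketch of Tom's suggestion using contrib/adminpack's pg_file_write(filename, data, append), which requires a superuser connection. Note that adminpack only permits writes under the data directory, so the relative paths here assume the server can reach its extension directory that way, which is rarely true in practice and is part of why this route is awkward:

```sql
CREATE EXTENSION adminpack;  -- superuser only

-- push a control file and an install script to the remote server
SELECT pg_catalog.pg_file_write(
    'extension/pair.control',
    'default_version = ''1.0''' || E'\n' || 'relocatable = false' || E'\n',
    false);
SELECT pg_catalog.pg_file_write(
    'extension/pair--1.0.sql',
    'CREATE TYPE pair AS (k text, v text);' || E'\n',
    false);

-- once the files exist where the server looks for extensions:
CREATE EXTENSION pair;
```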
regards, tom lane
On Jan 19, 2012, at 7:21 AM, Dimitri Fontaine wrote:
Now, for the dependency on a SQL file hosting the content: it's easier
to just connect to the databases and send them the script in the SQL
command than to deploy a set of files, which means OS-level packaging,
either RPM or Debian or some other variant, or some other means of
easily deploying the files. An SQL connection is all you need if you're
not shipping a .so.
ISTM that if you are managing 256 servers, you’re likely already using a packaging system for the deployment of application dependencies. In which case, to keep things consistent, you ought to distribute your extensions in exactly the same way.
Best,
David
Tom Lane <tgl@sss.pgh.pa.us> writes:
I'm with Heikki on not believing that this is a good idea. If you are
trying to do careful versioning of a set of object definitions, you want
to stick the things in a file, you don't want them just flying by in
submitted SQL.
I'm trying to open the extension facilities (versions being the first of
them, think \dx) to application PL code, and to hosted environments
where you're not granted access to the server's file system.
I would agree that the use case doesn't exist if the target is
traditional in-house deployments where the sysadmins are your
colleagues. I've been told that's a smaller and smaller part of the
database world, though.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Thu, Jan 19, 2012 at 3:42 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Tom Lane <tgl@sss.pgh.pa.us> writes:
I'm with Heikki on not believing that this is a good idea. If you are
trying to do careful versioning of a set of object definitions, you want
to stick the things in a file, you don't want them just flying by in
submitted SQL.
I'm trying to open the extension facilities (versions being the first of
them, think \dx) to application PL code, and to hosted environments
where you're not granted access to the server's file system.
I guess the question is: for what purpose?
As you recognized in your original email, if the extension is inline,
then the objects will need to be dumped, because a simple CREATE
EXTENSION command is bound to fail. But my understanding was that a
major part of the reason - if not the entire reason - was to get
pg_dump to emit CREATE EXTENSION bob instead of the component SQL
commands. If we take that away, what's the remaining benefit of
packaging those objects inside an extension instead of just dumping
them "loose" into the database?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Thu, Jan 19, 2012 at 3:42 PM, Dimitri Fontaine
I'm trying to open the extension facilities (versions being the first of
them, think \dx) to application PL code, and to hosted environments
where you're not granted access to the server's file system.
I guess the question is: for what purpose?
As you recognized in your original email, if the extension is inline,
then the objects will need to be dumped, because a simple CREATE
EXTENSION command is bound to fail. But my understanding was that a
major part of the reason - if not the entire reason - was to get
pg_dump to emit CREATE EXTENSION bob instead of the component SQL
commands. If we take that away, what's the remaining benefit of
packaging those objects inside an extension instead of just dumping
them "loose" into the database?
Indeed, it seems like such a thing is not an extension at all anymore,
or at least it gives up many of the useful properties of extensions.
Given the entire lack of demand from the field for such a cut-down
concept of extension, I think we should not be in a hurry to introduce
it. Maybe in a year or two when we have a clearer idea of how people
are actually using extensions, there will be a better argument for it.
Right now I'm afraid that we might foreclose our options for other
future extension features because these things would be incompatible
with such ideas.
regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes:
Robert Haas <robertmhaas@gmail.com> writes:
I guess the question is: for what purpose?
Indeed, it seems like such a thing is not an extension at all anymore,
or at least it gives up many of the useful properties of extensions.
I'm thinking that a common name and version number, tracked in the
database for a set of related functions (that usually form an API), is
a useful enough property that we should want extensions to support more
use cases than contrib-like “module centric” extensions (meaning, C
coded and shipped with a .so).
Given the entire lack of demand from the field for such a cut-down
concept of extension, I think we should not be in a hurry to introduce
it. Maybe in a year or two when we have a clearer idea of how people
are actually using extensions, there will be a better argument for it.
Fair enough, I guess (or at least I understand how alone I am here);
let's hear from the field first.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Fri, Jan 20, 2012 at 8:52 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Tom Lane <tgl@sss.pgh.pa.us> writes:
Robert Haas <robertmhaas@gmail.com> writes:
I guess the question is: for what purpose?
Indeed, it seems like such a thing is not an extension at all anymore,
or at least it gives up many of the useful properties of extensions.
I'm thinking that a common name and version number, tracked in the
database for a set of related functions (that usually form an API), is
a useful enough property that we should want extensions to support more
use cases than contrib-like “module centric” extensions (meaning, C
coded and shipped with a .so).
I see that there is some usefulness there, but I'm not sure that this
is the best way to get our hands around it. For one thing, people can
and do schema versioning and schema upgrade scripts entirely in
userland. My last implementation worked by keeping a schema_versions
table on the server with one column, a UUID. The deployment tarball
contained a file with a list of UUIDs in it, each one associated to an
SQL script. At install time, the install script ran through that file
in order and ran any scripts whose UUID didn't yet appear in the
table, and then added the UUIDs of the run scripts to the table. This
might not be what any given person wants, but there's a lot of
flexibility to do things like that without any particular support from
the DBMS. (Incidentally, this experience is what convinced me that
CREATE TABLE IF NOT EXISTS and ALTER TABLE IF EXISTS are good things to
have; my system could have been a lot simpler if I'd had those.)
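A minimal sketch of that userland scheme (the table layout and UUID are illustrative, not Robert's actual code):

```sql
CREATE TABLE IF NOT EXISTS schema_versions (
    script_uuid uuid PRIMARY KEY,
    applied_at  timestamptz NOT NULL DEFAULT now()
);

-- for each (uuid, script) pair listed in the deployment file, the
-- installer runs the script only when the UUID is not yet recorded
BEGIN;
-- ... the body of the associated SQL script runs here ...
INSERT INTO schema_versions (script_uuid)
VALUES ('0b81d181-0000-4000-8000-000000000001');
COMMIT;
```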
One random design idea I had related to providing this functionality
in the DB core is to have a command that creates an empty extension,
maybe just "CREATE EXTENSION foo EMPTY", and an ALTER command that
forcibly changes the DB's notion of what version is installed, like
"ALTER EXTENSION foo FORCE VERSION TO '1.1'". That would allow the
same sort of thing you're after here by using those two features plus
ALTER EXTENSION ADD/DROP, and could also be used to do other things.
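Spelled out in the hypothetical syntax Robert sketches (neither the EMPTY nor the FORCE VERSION command exists; only ALTER EXTENSION ... ADD is real today):

```sql
CREATE EXTENSION foo EMPTY;                    -- hypothetical: register foo with no objects
ALTER EXTENSION foo ADD FUNCTION myfunc(int);  -- existing: attach a pre-created object
ALTER EXTENSION foo FORCE VERSION TO '1.1';    -- hypothetical: overwrite recorded version
```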
For example, suppose you have DB servers A and B. A is running an old
version of some extension and is in a shared hosting environment where
you can't get access to the box. B, on the other hand, is your
brand-new, dedicated server. You could upgrade the extension on the
old machine "manually", by issuing SQL commands to forcibly change its
state, and then do a dump and reload onto B. This might be useful if,
for example, B is also running a newer DB server version that won't
support the very old version of the extension running on A. This is
probably an unusual situation, but maybe there's some value in
allowing users who want to do such things a cleaner way to do it than
direct catalog hackery.
Anyway, I'm just thinking out loud here - I don't actually have a very
strong feeling that I know what all of the solutions are in this area,
or even all the problems. I'm interested in hearing about your
experiences with the system, and other people's, because I certainly
do agree that there's room for improvement. One of my personal pet
peeves is that the system doesn't know how to do an install of v1.1 by
running the v1.0 script followed by the 1.0-1.1 upgrade script, which
I fear is going to lead to a rapid uptick in the number of copies of
almost-identical scripts in our git repository.
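Concretely, with today's rules an extension shipping versions 1.0 and 1.1 needs a full install script per installable version, hence the near-duplicates (standard extension script naming):

```
foo--1.0.sql        full install script for 1.0
foo--1.0--1.1.sql   upgrade script from 1.0 to 1.1
foo--1.1.sql        full install script for 1.1, mostly a copy of the other two
```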
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
peeves is that the system doesn't know how to do an install of v1.1 by
running the v1.0 script followed by the 1.0-1.1 upgrade script, which
Did you try
CREATE EXTENSION foo FROM 1.0;
Maybe you would want the system to be able to determine the oldest
version to start from to reach the current default_version given in the
control file, but I guess it would be better to add another property
like default_full_version or such (last_stop?).
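In control-file terms, the proposed property would sit next to the existing default_version (default_full_version is the name being floated here, not an existing parameter):

```
# foo.control
default_version      = '1.1'
# proposed: newest version that has a full install script; CREATE EXTENSION
# would run foo--1.0.sql and then foo--1.0--1.1.sql to reach 1.1
default_full_version = '1.0'
```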
It looks like a simple enough project; the fact that it would help our
own shipping and script maintenance could maybe justify working on it
this late in the cycle, I don't know.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Fri, Jan 20, 2012 at 11:29 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
peeves is that the system doesn't know how to do an install of v1.1 by
running the v1.0 script followed by the 1.0-1.1 upgrade script, which
Did you try
CREATE EXTENSION foo FROM 1.0;
Well, yes, that works, but I'm going for what you wrote next:
Maybe you would want the system to be able to determine the oldest
version to start from to reach the current default_version given in the
control file, but I guess it would be better to add another property
like default_full_version or such (last_stop?).
Possibly that might be a better design, yes. Especially since we
don't order version numbers intrinsically.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
Maybe you would want the system to be able to determine the oldest
version to start from to reach the current default_version given in the
control file, but I guess it would be better to add another property
like default_full_version or such (last_stop?).
Possibly that might be a better design, yes. Especially since we
don't order version numbers intrinsically.
It's quite simple to implement too, see attached. As for the version
number ordering, that's a giant rat hole I don't want to be digging in,
and I think we can bypass that problem entirely.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Attachments:
extension-default-full-version.v0.patch (text/x-patch, +57 −544)
On Thu, Jan 19, 2012 at 8:15 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
Frankly I don't see the point of this. If the extension is an independent
piece of (SQL) code, developed separately from an application, with its own
lifecycle, a .sql file seems like the best way to distribute it. If it's
not, ie. if it's an integral part of the database schema, then why package
it as an extension in the first place?
I'm with Heikki on not believing that this is a good idea. If you are
trying to do careful versioning of a set of object definitions, you want
to stick the things in a file, you don't want them just flying by in
submitted SQL. Also, a large part of the point of the extension
facility is to be able to do uninstall/reinstall and version
upgrades/downgrades, none of which are possible unless the extension
scripts are stored somewhere.
ISTM your distribution concern would be much better addressed by
installing contrib/adminpack and then just using pg_file_write()
to put the new extension script into the remote server's library.
I think this is somewhat rube-goldberg-esque, and denies non-superuser
roles the ability to get more version management of schema and
operators. As it is, many organizations are submitting "migrations" via
plain SQL that include committing to a version management table that
is maintained by convention, and as it stands that is considered a
modern-day best practice.
Unless someone comes up with something fundamentally new in how to
handle the problem being solved in user-land via 'migrations' and
version tables in Postgres, I think tying completely safe, sandboxed,
non-superuser operators to super-user-only (and awkward besides) file
management via FEBE on the server-side is not going to do anything but
redelegate the problem of soundness to every migration/change
management regime, and decrease reuse of useful operators that do not
require superuser.
The ship has sailed. Encouraging use of files and .sql buys no
soundness, because everyone is moving towards overlaying version
management via pure FEBE anyway. At best, this means to me that
Postgres is completely neutral about what version management regime
you want to use for operators and schemas. At worst, it means
Postgres is frustratingly unhelpful in common use cases. And somewhere
in the middle is "this may have value, but not enough to be worth
maintaining." Given the lack of fast and widely deployed trusted PL
integrations, my interpretation of the truth falls somewhere between
the latter two interpretations.
--
fdr
On 21.01.2012 00:00, Daniel Farina wrote:
I think this is somewhat rube-goldberg-esque, and denies non-superuser
roles the ability to get more version management of schema and
operators. As it is, many organizations are submitting "migrations" via
plain SQL that include committing to a version management table that
is maintained by convention, and as it stands that is considered a
modern-day best practice.
Even if you give the version number in the CREATE EXTENSION command,
it's by convention that people actually maintain a sane versioning
policy. If people don't take version management seriously, you will
quickly end up with five different versions of an extension, all with
version number 0.1.
Another approach is to use comments on the objects saying "version
1.23". Those generally move together with the objects themselves; they
are included in pg_dump schema-only dump, for example, while the
contents of a table are not.
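That convention might look like this (the version string is purely illustrative):

```sql
COMMENT ON FUNCTION pair(text, text) IS 'pair, version 1.23';

-- the comment is carried along in a schema-only pg_dump, and can be
-- read back from the catalogs:
SELECT obj_description('pair(text, text)'::regprocedure, 'pg_proc');
```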
The ship has sailed. Encouraging use of files and .sql buys no
soundness, because everyone is moving towards overlaying version
management via pure FEBE anyway.
What is FEBE?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Fri, Jan 20, 2012 at 2:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
Even if you give the version number in the CREATE EXTENSION command, it's by
convention that people actually maintain a sane versioning policy. If people
don't take version management seriously, you will quickly end up with five
different versions of an extension, all with version number 0.1.
Projects are taking it seriously, and invest a lot of effort in it.
There is no shortage of schema versioning frameworks, of varying
levels of maturity, but some are quite complete by the standards of
their users. However, there is little knowledge shared between them,
and no database gives them much support, so idiosyncrasy becomes
inevitable.
Unless one makes loading regular, unpackaged operators via SQL
impossible (which, on the face of it, doesn't sound like a good
idea to me), the only people who will bother with CREATE EXTENSION
with inline content will be those who care and want to maintain
version control.
I stick to my original assessment: the ship has sailed, or perhaps,
more accurately, is sailing. Perhaps one could gain wisdom by
thinking of this problem differently: "I'm going to write a migration
tool to help developers in my framework or workflow deal with change,
and I really would like it if my database helped me by....".
I don't think we should fool ourselves that we'll be letting people
shoot themselves in the foot with regard to versioning if we give them
versions. People have been painfully and assiduously trying to avoid
version problems for some time now, with no help from the database,
and they do it entirely via pure SQL statements from Postgres' point
of view.
Another approach is to use comments on the objects saying "version 1.23".
Those generally move together with the objects themselves; they are included
in pg_dump schema-only dump, for example, while the contents of a table are
not.
Idiosyncratic, but perhaps useful for some people. Again, workarounds
are possible and, per my previous statements, even pervasive. That
doesn't mean it's something we should avoid assisting with.
What is FEBE?
FrontEnd BackEnd Protocol, I thought? The one libpq speaks. Is there a
better name?
Here are some reasons not do this feature, in my mind:
* Is the world ready yet to think about versioning operators in a
database the same way software is versioned?
* Related to that: If the feature is written, it will have to be
supported. Does the feature have enough impact at this time?
* Do we know how to design the feature correctly so it will attract
use, especially as a Postgres-ism? This is related in some part to
the decaying importance (my perception) of 'database agnostic'.
* Are the awkward user-land versioning strategies good enough? Would
it be better to focus elsewhere where things are even *more* awkward?
My assessment is: I know some people who would use this feature, but a
broad swathe are not loading too many operators into their database.
It's hard to know if that's because packaging operators in databases
has been so abysmally bad in the industry at large, or because the
choice of sandboxed languages is just not appealing for, say, a
Python or Ruby developer to write things that are not a series of SQL
statements, a la PL/pgSQL, and a lot of that (sans triggers) can be done
over the wire anyway. Maybe embedded Javascript can help with this,
but it's not the Here and Now.
If I had to invest in complexity with regard to versioning, I'd
rather invest in Postgres being able to tell me a hashed version of
all, or a controllable subset, of the tables, types, and attributes in
the database, so as to quickly pick out drift between a freshly created
database (produced by the migrations on disk against development) and
any cruft that sneaks into a production server that is thought to be
the same as the others. That would have huge impact on common use
cases at this very moment in time, so it provides a reasonable
backdrop to evaluate the cost/impact of the proposed feature.
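A rough sketch of such a schema hash over the existing catalogs; the choice of objects, the ordering, and md5 are all illustrative assumptions:

```sql
SELECT md5(string_agg(def, E'\n' ORDER BY def)) AS schema_hash
FROM (
    -- one line per user-defined function signature
    SELECT format('%s.%s(%s)', n.nspname, p.proname,
                  pg_get_function_identity_arguments(p.oid)) AS def
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
    UNION ALL
    -- one line per user table column and its type
    SELECT format('%s.%s.%s %s', n.nspname, c.relname, a.attname,
                  format_type(a.atttypid, a.atttypmod))
    FROM pg_attribute a
    JOIN pg_class c ON c.oid = a.attrelid
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE a.attnum > 0 AND NOT a.attisdropped
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
) defs;
```

Comparing this value across the fleet against a freshly migrated development database would flag drifted servers without inspecting each one by hand.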
--
fdr
On Fri, Jan 20, 2012 at 3:33 PM, Daniel Farina <daniel@heroku.com> wrote:
On Fri, Jan 20, 2012 at 2:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
Even if you give the version number in the CREATE EXTENSION command, it's by
convention that people actually maintain a sane versioning policy. If people
don't take version management seriously, you will quickly end up with five
different versions of an extension, all with version number 0.1.
Projects are taking it seriously, and invest a lot of effort in it.
There is no shortage of schema versioning frameworks, of varying
levels of maturity, but some are quite complete by the standards of
their users. However, there is little knowledge shared between them,
and no database gives them much support, so idiosyncrasy becomes
inevitable.
Speak of the devil. Someone just posted use of extension versioning to
manage schemas (using the existing plain-old-files mechanism):
http://philsorber.blogspot.com/2012/01/deploy-schemata-like-boss.html
He also links to a -hackers post Dimitri wrote last December.
A few anecdotes do not constitute evidence, but it does look like
some people pay attention to any additional versioning foothold they
can get.
--
fdr
On Sun, Jan 22, 2012 at 3:20 PM, Daniel Farina <daniel@heroku.com> wrote:
A few anecdotes do not constitute evidence, but it does look like
some people pay attention to any additional versioning foothold they
can get.
Sure, but just because some people do it doesn't make it a good idea.
I can personally attest to having done many things over the years
which initially appeared to be good ideas, but later turned out to be
mind-bogglingly dumb. It's true that if we make it easy for people to
use the extension mechanism as a way of storing versioning information
for user code, then people will use it for that purpose, but the
question is whether that's really what we want. I don't see what
advantage it gives us. Dimitri's proposal was to neuter the pg_dump
support that is the raison d'être of the extension mechanism. That's
clearly necessary if you don't want to end up with an unreloadable
database, but it begs the question (which no one's really answered
AFAICT) of what good the extension mechanism is without that feature.
There are certainly easier ways to remember a version number than
building support for it into core. If people create their own
versioning mechanisms, they can create something which is tailor-made
for their particular requirements, rather than relying on decisions
which we made in core that may or may not be right for them (e.g. the
lack of version ordering, or even that we have versions rather than
some more general type of control table). I don't want to prejudge
the outcome and say, oh, we should never have support for this concept
in core - but neither do I want to embark on that project without a
detailed understanding of where and how it is adding value. If people
are doing management via "pure FEBE", good for them: but that doesn't
explain why it shouldn't be done all in userspace, with all of the
flexibility that gives.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
... If people
are doing management via "pure FEBE", good for them: but that doesn't
explain why it shouldn't be done all in userspace, with all of the
flexibility that gives.
On reflection it seems like this patch is simply offering the wrong
solution for the problem. I agree that it could be useful to install
extensions without having direct access to the server's filesystem,
but it doesn't seem to follow that we must lobotomize existing extension
features in order to have that. I pointed out earlier that you could
get such functionality via contrib/adminpack, though people not
unreasonably complained that that was pretty ugly and low-level.
But couldn't we define some SQL-level operations to allow installing
extension control and script files?
Probably the worst issue with that is that in typical installations,
the share/extension/ directory would be read-only to the server, and a
lot of people might be uncomfortable with making it writable. Not sure
whether we should consider inventing another place to keep
SQL-command-installed extensions, or just say "if you want this
functionality you have to make share/extension/ writable".
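To make the idea concrete, such an operation might look something like this (entirely hypothetical syntax, offered only as a sketch of the shape of the feature):

```sql
-- hypothetical: ship the script for a version over a normal connection,
-- storing it wherever SQL-command-installed extensions end up living
CREATE EXTENSION SCRIPT pair VERSION '1.0' AS
$pair$
CREATE TYPE pair AS (k text, v text);
$pair$;

-- after which the regular command needs no file system access at all
CREATE EXTENSION pair;
```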
regards, tom lane
On Sun, Jan 22, 2012 at 8:42 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Sun, Jan 22, 2012 at 3:20 PM, Daniel Farina <daniel@heroku.com> wrote:
A few anecdotes do not constitute evidence, but it does look like
some people pay attention to any additional versioning foothold they
can get.
Sure, but just because some people do it doesn't make it a good idea.
True, I did not mean to suggest that this is clearly the best
mechanism, only to express support for the general idea that people
are appreciative of, and willing to experiment with, any way to make
version management better, and people who use those features are
*probably* going to try to use them correctly. It's up to us to make
the right thing easy, too, otherwise I fear we will see too many lousy
version numbers creeping about in the wild.
Dimitri's proposal was to neuter the pg_dump
support that is the raison d'être of the extension mechanism. That's
clearly necessary if you don't want to end up with an unreloadable
database, but it begs the question (which no one's really answered
AFAICT) of what good the extension mechanism is without that feature.
Oh, no, non-reloadability is a really bad thing -- I'd say a pretty
bad deal-breaker -- but as Tom wrote, it does seem like it should
somehow be a tractable problem.
Is it such a bad idea to store the literal text of the extension's
pieces (control file and corresponding SQL program) in catalogs? I'm
not sure if I understand why everyone is so interested in a special
interaction with the file system in some way. By the same token,
extensions can be dumped in the literal syntax -- even the ones that
were installed from a file.
There are certainly easier ways to remember a version number than
building support for it into core. If people create their own
versioning mechanisms, they can create something which is tailor-made
for their particular requirements, rather than relying on decisions
which we made in core that may or may not be right for them (e.g. the
lack of version ordering, or even that we have versions rather than
some more general type of control table).
I understand the desire to avoid investing in something that is not
what people want. However, in the interest of scoping the discussion
to the inline extension support, I can't seem to understand the
objection to supporting what is basically a different transport for
precisely the same semantic operation as having to ssh into a machine
and untar some files, except available without the bizarre
side-channel of ssh and file system mangling when one is loading
trustable operators, itself a raft of usability issues if one wishes
to enable more software reuse.
--
fdr