Want to add to contrib.... xmldbx
I have a fairly simple extension I want to add to contrib. It is an XML
parser that is designed to work with a specific dialect.
I have a PHP extension called xmldbx that allows the PHP system to
serialize its web session data to an XML stream (or just serialize
variables). PHP's normal serializer is a non-standard, home-grown format
that is impossible to read. The xmldbx format may not be much easier,
but I have a format document for it. I also have a PostgreSQL extension
that can take serialized data and use it in a query, like:
select xmldbx('data.userdata.id', sessions.data);
If the PHP variable $userdata['id'] is set to some value in its session
data, that value will be returned.
This allows a lot of flexibility and tight integration between
PostgreSQL and PHP.
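To make the idea concrete, here is a minimal sketch of the dotted-path lookup that a call like select xmldbx('data.userdata.id', sessions.data) implies. The document shape, tag names, and the lookup function are illustrative assumptions, not the actual xmldbx format:

```python
# Hypothetical sketch of a dotted-path lookup over serialized XML session
# data. The tag names and document shape below are invented for
# illustration; the real xmldbx format is defined in its format document.
import xml.etree.ElementTree as ET

def lookup(path, xml_text):
    """Walk a dotted path ('data.userdata.id') through element tags."""
    node = ET.fromstring(xml_text)
    parts = path.split(".")
    if parts[0] != node.tag:
        return None
    for part in parts[1:]:
        node = node.find(part)
        if node is None:
            return None
    return node.text

doc = "<data><userdata><id>1001</id></userdata></data>"
print(lookup("data.userdata.id", doc))  # -> 1001
```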
For more information:
http://www.mohawksoft.org -- (The open source end of Mohawk Software :-)
What do you all think?
[removing -patches since no patch was attached]
Mark Woodward wrote:
I have a fairly simple extension I want to add to contrib. It is an XML
parser that is designed to work with a specific dialect.

I have a PHP extension called xmldbx that allows the PHP system to
serialize its web session data to an XML stream (or just serialize
variables). PHP's normal serializer is a non-standard, home-grown format
that is impossible to read. The xmldbx format may not be much easier,
but I have a format document for it. I also have a PostgreSQL extension
that can take serialized data and use it in a query, like:

select xmldbx('data.userdata.id', sessions.data);

If the PHP variable $userdata['id'] is set to some value in its session
data, that value will be returned.

This allows a lot of flexibility and tight integration between
PostgreSQL and PHP.
This sounds highly specialised, and probably more appropriate for a
pgfoundry project.
In any case, surely the whole point about XML is that you shouldn't need
to construct custom parsers. Should we include a specialised parser for
every XML dialect someone might want to store in the database?
cheers
andrew
On Sun, Jan 29, 2006 at 02:04:47PM -0500, Mark Woodward wrote:
[removing -patches since no patch was attached] This sounds highly
specialised, and probably more appropriate for a pgfoundry project.

XML is not really much more than a language; it says virtually
nothing about content. Content requires custom parsers.
<aol>I also think this would make a great pgfoundry project :)</aol>
Cheers,
D
--
David Fetter david@fetter.org http://fetter.org/
phone: +1 415 235 3778
Remember to vote!
[removing -patches since no patch was attached]
This sounds highly specialised, and probably more appropriate for a
pgfoundry project.

In any case, surely the whole point about XML is that you shouldn't need
to construct custom parsers. Should we include a specialised parser for
every XML dialect someone might want to store in the database?

XML is not really much more than a language; it says virtually nothing
about content. Content requires custom parsers.
While I understand the notion that we don't want to have a custom parser
for every XML data spec, I did commit the xmldbx extension into the PHP
extensions.
MySQL has great street cred because it is very well integrated with PHP.
This extension may appeal to PHP users and make PostgreSQL more enticing
to them.
David Fetter <david@fetter.org> writes:
<aol>I also think this would make a great pgfoundry project :)</aol>
Yeah ... unless there's some reason that it needs to be tied to PG
server releases, it's better to put it on pgfoundry where you can
have your own release cycle.
regards, tom lane
Mark Woodward wrote:
XML is not really much more than a language, it says virtually nothing
about content. Content requires custom parsers.
Really? Strange, I've been dealing with it all this time without having
to construct a parser. What you do need is to provide event handlers to a
stream parser such as SAX, or use an API such as DOM to insert/extract
data to/from the XML.
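For what it's worth, the event-handler approach fits in a few lines. This is a generic SAX sketch in Python rather than PHP, and the <session>/<var> document shape is an invented illustration:

```python
# Generic SAX sketch: extract values from an XML stream by supplying
# event handlers, with no hand-written parser. The document shape here
# is invented for illustration.
import xml.sax

class VarHandler(xml.sax.ContentHandler):
    """Collects <var name="..."> text content into a dict."""
    def __init__(self):
        super().__init__()
        self.vars = {}
        self._name = None
        self._buf = []

    def startElement(self, tag, attrs):
        if tag == "var":
            self._name = attrs["name"]
            self._buf = []

    def characters(self, text):
        if self._name is not None:
            self._buf.append(text)

    def endElement(self, tag):
        if tag == "var" and self._name is not None:
            self.vars[self._name] = "".join(self._buf)
            self._name = None

handler = VarHandler()
xml.sax.parseString(b'<session><var name="id">42</var></session>', handler)
print(handler.vars["id"])  # -> 42
```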
While I understand the notion that we don't want to have a custom parser
for every XML data spec, I did commit the xmldbx extension into the PHP
extensions.

MySQL has great street cred because it is very well integrated with PHP.
This extension may appeal to PHP users and make PostgreSQL more enticing
to them.
Postgres generally seems to favor extensibility over integration, and I
generally agree with that approach.
Before we even consider it I would want to see how much traction your
PHP extension gets.
Also, what are its dependencies? If it's dependent on PHP headers to
build or libraries to run that would be unfortunate - we might need to
invent some sort of configure flag to include/exclude some contrib
modules in that case.
cheers
andrew
On 1/29/06, Mark Woodward <pgsql@mohawksoft.com> wrote:
I generally agree as well, but.....
I think there is always a balance between "out of the box" vs.
"extensibility." I think integration and extensibility are fantastic for
adoption of your product, but "oobe" (out-of-box experience) is
important for those you want to target.
Unlike MySQL, PostgreSQL does not target one audience (i.e. the "low-end").
PostgreSQL targets various groups (scientific, enterprise, hobby, OLTP, ...)
which means that providing a nice OOBE for each of these targets would bloat
the main distribution. No one feels that your project isn't a good one; we
just suggest that moving it to a pgfoundry project would be a better idea.
By all practical measures PostgreSQL is miles ahead of MySQL, but MySQL
wins because it is the de facto PHP database. PostgreSQL does not target
PHP in any real sense; I am proposing adding this extension to change
that.
In most PHP applications I've seen, MySQL wins only because it is the
developer's default database. Likewise, most PHP/MySQL-only applications
are generally poorly designed and developed. To this day, I know of no
specific features in MySQL that make it a nicer database for PHP.
Having a PHP serializer that hooks up to the database in a usable way,
IMHO, makes a strong link between the two.
I agree that it's a nice project, but how many people use PostgreSQL with
languages other than PHP? A language-specific extension would, IMHO, need
to have a real good reason to be included in the main distribution.
This is the classic chicken and the egg thing that kills a potentially
great idea.
Unfortunately, this can be said for everything. However, in the end this
thinking does tend to kill more bad ideas than good ones.
Step back and ask yourself: "is it something you see as valuable?" If you
answer no, then there is no point in discussing it anymore. If you answer
"yes," then consider the PHP admin who wants to be able to accomplish what
this provides. The PHP guy will be used to MySQL and not know anything
about PostgreSQL; wouldn't making the barrier to entry lower make sense?
Yes, the project is a great idea. I would surely like to see it in a
pgfoundry project. I also wish we also had a better PostgreSQL developer
tools area on the website which would list these types of projects for PHP,
C, C++, .NET, Java, ...
If it is a "good idea" then, making it easier to do makes sense.
If I were you, I'd just create a pgfoundry project which includes the
contrib module and a few examples.
I wrote it so it could be in the contrib with nothing but expat (which
most systems have.)
We've encountered issues like this with readline, crypt, etc. and I know of
several Linux distros which do not include expat. Likewise, I don't recall
ever seeing expat on Solaris or Windows in a non-application-specific
installation.
Mark Woodward wrote:
XML is not really much more than a language; it says virtually nothing
about content. Content requires custom parsers.

Really? Strange, I've been dealing with it all this time without having
to construct a parser. What you do need is to provide event handlers to a
stream parser such as SAX, or use an API such as DOM to insert/extract
data to/from the XML.
Yes, those are applications that allow you to create a data handler;
beyond expat, basically everything else is a custom parser.
While I understand the notion that we don't want to have a custom parser
for every XML data spec, I did commit the xmldbx extension into the PHP
extensions.

MySQL has great street cred because it is very well integrated with PHP.
This extension may appeal to PHP users and make PostgreSQL more enticing
to them.

Postgres generally seems to favor extensibility over integration, and I
generally agree with that approach.
I generally agree as well, but.....
I think there is always a balance between "out of the box" vs.
"extensibility." I think integration and extensibility are fantastic for
adoption of your product, but "oobe" (out-of-box experience) is
important for those you want to target.
By all practical measures PostgreSQL is miles ahead of MySQL, but MySQL
wins because it is the de facto PHP database. PostgreSQL does not target
PHP in any real sense; I am proposing adding this extension to change
that.
Having a PHP serializer that hooks up to the database in a usable way,
IMHO, makes a strong link between the two.
Before we even consider it I would want to see how much traction your
PHP extension gets.
This is the classic chicken and the egg thing that kills a potentially
great idea.
Step back and ask yourself: "is it something you see as valuable?" If you
answer no, then there is no point in discussing it anymore. If you answer
"yes," then consider the PHP admin who wants to be able to accomplish what
this provides. The PHP guy will be used to MySQL and not know anything
about PostgreSQL; wouldn't making the barrier to entry lower make sense?
If it is a "good idea" then, making it easier to do makes sense.
Also, what are its dependencies? If it's dependent on PHP headers to
build or libraries to run that would be unfortunate - we might need to
invent some sort of configure flag to include/exclude some contrib
modules in that case.
I wrote it so it could be in the contrib with nothing but expat (which
most systems have.)
David Fetter <david@fetter.org> writes:
<aol>I also think this would make a great pgfoundry project :)</aol>
Yeah ... unless there's some reason that it needs to be tied to PG
server releases, it's better to put it on pgfoundry where you can
have your own release cycle.
I don't need pgfoundry, per se, but I was hoping that it could be part of
the core distribution.
My personal feelings are these: I use PHP a lot and I use PostgreSQL a lot
for what I do. PHP plays favorites; they actively promote MySQL even
though it is a bad database in the database sense.
I would like to see PostgreSQL, at least passively, include niceties for
PHP users. In quaint terms, "show them you care."
On a side note, PostgreSQL is a little too self-centric. It is a great
project that doesn't get the credit it deserves, and I think that is
because the project, as a whole, doesn't try to actively court or support
popular applications.
Pgfoundry, and formerly gborg, are great, but unless you actively promote
them people can't find them. There should be a big huge button and/or link
to pgfoundry that shows how much is available for PostgreSQL.
Mark Woodward wrote:
There should be a big huge button and/or link
to pgfoundry that shows how much is available for PostgreSQL.
While there are links to 'em mentioned on the web site, I agree that
making (particularly) Pgfoundry more prominent would be a good idea.
Mark, do you want to suggest that on -advocacy (since I suspect that's
the place to get it to happen)?
Cheers
(the other) Mark
On Jan 30, 2006, at 12:23 , Andrew Dunstan wrote:
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you.
CPAN modules, Ruby gems, PgFoundry ingots? :)
Michael Glaesemann
grzm myrealbox com
Mark Kirkwood said:
Mark Woodward wrote:
There should be a big huge button and/or link
to pgfoundry that shows how much is available for PostgreSQL.

While there are links to 'em mentioned on the web site, I agree that
making (particularly) Pgfoundry more prominent would be a good idea.

Mark, do you want to suggest that on -advocacy (since I suspect that's
the place to get it to happen)?
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you. Then we could publish many
many modules on pgfoundry, their authors could look after them, and
installing them would be trivial. pgxs should make such a thing a lot
simpler in many cases.
Of course, building it would be quite a bit of work :-)
cheers
andrew
On Mon, Jan 30, 2006 at 12:20:25PM +0900, Michael Glaesemann wrote:
On Jan 30, 2006, at 12:23 , Andrew Dunstan wrote:
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you.

CPAN modules, Ruby gems, PgFoundry ingots? :)
Tusks? (Extensions of the elephant.)
--
Michael Fuhr
On Sunday 29 January 2006 22:23, Andrew Dunstan wrote:
Mark Kirkwood said:
Mark Woodward wrote:
There should be a big huge button and/or link
to pgfoundry that shows how much is available for PostgreSQL.

While there are links to 'em mentioned on the web site, I agree that
making (particularly) Pgfoundry more prominent would be a good idea.

Mark, do you want to suggest that on -advocacy (since I suspect that's
the place to get it to happen)?
I hate to repeat this sad story, but we're stalling due to the need for server
upgrades on the foundry to make it more robust. Eventually it will get more
play I imagine.
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you. Then we could publish many
many modules on pgfoundry, their authors could look after them, and
installing them would be trivial. pgxs should make such a thing a lot
simpler in many cases.

Of course, building it would be quite a bit of work :-)
Actually I don't think it would be all that hard. You just need to have each
project produce an xml file with bits of package information (name,
dependencies, version info, etc...) which could then be combined with all the
other packages to produce a complete list of available packages. You then
just need a binary installed with postgresql that can grab the latest copy of
the xml list so it can present a list of packages to install. It then
downloads the packages from pgfoundry directly. The biggest issue is probably
getting the various packages to provide a similar structure for downloading,
but if you got the base system working I think they would be willing to
comply.
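As a rough illustration of that scheme, here is a sketch of merging per-project XML fragments into one package list. The element names and fields are assumptions for illustration, not a real pgfoundry format:

```python
# Hypothetical package index: each project publishes name, version and
# dependencies; a client merges the fragments and lists what's available.
# The XML vocabulary here is invented for illustration.
import xml.etree.ElementTree as ET

INDEX = """
<packages>
  <package name="xmldbx" version="0.1">
    <depends name="expat"/>
  </package>
  <package name="pljava" version="1.1">
    <depends name="jvm"/>
  </package>
</packages>
"""

def list_packages(index_xml):
    """Return (name, version, [dependency names]) for each package."""
    root = ET.fromstring(index_xml)
    return [
        (pkg.get("name"), pkg.get("version"),
         [d.get("name") for d in pkg.findall("depends")])
        for pkg in root.findall("package")
    ]

for name, version, deps in list_packages(INDEX):
    print(name, version, deps)
```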
--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Andrew,
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you. Then we could publish many
many modules on pgfoundry, their authors could look after them, and
installing them would be trivial. pgxs should make such a thing a lot
simpler in many cases.

Of course, building it would be quite a bit of work :-)
Yeah, just ask the Perl folks. I believe that CPAN took about 4 years
to get working.
--Josh
Robert Treat wrote:
On Sunday 29 January 2006 22:23, Andrew Dunstan wrote:
Mark Kirkwood said:
...
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you. Then we could publish many
many modules on pgfoundry, their authors could look after them, and
installing them would be trivial. pgxs should make such a thing a lot
simpler in many cases.

Of course, building it would be quite a bit of work :-)
Actually I don't think it would be all that hard. You just need to have each
project produce an xml file with bits of package information (name,
dependencies, version info, etc...) which could then be combined with all the
other packages to produce a complete list of available packages.
While I'm all for the idea, I don't think the effort should be underestimated. At the least it
must be *very* well scoped. Chances are, it becomes an extremely huge and complex task. Here
are some thoughts that might be worth considering:
The version info and dependencies tend to become quite complex very fast, especially if you
have an arbitrary depth in component dependencies. You need to define how "soft" your
versioned dependencies are, for instance: is a dependency on 1.0.4 of some component
automatically supplanted when a new bug fix (1.0.5) is published? If not, can I still get
the source for 1.0.4 should I require it? Will there be a source repository where all old
bundles are found? How is that repository maintained and structured? The metadata that
describes the versions of a component and the dependencies that each version has (they may
vary), where does that live? What happens if, when you resolve the dependencies for a
certain configuration, you end up with a graph containing two versions of the same component?
You then
just need a binary installed with postgresql that can grab the latest copy of
the xml list so it can present a list of packages to install. It then
downloads the packages from pgfoundry directly.
You have a binary, so I guess that installing other packages means installing other binaries?
Are they presumed to be found in binary form at PgFoundry or must they be compiled? If
it's the former, then maintaining binaries is a huge undertaking. Matching binary
dependencies can be a really complex task, even if the platform is one and the same. If it's
the latter, you impose a development environment on the end user. In the Windows world,
that's probably equal to an Msys/MinGW installation (shrug).
Some packages will perhaps not require compilation but must attach to some other
prerequisite software that must be installed on your machine (a specific version of Perl or
Java VM for instance). How would the modules express such dependencies? Perhaps they can be
fulfilled in a multitude of ways? To cope with that, you must introduce virtual components
(interfaces with multiple implementations).
Are you thinking of a generic package-install program that will function on all platforms, or
do you suggest something that platform-specific installers can hook into? For instance, how
does the Windows installer fit in?
The biggest issue is probably
getting the various packages to provide a similar structure for downloading,
but if you got the base system working I think they would be willing to
comply.
Some packages have dependencies on packages that already provide such a structure (CPAN,
Ruby, Maven, to name a few). A packaging tool that makes things easy to install must be able
to cope with that too, which makes things even worse.
Kind regards,
Thomas Hallgren
On Mon, Jan 30, 2006 at 10:25:39AM +0100, Thomas Hallgren wrote:
Actually I don't think it would be all that hard. You just need to have
each project produce an xml file with bits of package information (name,
dependencies, version info, etc...) which could then be combined with all
the other packages to produce a complete list of available packages.

While I'm all for the idea, I don't think the effort should be
underestimated. At the least it must be *very* well scoped. Chances are, it
becomes an extremely huge and complex task. Here are some thoughts that might
be worth considering:
<snip>
To be honest, I think XML is way overkill. Simply provide a makefile
that has the targets install-check, build and install and maybe also
check. Provide a standard way for people to download projects.
CPAN is a nice example, but really it's mostly a frontend to makefiles.
IMHO, stuff on PgFoundry is not going to become popular because we put
links there. It's going to become popular if/when other distributors
can write things like:
for i in <list of projects> ; do
download package
unpack
make install-check (check if dependencies are good)
make build
make install
build package from installed stuff
done
If the makefiles support DESTDIR and a few other such variables,
something like Debian could make a postgresql-goodies. Currently Debian
provides contrib because it's got a standard method of compiling. A
good test is just unpacking the project in the contrib directory of the
postgresql source and running make. If it produces something that
works, you've got the problem licked.
Similarly for RPMs, if a standard top-level spec file can make a
working package, it's going to be easier for other people to
incorporate.
You then
just need a binary installed with postgresql that can grab the latest copy
of the xml list so it can present a list of packages to install. It then
downloads the packages from pgfoundry directly.
I suppose the only thing that really needs to happen is some kind of
index that can be downloaded so people can find stuff. And so an
automated program can actually download the right file. I think
worrying about dependencies at this stage is overkill. Let's get things
to the stage where people can say:
$ make install-check
*** Sorry, foomatic 1.07 must be installed
And then we can worry about automatically resolving them.
Some packages have dependencies on packages that already provide such a
structure (CPAN, Ruby, Maven, to name a few). A packaging tool that makes
things easy to install must be able to cope with that too, which makes
things even worse.
I don't think that's a real issue. For example, I use Debian. If
something on PgFoundry has been packaged as a Debian package, I'm ten
times more likely to install it than otherwise. What we should be
aiming for is making stuff easy enough to install so that someone in an
afternoon can download 10 projects and create 10 Debian packages, 10
Redhat packages or 10 MSI installer packages.
This is a much easier task for us because we only have to provide the
mechanism but not all the handholding. It also means the end result is
something that integrates with the users system better because the
packager can tweak it for the environment, and the author doesn't need
to care.
Indeed, most of the popular pgfoundry projects have already been
packaged. Which was first, the popularity or the packaging?
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.
Josh Berkus wrote:
Andrew,
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you. Then we could publish many
many modules on pgfoundry, their authors could look after them, and
installing them would be trivial. pgxs should make such a thing a lot
simpler in many cases.

Of course, building it would be quite a bit of work :-)
Yeah, just ask the Perl folks. I believe that CPAN took about 4 years
to get working.
They were kinda blazing a trail.
I don't think Ruby Gems took anything like that long to flesh out.
I agree with most of what Martijn said elsewhere. We shouldn't try to
overengineer something like this.
cheers
andrew
Michael Fuhr wrote:
On Mon, Jan 30, 2006 at 12:20:25PM +0900, Michael Glaesemann wrote:
On Jan 30, 2006, at 12:23 , Andrew Dunstan wrote:
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you.

CPAN modules, Ruby gems, PgFoundry ingots? :)
Tusks? (Extensions of the elephant.)
Trunks?
cheers
andrew
Andrew Dunstan wrote:
Michael Fuhr wrote:
On Mon, Jan 30, 2006 at 12:20:25PM +0900, Michael Glaesemann wrote:
On Jan 30, 2006, at 12:23 , Andrew Dunstan wrote:
A nicer idea would be something like a utility we could ship that will
download, build and install module foo for you.

CPAN modules, Ruby gems, PgFoundry ingots? :)
Tusks? (Extensions of the elephant.)
Trunks?
Dung?