Do we need to use more meaningful variables to replace 0 in catalog header files?
Hi guys,
Although we do not usually change the system catalogs, modify the catalog
schema, or add a new system catalog, I think the catalog header files,
such as pg_xxx.h, should use more meaningful names. As we know, the
pg_xxx.h files insert initial values into the system catalogs, as shown
below for pg_class.h:
DATA(insert OID = 1247 ( pg_type PGNSP 71 0 PGUID 0 0 0 0 0 0 0 f f p r 30
0 t f f f f f f t n 3 1 _null_ _null_ ));
DESCR("");
DATA(insert OID = 1249 ( pg_attribute PGNSP 75 0 PGUID 0 0 0 0 0 0 0 f f p
r 21 0 f f f f f f f t n 3 1 _null_ _null_ ));
DESCR("");
It is tedious to figure out what these numbers really mean. For example,
if I want to know what the value '71' represents, I have to go back to
the definition of the pg_class struct. That is neither maintainable nor
readable. I think we should use a meaningful name instead of '71'. For
example:
#define PG_TYPE_RELTYPE 71
Regards,
Hom.
On Tue, Nov 8, 2016 at 10:57 AM, Hao Lee <mixtrue@gmail.com> wrote:
It is tedious to figure out what these numbers really mean. For example,
if I want to know what the value '71' represents, I have to go back to
the definition of the pg_class struct. That is neither maintainable nor
readable. I think we should use a meaningful name instead of '71'. For
example:
#define PG_TYPE_RELTYPE 71
You'd need to make genbki.pl smarter about associating those variables
with the defined values, greatly increasing the amount of work it does as
well as its maintenance burden (see the PGUID handling, for example). I am
not saying that this is undoable, just that the complexity may not be
worth the potential readability gains.
--
Michael
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Mon, Nov 7, 2016 at 9:10 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
You'd need to make genbki.pl smarter about associating those variables
with the defined values, greatly increasing the amount of work it does as
well as its maintenance burden (see the PGUID handling, for example). I am
not saying that this is undoable, just that the complexity may not be
worth the potential readability gains.
Most of these files don't have that many entries, and they're not
modified that often. The elephant in the room is pg_proc.h, which is
huge, frequently-modified, and hard to decipher. But I think that's
going to need more surgery than just introducing named constants -
which would also have the downside of making the already-long lines
even longer.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Yes, I agree with you. These catalogs are not modified often. As you said,
pg_proc is modified often, and there is another issue: the dependencies
between the system catalogs and the system views. It is hard to maintain
consistency between these catalogs and views, and modifying them requires
extra care. So I wonder whether there is a smarter approach that would
make this easier.
On 9 November 2016 at 10:20, Hao Lee <mixtrue@gmail.com> wrote:
Yes, I agree with you. These catalogs are not modified often. As you said,
pg_proc is modified often, and there is another issue: the dependencies
between the system catalogs and the system views. It is hard to maintain
consistency between these catalogs and views, and modifying them requires
extra care. So I wonder whether there is a smarter approach that would
make this easier.
On Wed, Nov 9, 2016 at 6:33 AM, Robert Haas <robertmhaas@gmail.com> wrote:
Most of these files don't have that many entries, and they're not
modified that often. The elephant in the room is pg_proc.h, which is
huge, frequently-modified, and hard to decipher. But I think that's
going to need more surgery than just introducing named constants -
which would also have the downside of making the already-long lines
even longer.
I'd be pretty happy to see pg_proc.h in particular replaced with some
pg_proc.h.in with something sane doing the preprocessing. It's a
massive pain right now.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Robert Haas <robertmhaas@gmail.com> writes:
Most of these files don't have that many entries, and they're not
modified that often. The elephant in the room is pg_proc.h, which is
huge, frequently-modified, and hard to decipher. But I think that's
going to need more surgery than just introducing named constants -
which would also have the downside of making the already-long lines
even longer.
I don't think we need "named constants", especially not
manually-maintained ones. The thing that would help in pg_proc.h is for
numeric type OIDs to be replaced by type names. We talked awhile back
about introducing some sort of preprocessing step that would allow doing
that --- ie, it would look into some precursor file for pg_type.h and
extract the appropriate OID automatically. I'm too tired to go find the
thread right now, but it was mostly about building the long-DATA-lines
representation from something easier to edit.
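The preprocessing step being described could be sketched roughly as follows (illustrative Python, not the actual genbki.pl; the `:typename` marker syntax is a made-up placeholder for whatever symbolic notation the precursor file would use):

```python
import re

# Hypothetical precursor data: type name -> OID, as would be extracted
# from a pg_type.h precursor file.
PG_TYPE_OIDS = {"bool": 16, "cstring": 2275, "int4": 23}

def resolve_types(line, type_oids):
    """Expand :typename tokens in a DATA-like line into numeric OIDs."""
    def sub(match):
        name = match.group(1)
        if name not in type_oids:
            raise KeyError("unknown type name: " + name)
        return str(type_oids[name])
    return re.sub(r":(\w+)", sub, line)

# A line an editor would write, vs. the emitted numeric form:
print(resolve_types("prorettype=:bool proargtypes=:cstring", PG_TYPE_OIDS))
# prorettype=16 proargtypes=2275
```

The point being that the author-facing file stays symbolic, and only the generated .bki output carries raw OIDs.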
regards, tom lane
On Wed, Nov 9, 2016 at 1:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I don't think we need "named constants", especially not
manually-maintained ones. The thing that would help in pg_proc.h is for
numeric type OIDs to be replaced by type names. We talked awhile back
about introducing some sort of preprocessing step that would allow doing
that --- ie, it would look into some precursor file for pg_type.h and
extract the appropriate OID automatically. I'm too tired to go find the
thread right now, but it was mostly about building the long-DATA-lines
representation from something easier to edit.
You mean this one, I guess:
/messages/by-id/4d191a530911041228v621286a7q6a98d9ab8a2ed734@mail.gmail.com
--
Michael
Michael Paquier <michael.paquier@gmail.com> writes:
On Wed, Nov 9, 2016 at 1:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I don't think we need "named constants", especially not
manually-maintained ones. The thing that would help in pg_proc.h is for
numeric type OIDs to be replaced by type names. We talked awhile back
about introducing some sort of preprocessing step that would allow doing
that --- ie, it would look into some precursor file for pg_type.h and
extract the appropriate OID automatically. I'm too tired to go find the
thread right now, but it was mostly about building the long-DATA-lines
representation from something easier to edit.
You mean that I guess:
/messages/by-id/4d191a530911041228v621286a7q6a98d9ab8a2ed734@mail.gmail.com
Hmm, that's from 2009. I thought I remembered something much more recent,
like last year or so.
regards, tom lane
On Wed, Nov 9, 2016 at 11:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michael Paquier <michael.paquier@gmail.com> writes:
On Wed, Nov 9, 2016 at 1:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I don't think we need "named constants", especially not
manually-maintained ones. The thing that would help in pg_proc.h is for
numeric type OIDs to be replaced by type names. We talked awhile back
about introducing some sort of preprocessing step that would allow doing
that --- ie, it would look into some precursor file for pg_type.h and
extract the appropriate OID automatically. I'm too tired to go find the
thread right now, but it was mostly about building the long-DATA-lines
representation from something easier to edit.
You mean this one, I guess:
/messages/by-id/4d191a530911041228v621286a7q6a98d9ab8a2ed734@mail.gmail.com
Hmm, that's from 2009. I thought I remembered something much more recent,
like last year or so.
This perhaps:
* Re: Bootstrap DATA is a pita *
/messages/by-id/CAOjayEfKBL-_Q9m3Jsv6V-mK1q8h=ca5Hm0fecXGxZUhPDN9BA@mail.gmail.com
Thanks,
Amit
Amit Langote <amitlangote09@gmail.com> writes:
On Wed, Nov 9, 2016 at 11:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Hmm, that's from 2009. I thought I remembered something much more recent,
like last year or so.
This perhaps:
* Re: Bootstrap DATA is a pita *
/messages/by-id/CAOjayEfKBL-_Q9m3Jsv6V-mK1q8h=ca5Hm0fecXGxZUhPDN9BA@mail.gmail.com
Yeah, that's the thread I remembered. I think the basic conclusion was
that we needed a Perl script that would suck up a bunch of data from some
representation that's more edit-friendly than the DATA lines, expand
symbolic representations (regprocedure etc) into numeric OIDs, and write
out the .bki script from that. I thought some people had volunteered to
work on that, but we've seen no results ...
regards, tom lane
On Wed, Nov 9, 2016 at 10:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Yeah, that's the thread I remembered. I think the basic conclusion was
that we needed a Perl script that would suck up a bunch of data from some
representation that's more edit-friendly than the DATA lines, expand
symbolic representations (regprocedure etc) into numeric OIDs, and write
out the .bki script from that. I thought some people had volunteered to
work on that, but we've seen no results ...
If there are no barriers to adding it to our toolchain, could that
more-edit-friendly representation be a SQLite database?
I'm not suggesting we store a .sqlite file in our repo. I'm suggesting that
we store the dump-restore script in our repo, and the program that
generates the .bki script would query the generated SQLite db.
From that initial dump, any changes to pg_proc.h would be appended to the
dumped script
...
/* add new frombozulation feature */
ALTER TABLE pg_proc_template ADD COLUMN frombozulator text;
/* bubbly frombozulation is the default for volatile functions */
UPDATE pg_proc_template SET frombozulator = 'bubbly' WHERE provolatile = 'v';
/* proposed new function */
INSERT INTO pg_proc_template (proname, proleakproof) VALUES ('new_func', 'f');
That'd communicate the meaning of our changes rather nicely. A way to eat
our own conceptual dogfood.
Eventually it'd get cluttered and we'd replace the populate script with a
fresh ".dump". Maybe we do that as often as we reformat our C code.
I think Stephen Frost suggested something like this a while back, but I
couldn't find it after a short search.
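The pipeline suggested here could be sketched like so (illustrative Python; the table name pg_proc_template comes from the example above, but the columns and output format are only assumptions):

```python
import sqlite3

def build_db():
    """Replay the checked-in dump/populate script into an in-memory DB."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE pg_proc_template (oid INTEGER, proname TEXT, provolatile TEXT);
    INSERT INTO pg_proc_template VALUES (1242, 'boolin', 'i');
    """)
    return conn

def emit_data_lines(conn):
    """Query the generated DB to emit DATA-style lines for the .bki script."""
    return [
        f"DATA(insert OID = {oid} ( {name} ... {vol} ... ));"
        for oid, name, vol in conn.execute(
            "SELECT oid, proname, provolatile FROM pg_proc_template ORDER BY oid")
    ]

print(emit_data_lines(build_db())[0])
# DATA(insert OID = 1242 ( boolin ... i ... ));
```

The repo would carry only the SQL script; the database itself would be a build-time intermediate.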
Corey Huinker <corey.huinker@gmail.com> writes:
On Wed, Nov 9, 2016 at 10:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Yeah, that's the thread I remembered. I think the basic conclusion was
that we needed a Perl script that would suck up a bunch of data from some
representation that's more edit-friendly than the DATA lines, expand
symbolic representations (regprocedure etc) into numeric OIDs, and write
out the .bki script from that. I thought some people had volunteered to
work on that, but we've seen no results ...
If there are no barriers to adding it to our toolchain, could that
more-edit-friendly representation be a SQLite database?
I think you've fundamentally missed the point here. A data dump from a
table would be semantically indistinguishable from the lots-o-DATA-lines
representation we have now. What we want is something that isn't that.
In particular I don't see how that would let us have any extra level of
abstraction that's not present in the finished form of the catalog tables.
(An example that's already there is FLOAT8PASSBYVAL for the value of
typbyval appropriate to float8 and allied types.)
I'm not very impressed with the suggestion of making a competing product
part of our build dependencies, either. If we wanted to get into build
dependency circularities, we could consider using a PG database in this
way ... but I prefer to leave such headaches to compiler authors for whom
it comes with the territory.
regards, tom lane
On 2016-11-09 10:47 AM, Tom Lane wrote:
Yeah, that's the thread I remembered. I think the basic conclusion was
that we needed a Perl script that would suck up a bunch of data from some
representation that's more edit-friendly than the DATA lines, expand
symbolic representations (regprocedure etc) into numeric OIDs, and write
out the .bki script from that. I thought some people had volunteered to
work on that, but we've seen no results ...
Would a python script converting something like json or yaml be
acceptable? I think right now only perl is used, so it would be a new
build chain tool, albeit one that's in my (very humble) opinion much
better suited to the task.
On Thu, Nov 10, 2016 at 6:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I think you've fundamentally missed the point here. A data dump from a
table would be semantically indistinguishable from the lots-o-DATA-lines
representation we have now. What we want is something that isn't that.
In particular I don't see how that would let us have any extra level of
abstraction that's not present in the finished form of the catalog tables.
I was thinking several tables, with the central table having column values
which we find semantically descriptive, and having lookup tables to map
those semantically descriptive keys to the value we actually want in the
pg_proc column. It'd be a tradeoff of macros for entries in lookup tables.
I'm not very impressed with the suggestion of making a competing product
part of our build dependencies, either.
I don't see the products as competing, nor did the presenter of
https://www.pgcon.org/2014/schedule/events/736.en.html (title: SQLite:
Protégé of PostgreSQL). That talk made the case that SQLite's goal is to be
the foundation of file formats, not an RDBMS. I do understand wanting to
minimize build dependencies.
If we wanted to get into build
dependency circularities, we could consider using a PG database in this
way ... but I prefer to leave such headaches to compiler authors for whom
it comes with the territory.
Agreed, bootstrapping builds aren't fun. This suggestion was a way to have
a self-contained format that uses concepts (joining a central table to
lookup tables) already well understood in our community.
On Nov 11, 2016 00:53, "Jan de Visser" <jan@de-visser.net> wrote:
Would a python script converting something like json or yaml be
acceptable? I think right now only perl is used, so it would be a new build
chain tool, albeit one that's in my (very humble) opinion much better
suited to the task.
Python or Perl is not what matters here, really. For something as simple
as this script, it doesn't make a real difference. I personally prefer
Python over Perl in most cases, but our standard is Perl, so we should
stick to that.
The issue is coming up with a format that people like and think is an
improvement.
If we have that, and a Python script for it, someone would surely
volunteer to convert that part. But we need to start by solving the
actual problem.
/Magnus
On 11/11/2016 03:03 AM, Magnus Hagander wrote:
Python or Perl is not what matters here, really. For something as simple
as this script, it doesn't make a real difference. I personally prefer
Python over Perl in most cases, but our standard is Perl, so we should
stick to that.
The issue is coming up with a format that people like and think is an
improvement.
If we have that, and a Python script for it, someone would surely
volunteer to convert that part. But we need to start by solving the
actual problem.
+1. If we come up with an agreed format I will undertake to produce the
requisite perl script. So let's reopen the debate on the data format. I
want something that doesn't consume large numbers of lines per entry. If
we remove defaults in most cases we should be able to fit a set of
key/value pairs on just a handful of lines.
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
+1. If we come up with an agreed format I will undertake to produce the
requisite perl script. So let's reopen the debate on the data format. I
want something that doesn't consume large numbers of lines per entry. If
we remove defaults in most cases we should be able to fit a set of
key/value pairs on just a handful of lines.
The other reason for keeping the entries short is to prevent patch
misapplications: you want three or fewer lines of context to be enough
to uniquely identify which line you're changing. So something with,
say, a bunch of <tag></tag> overhead, with that markup split onto
separate lines, would be a disaster. This may mean that we can't
get too far away from the DATA-line approach :-(.
Or maybe what we need to do is ensure that there's identification info on
every line, something like (from the first entry in pg_proc.h)
boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s
(I'm imagining the prefix as having no particular semantic significance,
except that identical values on successive lines denote fields for a
single catalog row.)
With this approach, even if you had blocks of boilerplate-y lines
that were the same for many successive functions, the prefixes would
keep them looking unique to "patch".
On the other hand, Andrew might be right that with reasonable defaults
available, the entries would mostly be short enough that there wouldn't
be much of a problem anyway. This example certainly looks that way.
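For what it's worth, merging such prefix-keyed lines back into catalog rows is straightforward; a minimal sketch, assuming exactly the line layout shown above:

```python
import shlex

def parse_rows(text):
    """Group key=value lines by prefix; identical prefixes on successive
    lines denote fields of a single catalog row."""
    rows = {}
    order = []
    for line in text.strip().splitlines():
        prefix, rest = line.split(":", 1)
        if prefix not in rows:
            rows[prefix] = {}
            order.append(prefix)
        for token in shlex.split(rest):   # honors quoted values
            key, _, value = token.partition("=")
            rows[prefix][key] = value
    return [rows[p] for p in order]

sample = """\
boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s
"""
row, = parse_rows(sample)
print(row["OID"], row["proargtypes"])
# 1242 cstring
```

Since every line is self-identifying, reordering or partially patching the entries can't silently merge fields from different rows.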
regards, tom lane
On 11/11/2016 11:10 AM, Tom Lane wrote:
boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s
I have written a little perl script to turn the pg_proc DATA lines into
something like the format suggested. In order to keep the space used as
small as possible, I used a prefix based on the OID. See attached result.
Still plenty of work to go, e.g. grabbing the DESCR lines, and turning
this all back into DATA/DESCR lines, but I wanted to get this out there
before going much further.
The defaults I used are below (commented out keys are not defaulted,
they are just there for completeness).
my %defaults = (
# oid =>
# name =>
namespace => 'PGNSP',
owner => 'PGUID',
lang => '12',
cost => '1',
rows => '0',
variadic => '0',
transform => '0',
isagg => 'f',
iswindow => 'f',
secdef => 'f',
leakproof => 'f',
isstrict => 'f',
retset => 'f',
volatile => 'v',
parallel => 'u',
# nargs =>
nargdefaults => '0',
# rettype =>
# argtypes =>
allargtypes => '_null_',
argmodes => '_null_',
argnames => '_null_',
argdefaults => '_null_',
trftypes => '_null_',
# src =>
bin => '_null_',
config => '_null_',
acl => '_null_',
);
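To illustrate how such defaults would be consumed when regenerating the full rows (a sketch in Python rather than Perl, using only a subset of the fields above):

```python
# Defaults mirroring a subset of the %defaults hash above; an entry in
# the edit-friendly file lists only the fields that differ from these.
DEFAULTS = {
    "namespace": "PGNSP",
    "owner": "PGUID",
    "lang": "12",
    "cost": "1",
    "isstrict": "f",
    "volatile": "v",
    "acl": "_null_",
}

def complete_row(entry):
    """Merge a sparse entry over the defaults to rebuild the full row."""
    row = dict(DEFAULTS)
    row.update(entry)
    return row

row = complete_row({"oid": "1242", "name": "boolin", "volatile": "i"})
print(row["volatile"], row["cost"])
# i 1
```

Emitting the DATA/DESCR lines would then just be a matter of printing the completed row's fields in catalog-column order.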
cheers
andrew
Attachment: proc_data_lines.txt (text/plain)
On 11/11/16 11:10 AM, Tom Lane wrote:
boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s
Then we're not very far away from just using CREATE FUNCTION SQL commands.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2016-11-13 00:20:22 -0500, Peter Eisentraut wrote:
On 11/11/16 11:10 AM, Tom Lane wrote:
boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s
Then we're not very far away from just using CREATE FUNCTION SQL commands.
Well, those do a lot of syscache lookups, which in turn do lookups for
functions...
Andres