pg_get_constraintdef() doesn't always give an equal constraint
create table pt(a int, b int, c float8 check (c < '10.1'));
create table ct(a int, b int, c float8);
select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
pg_get_constraintdef
--------------------------------------
CHECK ((c < 10.1::double precision))
(1 row)
-- copy constraint to "ct" using syntax given above
alter table ct add constraint pt_c_check CHECK ((c < 10.1::double precision));
select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
pg_get_constraintdef
----------------------------------------
CHECK ((c < 10.1::double precision))
CHECK ((c < (10.1)::double precision))
(2 rows)
-- notice extra parentheses above
-- now, we can't attach "ct" as an inheritance child of "pt"
alter table ct inherit pt;
ERROR: child table "ct" has different definition for check constraint "pt_c_check"
Also, the pg_dump output is different for pt and ct.
Strangely, the "\d" output is the same, so I tried using
pg_get_constraintdef with pretty-printing mode, which works fine. But
that's contrary to the docs, which say:
"The pretty-printed format is more readable, but the default format is
more likely to be interpreted the same way by future versions of
PostgreSQL; avoid using pretty-printed output for dump purposes."
Regards,
Jeff Davis
--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
Jeff Davis <pgsql@j-davis.com> writes:
> create table pt(a int, b int, c float8 check (c < '10.1'));
> create table ct(a int, b int, c float8);
> select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
> pg_get_constraintdef
> --------------------------------------
> CHECK ((c < 10.1::double precision))
> (1 row)
> -- copy constraint to "ct" using syntax given above
> alter table ct add constraint pt_c_check CHECK ((c < 10.1::double precision));
> select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
> pg_get_constraintdef
> ----------------------------------------
> CHECK ((c < 10.1::double precision))
> CHECK ((c < (10.1)::double precision))
> (2 rows)
What's evidently happening here is that ruleutils.c thinks it can dump a
float8 constant using the syntax "10.1::double precision", but the parser
will interpret that as a numeric constant with a separate cast step.
I don't see any simple way around that except to dump using the syntax
'10.1'::double precision
which is ugly as sin, but perhaps we have no choice. A longer-term
solution might be to get the parser to interpret
10.1::double precision
as a float8 literal to start with, but that seems like it could have
unexpected side-effects? Not sure.
OTOH, you could argue that existing dump files already have the
unquoted-literal syntax so it behooves us to try to get the parser
to read them as they were meant.
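A sketch of that quoted-literal workaround against Jeff's tables (hypothetical, untested):

```sql
-- Hypothetical workaround (untested): quoting the literal makes the parser
-- coerce the string straight to a float8 Const, which should match the
-- parse tree of the original constraint, so the inherit should succeed.
alter table ct drop constraint pt_c_check;
alter table ct add constraint pt_c_check check (c < '10.1'::double precision);
alter table ct inherit pt;
```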
A larger issue is that I have a nasty feeling that this isn't the
only place where ruleutils.c output might be read in a way that's
functionally equivalent to the original, but not the exact same
parsetree.
regards, tom lane
On Mon, 23 Mar 2015, Tom Lane wrote:
> Jeff Davis <pgsql@j-davis.com> writes:
>> create table pt(a int, b int, c float8 check (c < '10.1'));
>> create table ct(a int, b int, c float8);
>> select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
>> pg_get_constraintdef
>> --------------------------------------
>> CHECK ((c < 10.1::double precision))
>> (1 row)
>> -- copy constraint to "ct" using syntax given above
>> alter table ct add constraint pt_c_check CHECK ((c < 10.1::double precision));
>> select pg_get_constraintdef(oid) from pg_constraint where conname='pt_c_check';
>> pg_get_constraintdef
>> ----------------------------------------
>> CHECK ((c < 10.1::double precision))
>> CHECK ((c < (10.1)::double precision))
>> (2 rows)
> A larger issue is that I have a nasty feeling that this isn't the
> only place where ruleutils.c output might be read in a way that's
> functionally equivalent to the original, but not the exact same
> parsetree.
Jeff pointed this out to me and as we discussed it, it seems like a good
time to add a regression test to make us aware of the current parse +
deparse behavior, and call attention to any changes in that behavior
later.
Attached is a Perl script that generates many combinations of CHECK
constraints like Jeff's, but with various types, and the result of running
it in the regression test suite.
Would something like this be welcome in the regression test suite?
Jon
--
Jon Jensen
End Point Corporation
https://www.endpoint.com/
[forgot to copy this to pgsql-bugs]
On Mon, Mar 23, 2015 at 7:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I don't see any simple way around that except to dump using the syntax
> '10.1'::double precision
There is a similar problem related to NUMERIC:
'1'::numeric is dumped as
1::numeric
which introduces a cast as well. There's also a problem with negative
constants, because it introduces parentheses instead of single quotes:
'-1'::numeric
is dumped as
(-1)::numeric
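As a concrete sketch of the numeric round trip (hypothetical and untested; the table name "nt" is made up):

```sql
-- Hypothetical repro sketch (untested): the deparsed (-1)::numeric
-- re-parses as unary minus applied to 1::numeric -- a different parse
-- tree than the original '-1'::numeric Const, though the value is equal.
create table nt(n numeric check (n <> '-1'::numeric));
select pg_get_constraintdef(oid)
  from pg_constraint where conname = 'nt_n_check';
```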
> which is ugly as sin, but perhaps we have no choice. A longer-term
> solution might be to get the parser to interpret
> 10.1::double precision
> as a float8 literal to start with, but that seems like it could have
> unexpected side-effects? Not sure.
> OTOH, you could argue that existing dump files already have the
> unquoted-literal syntax so it behooves us to try to get the parser
> to read them as they were meant.
Hmm. I'm not sure how you intend to parse that, but it seems like it
would be hard to differentiate between:
10.1::double precision
and
10.1::text
in the parser, so I think one side effect would be that the latter
expression would be a text literal with no cast, which is a little
strange (though not necessarily bad). I guess you could hard-code it
to recognize double precision (and other float types and aliases?).
This bug is pretty old and nobody has complained about it before.
Let's just figure out a good unambiguous representation of float and
numeric literals, and then backport it. Anyone who cares enough to fix
this issue can upgrade to the latest point release on the old version,
and then dump/reload.
For numeric, I think appending ".0" (if it's an integral value) is the
easiest, prettiest, and least-surprising. For floats, we can either
use the single-quotes and type annotation, or we can come up with
something new like appending an "f" to the value.
A larger issue is that I have a nasty feeling that this isn't the
only place where ruleutils.c output might be read in a way that's
functionally equivalent to the original, but not the exact same
parsetree.
I'm concerned about other cases as well, but fixing this seems like a
step in the right direction.
Regards,
Jeff Davis
Jeff Davis <pgsql@j-davis.com> writes:
> On Mon, Mar 23, 2015 at 7:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I don't see any simple way around that except to dump using the syntax
>> '10.1'::double precision
> There is a similar problem related to NUMERIC:
> '1'::numeric is dumped as
> 1::numeric
> which introduces a cast as well. There's also a problem with negative
> constants, because it introduces parentheses instead of single quotes:
> '-1'::numeric
> is dumped as
> (-1)::numeric
Yeah. In general, this code was trying to produce something nice-looking
and semantically equivalent, but not necessarily something that would
re-parse as the exact same Const node.
> This bug is pretty old and nobody has complained about it before.
> Let's just figure out a good unambiguous representation of float and
> numeric literals, and then backport it. Anyone who cares enough to fix
> this issue can upgrade to the latest point release on the old version,
> and then dump/reload.
> For numeric, I think appending ".0" (if it's an integral value) is the
> easiest, prettiest, and least-surprising.
... and wrong, because that would affect the dscale. I don't think we
want this code editorializing on the printed format in any case.
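A quick illustration of the dscale point:

```sql
-- numeric carries a display scale, so appending ".0" changes the printed
-- form even though the values compare equal:
select 1::numeric, 1.0::numeric;  -- prints 1 and 1.0
select 1::numeric = 1.0::numeric; -- true
```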
> For floats, we can either
> use the single-quotes and type annotation, or we can come up with
> something new like appending an "f" to the value.
I cannot see back-porting anything as invasive as changing the lexer.
Basically, I think we have to change ruleutils so that it quotes anything
that wouldn't be seen as a simple integer or numeric constant by the
lexer+grammar.
The attached patch does this; the regression test changes illustrate
what's going to happen to the output if we do this.
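Based on that rule, the expected shape of the new output (assumed from the patch description, not verified against the patch itself):

```sql
-- Assumed behavior sketch: constants that the lexer would read back as the
-- same value stay bare; everything else gets the quoted-literal form.
--   10.1::numeric             -- unchanged: lexes as a numeric constant
--   '10.1'::double precision  -- quoted: bare 10.1 would lex as numeric
--                             -- and add a separate cast step
--   '-1'::numeric             -- quoted: (-1) would re-parse as unary
--                             -- minus applied to 1::numeric
```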
Looking at the changes, I'm not 100% convinced we want to back-patch.
As you say, nobody's complained of this problem before, and I'm worried
that people will see the output changes as a bigger deal than the issue
we're trying to fix.
Thoughts?
regards, tom lane
(PS: I've not checked contrib or pl tests, so this patch may be incomplete
as far as expected-output changes go.)
Attachment: fix-ruleutils-const-dumping.patch (text/x-diff)
Jon Jensen <jon@endpoint.com> writes:
> On Mon, 23 Mar 2015, Tom Lane wrote:
>> A larger issue is that I have a nasty feeling that this isn't the
>> only place where ruleutils.c output might be read in a way that's
>> functionally equivalent to the original, but not the exact same
>> parsetree.
> Jeff pointed this out to me and as we discussed it, it seems like a good
> time to add a regression test to make us aware of the current parse +
> deparse behavior, and call attention to any changes in that behavior
> later.
> Attached is a Perl script that generates many combinations of CHECK
> constraints like Jeff's, but with various types, and the result of running
> it in the regression test suite.
> Would something like this be welcome in the regression test suite?
I can't really see adding something like this to the regression tests;
the value per cycle expended, over the long term, just isn't there IMO.
We could possibly use this approach as a one-shot test for vetting a
proposed patch ... but as you've got it set up, it seems like it requires
manual inspection of each output to see if it's OK or not, which isn't
all that helpful. I wonder whether there isn't a more direct way of
testing whether each output re-parses to the same node tree.
regards, tom lane
On Sat, 28 Mar 2015, Tom Lane wrote:
>> Attached is a Perl script that generates many combinations of CHECK
>> constraints like Jeff's, but with various types, and the result of
>> running it in the regression test suite.
>> Would something like this be welcome in the regression test suite?
> I can't really see adding something like this to the regression tests;
> the value per cycle expended, over the long term, just isn't there IMO.
> We could possibly use this approach as a one-shot test for vetting a
> proposed patch ... but as you've got it set up, it seems like it
> requires manual inspection of each output to see if it's OK or not,
> which isn't all that helpful.
Well, as part of the standard regression test suite it's comparing stored
expected output with newly-generated output, like all the other tests. I
must be misunderstanding what you mean because nothing manual about that,
is there?
I wonder whether there isn't a more direct way of testing whether each
output re-parses to the same node tree.
That would be nice: perhaps a function to parse into a node tree. But I'm
not aware of anything like that existing and it seems it would require
creating a node tree datatype to put the parse result into, to then feed
to a new deparse function variant.
Jon
Jon Jensen <jon@endpoint.com> writes:
> On Sat, 28 Mar 2015, Tom Lane wrote:
>> We could possibly use this approach as a one-shot test for vetting a
>> proposed patch ... but as you've got it set up, it seems like it
>> requires manual inspection of each output to see if it's OK or not,
>> which isn't all that helpful.
> Well, as part of the standard regression test suite it's comparing stored
> expected output with newly-generated output, like all the other tests. I
> must be misunderstanding what you mean because nothing manual about that,
> is there?
My point is that the expected output has to be manually checked initially
to see if it's right or not; and since it often doesn't look exactly like
the original SQL, that checking is painful and not foolproof.
It's interesting to consider something like the COPY_PARSE_PLAN_TREES
#define, which inserts code into the backend to see whether copyfuncs.c
and equalfuncs.c are sane or not. Running the regression tests with
that turned on provides pretty thorough coverage of those modules.
regards, tom lane
On Sat, 2015-03-28 at 15:08 -0400, Tom Lane wrote:
> Basically, I think we have to change ruleutils so that it quotes anything
> that wouldn't be seen as a simple integer or numeric constant by the
> lexer+grammar.
> The attached patch does this; the regression test changes illustrate
> what's going to happen to the output if we do this.
This fixes my problem, thank you.
There are two switch statements in that function, and they have
overlapping purposes. Can't we just always set needlabel in the first
switch statement, and remove the second switch statement?
When I refactored it to do that (attached), I noticed that if showtype
== -1, it returns before appending the collation. I don't see any reason
those should be related, so I removed the early return (I don't think
there's an actual bug there, though).
There are more diffs in the contrib tests, also included in my version.
> Looking at the changes, I'm not 100% convinced we want to back-patch.
> As you say, nobody's complained of this problem before, and I'm worried
> that people will see the output changes as a bigger deal than the issue
> we're trying to fix.
Fine with me.
Regards,
Jeff Davis
Attachment: fix-ruleutils-jeff.patch (text/x-patch)
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Jon Jensen <jon@endpoint.com> writes:
>> Well, as part of the standard regression test suite it's comparing stored
>> expected output with newly-generated output, like all the other tests. I
>> must be misunderstanding what you mean because nothing manual about that,
>> is there?
> My point is that the expected output has to be manually checked initially
> to see if it's right or not; and since it often doesn't look exactly like
> the original SQL, that checking is painful and not foolproof.
> It's interesting to consider something like the COPY_PARSE_PLAN_TREES
> #define, which inserts code into the backend to see whether copyfuncs.c
> and equalfuncs.c are sane or not. Running the regression tests with
> that turned on provides pretty thorough coverage of those modules.
Do we have any buildfarm members which are running with that? If not,
would anyone object to doing so?
Thanks,
Stephen
Stephen Frost <sfrost@snowman.net> writes:
> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> It's interesting to consider something like the COPY_PARSE_PLAN_TREES
>> #define, which inserts code into the backend to see whether copyfuncs.c
>> and equalfuncs.c are sane or not. Running the regression tests with
>> that turned on provides pretty thorough coverage of those modules.
> Do we have any buildfarm members which are running with that? If not,
> would anyone object to doing so?
Yeah, it would be a fine idea to have a critter or two using that.
regards, tom lane
--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
On Sun, 29 Mar 2015, Tom Lane wrote:
> Jon Jensen <jon@endpoint.com> writes:
>> On Sat, 28 Mar 2015, Tom Lane wrote:
>>> We could possibly use this approach as a one-shot test for vetting a
>>> proposed patch ... but as you've got it set up, it seems like it
>>> requires manual inspection of each output to see if it's OK or not,
>>> which isn't all that helpful.
>> Well, as part of the standard regression test suite it's comparing stored
>> expected output with newly-generated output, like all the other tests. I
>> must be misunderstanding what you mean because nothing manual about that,
>> is there?
> My point is that the expected output has to be manually checked initially
> to see if it's right or not; and since it often doesn't look exactly like
> the original SQL, that checking is painful and not foolproof.
Oh, I see what you mean. I'm going from the assumption that a new
regression test is useful even if it starts life documenting only the
current behavior, without any idea of whether that was or is correct by
some outside standard.
I think we should know if the parse/deparse round-trip changes behavior
due to some change, regardless of whether it's better or worse. The
regression test should call our attention to it and then we can spend the
human cycles to investigate whether it's better now, or worse. In any
case, the difference could cause trouble for people who depended on the
old behavior, even if the new is ostensibly better.
> It's interesting to consider something like the COPY_PARSE_PLAN_TREES
> #define, which inserts code into the backend to see whether copyfuncs.c
> and equalfuncs.c are sane or not. Running the regression tests with
> that turned on provides pretty thorough coverage of those modules.
Maybe that's the way to go, then, if it avoids the necessity of creating
all those tables with CHECK constraints, since we don't have a
user-accessible parse function.
Jon
--
Jon Jensen
End Point Corporation
https://www.endpoint.com/
Jeff Davis <pgsql@j-davis.com> writes:
> On Sat, 2015-03-28 at 15:08 -0400, Tom Lane wrote:
>> Basically, I think we have to change ruleutils so that it quotes anything
>> that wouldn't be seen as a simple integer or numeric constant by the
>> lexer+grammar.
>> The attached patch does this; the regression test changes illustrate
>> what's going to happen to the output if we do this.
> This fixes my problem, thank you.
OK. Given the lack of other suggestions, let's press forward with
fixing it this way.
> There are two switch statements in that function, and they have
> overlapping purposes. Can't we just always set needlabel in the first
> switch statement, and remove the second switch statement?
Meh. The two switches are considering different things and don't have
exactly the same set of switch labels, so I don't really see that as
an improvement ...
> When I refactored it to do that (attached), I noticed that if showtype
> == -1, it returns before appending the collation. I don't see any reason
> those should be related, so I removed the early return (I don't think
> there's an actual bug there, though).
... especially since this change is outright wrong. showtype == -1
implies that the caller is going to print a cast; we cannot insert a
COLLATE clause in the middle of that without breaking things. (It may
be that this case is unreachable because a vanilla Const would never
have nondefault collation, but that doesn't make it a good change.)
>> Looking at the changes, I'm not 100% convinced we want to back-patch.
>> As you say, nobody's complained of this problem before, and I'm worried
>> that people will see the output changes as a bigger deal than the issue
>> we're trying to fix.
> Fine with me.
Agreed, we'll fix HEAD only.
regards, tom lane
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>>> It's interesting to consider something like the COPY_PARSE_PLAN_TREES
>>> #define, which inserts code into the backend to see whether copyfuncs.c
>>> and equalfuncs.c are sane or not. Running the regression tests with
>>> that turned on provides pretty thorough coverage of those modules.
>> Do we have any buildfarm members which are running with that? If not,
>> would anyone object to doing so?
> Yeah, it would be a fine idea to have a critter or two using that.
Apologies for not having gotten to this, and thanks for setting one up.
Stephen