The serial pseudotypes
Is there a reason why the serial pseudotypes still behave as they did
pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
I'm hoping it's just an oversight and I can help "fix" it, but maybe
there is an actual reason for it to be this way?
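For reference, here's roughly what the two spellings come down to today
(table and column names are just illustrative):

    CREATE TABLE t_serial (id serial);
    -- is historically shorthand for approximately:
    --   CREATE SEQUENCE t_serial_id_seq;
    --   CREATE TABLE t_serial (
    --       id integer NOT NULL DEFAULT nextval('t_serial_id_seq'));
    --   ALTER SEQUENCE t_serial_id_seq OWNED BY t_serial.id;

    -- versus the standard spelling I'd like it to map to:
    CREATE TABLE t_identity (id integer GENERATED BY DEFAULT AS IDENTITY);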
--
Vik Fearing
Vik Fearing <vik.fearing@2ndquadrant.com> writes:
> Is there a reason why the serial pseudotypes still behave as they did
> pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
Backwards compatibility?
regards, tom lane
On 25/08/2019 18:59, Tom Lane wrote:
> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>> Is there a reason why the serial pseudotypes still behave as they did
>> pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
> Backwards compatibility?
With what? We don't support downgrading and I wouldn't expect this
change to be backported.
--
Vik Fearing
Vik Fearing <vik.fearing@2ndquadrant.com> writes:
> On 25/08/2019 18:59, Tom Lane wrote:
>> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>>> Is there a reason why the serial pseudotypes still behave as they did
>>> pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
>> Backwards compatibility?
> With what?
Applications that expect declaring a serial column to result in the same
catalog side-effects as before. The default expressions look different,
and the dependencies look different. For instance, an app that expected
atthasdef to tell it something about what happens when a column's value
is omitted would be surprised. An app that thought it could alter the
default expression for a column originally declared serial would be even
more surprised.
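Concretely, for a serial column vs. an identity column, the relevant bits
of pg_attribute look like this (a sketch; table names illustrative):

    -- serial: an ordinary default expression, recorded in pg_attrdef
    SELECT attname, atthasdef, attidentity
      FROM pg_attribute
     WHERE attrelid = 't_serial'::regclass AND attnum > 0;
    --  attname | atthasdef | attidentity
    --  id      | t         |

    -- identity: no default expression at all; attidentity = 'd' signals
    -- the behavior, and the backing sequence hangs off an internal
    -- pg_depend entry rather than an OWNED BY (auto) one
    SELECT attname, atthasdef, attidentity
      FROM pg_attribute
     WHERE attrelid = 't_identity'::regclass AND attnum > 0;
    --  attname | atthasdef | attidentity
    --  id      | f         | d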
Admittedly, many of these things look a lot like the sort of system
catalog changes we make routinely and expect applications to cope.
But I don't think this would be a cost-free change. Serials have acted
the way they do for a pretty long time.
regards, tom lane
On 25/08/2019 19:42, Tom Lane wrote:
> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>> On 25/08/2019 18:59, Tom Lane wrote:
>>> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>>>> Is there a reason why the serial pseudotypes still behave as they did
>>>> pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
>>> Backwards compatibility?
>> With what?
> Applications that expect declaring a serial column to result in the same
> catalog side-effects as before. The default expressions look different,
> and the dependencies look different. For instance, an app that expected
> atthasdef to tell it something about what happens when a column's value
> is omitted would be surprised. An app that thought it could alter the
> default expression for a column originally declared serial would be even
> more surprised.
> Admittedly, many of these things look a lot like the sort of system
> catalog changes we make routinely and expect applications to cope.
Indeed.
> But I don't think this would be a cost-free change. Serials have acted
> the way they do for a pretty long time.
I guess I'll keep telling people serials are obsolete then.
--
Vik Fearing
On Mon, 26 Aug 2019 at 01:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>> On 25/08/2019 18:59, Tom Lane wrote:
>>> Vik Fearing <vik.fearing@2ndquadrant.com> writes:
>>>> Is there a reason why the serial pseudotypes still behave as they did
>>>> pre-v10 and don't map to GENERATED BY DEFAULT AS IDENTITY these days?
>>> Backwards compatibility?
>> With what?
> Applications that expect declaring a serial column to result in the same
> catalog side-effects as before. The default expressions look different,
> and the dependencies look different. For instance, an app that expected
> atthasdef to tell it something about what happens when a column's value
> is omitted would be surprised. An app that thought it could alter the
> default expression for a column originally declared serial would be even
> more surprised.
Right. I'd be very leery of changing this without a lot of checking and/or
some BC measures to help apps that expect traditional SERIALs to work.
Hibernate is a rather widely used example. Not a good example, mind you,
but a very widely used one.
Its PostgreSQL dialect calls nextval(...) directly and expects the returned
value to be the next value off the sequence, per the sequence's configured
increment. This is true of many other ORMs etc. too. To reduce
round-trips and to give them control of when they flush dirty objects from
their internal writeback cache to the DB, they tend to preallocate object
IDs from an internal pool they populate periodically from the database
sequence generator. This design isn't even totally unreasonable given the
access patterns of these tools.
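The pattern looks something like this (sequence name illustrative):

    CREATE SEQUENCE hibernate_seq INCREMENT BY 50 START WITH 50;
    SELECT nextval('hibernate_seq');  -- returns 50; app assigns ids 1..50 itself
    SELECT nextval('hibernate_seq');  -- returns 100; app assigns ids 51..100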
Hibernate also has an existing (IMO awful) defect of defaulting to a
sequence increment value of 50 in its schema configuration/generation
tools. It fails to check the increment on pre-existing schemas and blindly
assumes 50, so if nextval returns 500 it will merrily hand out values
(450..500] from its internal sequence value assignment pool. Much mess
ensues. So let's not pretend it is good; it's kind of horrible, but like
ActiveRecord and other monstrosities it's also a very widely used product
we need to care about. Unfortunately ;)
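Concretely, the failure mode is (names illustrative):

    CREATE SEQUENCE orders_id_seq;    -- actually increments by 1, not 50
    SELECT nextval('orders_id_seq');  -- returns, say, 500
    -- Hibernate assumes a block of 50 and hands out (450..500] itself,
    -- colliding with ids 451..499 the sequence already gave to others.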
This reminds me though, I want to revisit my nextval(regclass, n_ids) patch
I have lying around somewhere. Apps should be able to request non-default
chunk allocations from nextval so we can avoid dealing with these
unpleasant assumptions about increment size...
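Something like this, to be clear (a hypothetical interface; no such
two-argument nextval exists today, this is just the shape I have in mind):

    -- reserve 50 consecutive values in one call, independent of the
    -- sequence's configured increment, returning the last one reserved
    SELECT nextval('orders_id_seq'::regclass, 50);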
... or worse, the apps that try to "fix" this by calling nextval then
setval to jump the sequence to the value they think it should have next.
And yes, I've seen this. In production code.
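For the record, that anti-pattern looks like this, and it's racy:

    SELECT nextval('some_seq');      -- say it returns 500
    SELECT setval('some_seq', 550);  -- app "reserves" 501..550 for itself
    -- Another session's nextval() can land between the two calls, and
    -- its allocation gets silently clobbered by the setval().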
--
Craig Ringer http://www.2ndQuadrant.com/
2ndQuadrant - PostgreSQL Solutions for the Enterprise