pg_sequence catalog

Started by Peter Eisentraut · over 9 years ago · 43 messages · pgsql-hackers
#1 Peter Eisentraut
peter_e@gmx.net

While I was hacking around sequence stuff, I felt the urge to look into
an old peeve: That sequence metadata is not stored in a proper catalog.
Right now in order to find out some metadata about sequences (min, max,
increment, etc.), you need to look into the sequence. That is like
having to query a table in order to find out about its schema.
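
Concretely, the difference looks roughly like this (the pg_sequence column names here are illustrative of the patch and could change before commit):

```sql
-- Today: the metadata lives inside the sequence relation itself
SELECT min_value, max_value, increment_by FROM myseq;

-- With the proposed catalog: query pg_sequence like any other catalog
SELECT seqmin, seqmax, seqincrement
FROM pg_sequence
WHERE seqrelid = 'myseq'::regclass;
```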

There are also known issues with the current storage such as that we
can't safely update the sequence name stored in the sequence when we
rename, so we just don't.

This patch introduces a new catalog pg_sequence that stores the sequence
metadata. The sequences themselves now only store the counter and
supporting information.

I don't know if this is a net improvement. Maybe this introduces as
many new issues as it removes. But I figured I'll post this, so that at
least we can discuss it.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

pg_sequence.patch (text/x-patch, +358 −244)
#2 Craig Ringer
craig@2ndquadrant.com
In reply to: Peter Eisentraut (#1)
Re: pg_sequence catalog

On 31 August 2016 at 21:17, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

While I was hacking around sequence stuff, I felt the urge to look into
an old peeve: That sequence metadata is not stored in a proper catalog.
Right now in order to find out some metadata about sequences (min, max,
increment, etc.), you need to look into the sequence. That is like
having to query a table in order to find out about its schema.

There are also known issues with the current storage such as that we
can't safely update the sequence name stored in the sequence when we
rename, so we just don't.

... and don't have a comment warning the poor confused reader about
that, either. As I discovered when doing my first pass at sequence
decoding.

I don't know if this is a net improvement. Maybe this introduces as
many new issues as it removes. But I figured I'll post this, so that at
least we can discuss it.

This will change behaviour subtly. Probably not in ways we care much
about, but I'd rather that be an informed decision than an implicit
"oops we didn't think about that" one.

Things stored in the Form_pg_sequence are affected immediately,
non-transactionally, by ALTER SEQUENCE. If you:

BEGIN;
ALTER SEQUENCE myseq INCREMENT BY 10;
ROLLBACK;

then the *next call to nextval* after the ALTER will step by 10. Well,
roughly, there's some slush there due to caching etc. Rolling back has
no effect. Yet other effects of ALTER SEQUENCE, notably RENAME, are
transactional and are subject to normal locking, visibility and
rollback rules.

Even more fun, ALTER SEQUENCE ... RESTART is immediate and
non-transactional .... but TRUNCATE ... RESTART IDENTITY *is*
transactional and takes effect only at commit. ALTER SEQUENCE writes a
new Form_pg_sequence with the new value to the existing relfilenode.
TRUNCATE instead updates pg_class for the sequence with a new
relfilenode and writes its changes to the new relfilenode. So even two
operations that seem the same are very different.
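
To make the contrast concrete (myseq and mytab are illustrative names; mytab has a serial column backed by a sequence):

```sql
-- Non-transactional: the new increment survives the rollback
BEGIN;
ALTER SEQUENCE myseq INCREMENT BY 10;
ROLLBACK;
SELECT nextval('myseq');         -- still steps by 10

-- Transactional: the restart is undone by the rollback
BEGIN;
TRUNCATE mytab RESTART IDENTITY; -- new relfilenode for the sequence
ROLLBACK;
-- the sequence behind mytab's serial column is back where it was
```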

If I understand the proposed change correctly, it will make
previously non-transactional ALTER SEQUENCE operations transactional
and subject to normal rules, since the relevant information is now in
a proper catalog.

Personally I think that's a big improvement. The current situation is
warts upon warts.

Prior proposals to move sequences away from a
one-relation-and-one-filenode-per-sequence model have fallen down in
early planning stages. This seems like a simpler, more sensible step
to take, and it won't block later cleanups of how we store sequences
if we decide to go there.

It'll also make it slightly easier to handle logical decoding of
sequence advances, since it'll be possible to send *just* the new
sequence value. Right now we can't tell if interval, etc, also got
changed, and have to send most of the Form_pg_sequence on the wire
for every sequence advance, which is a little sucky. This change won't
solve the problem I outlined in the other thread though, that
sequences are transactional sometimes and not other times.

So +1 for the idea from me. It'll just need relnotes warning of the
subtle behaviour change.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Craig Ringer (#2)
Re: pg_sequence catalog

Craig Ringer <craig@2ndquadrant.com> writes:

On 31 August 2016 at 21:17, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

I don't know if this is a net improvement. Maybe this introduces as
many new issues as it removes. But I figured I'll post this, so that at
least we can discuss it.

This will change behaviour subtly.

Uh, not as subtly as all that, because "select * from sequence" will
now return a different set of columns, which will flat out break a
lot of clients that use that method to get sequence properties.

In previous discussions of this idea, we had speculated about turning
sequences into, effectively, views, so that we could still return all
the same columns --- only now some of them would be coming from a
catalog rather than the non-transactional per-sequence storage.
I'm not sure how feasible that is, but it's worth thinking about.

Personally, my big beef with the current approach to sequences is that
we eat a whole relation (including a whole relfilenode) per sequence.
I wish that we could reduce a sequence to just a single row in a
catalog, including the nontransactional state. Not sure how feasible
that is either, but accomplishing it would move the benefits of making
a change out of the "debatable whether it's worth it" category, IMO.

regards, tom lane

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#3)
Re: pg_sequence catalog

I wrote:

Personally, my big beef with the current approach to sequences is that
we eat a whole relation (including a whole relfilenode) per sequence.
I wish that we could reduce a sequence to just a single row in a
catalog, including the nontransactional state. Not sure how feasible
that is either, but accomplishing it would move the benefits of making
a change out of the "debatable whether it's worth it" category, IMO.

BTW, another thing to keep in mind here is the ideas that have been
kicked around in the past about alternative sequence implementations
managed through a "sequence AM API". I dunno whether now is the time
to start creating that API abstraction, but let's at least consider
it if we're whacking the catalog representation around.

regards, tom lane

#5 Craig Ringer
craig@2ndquadrant.com
In reply to: Tom Lane (#3)
Re: pg_sequence catalog

On 31 August 2016 at 22:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Personally, my big beef with the current approach to sequences is that
we eat a whole relation (including a whole relfilenode) per sequence.
I wish that we could reduce a sequence to just a single row in a
catalog, including the nontransactional state.

I'd be happy to see incremental improvement in this space as Peter has
suggested, though I certainly see the value of something like seqam
too.

It sounds like you're thinking of something like a normal(ish) heap
tuple where we just overwrite some fields in-place without fiddling
xmin/xmax and making a new row version. Right? Like we currently
overwrite the lone Form_pg_sequence on the 1-page sequence relations.

I initially thought that TRUNCATE ... RESTART IDENTITY would be
somewhat of a problem with this. We effectively have a temporary
"timeline" fork in the sequence value where it's provisionally
restarted and we start using values from the restarted sequence within
the xact that restarted it. But actually, it'd fit pretty well.
TRUNCATE ... RESTART IDENTITY would write a new row version with a new
xmin, and set xmax on the old sequence row. nextval(...) within the
truncating xact would update the new row's non-transactional fields
when it allocated new sequence chunks. On commit, everyone starts
using the new row due to normal transactional visibility rules. On
rollback everyone ignores it like they would any other dead tuple from
an aborted xact and uses the old tuple's nontransactional fields. It
Just Works(TM).

nextval(...) takes AccessShareLock on a sequence relation. TRUNCATE
... RESTART IDENTITY takes AccessExclusiveLock. So we can never have
nextval(...) advancing the "old" timeline in other xacts at the same
time as we consume values on the restarted sequence inside the xact
that did the restarting. We still need the new "timeline" though,
because we have to retain the old value for rollback.
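
In terms of the locking described above, two sessions would interact roughly like this (t and t_id_seq are illustrative names):

```sql
-- session 1
BEGIN;
TRUNCATE t RESTART IDENTITY;  -- AccessExclusiveLock on t_id_seq
SELECT nextval('t_id_seq');   -- draws from the provisional, restarted row

-- session 2, concurrently
SELECT nextval('t_id_seq');   -- AccessShareLock request blocks until
                              -- session 1 commits or rolls back
```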

It feels intuitively pretty gross to effectively dirty-read and write
a few fields of a tuple. But that's what we do all the time with
xmin/xmax etc, it's not really that different.

It'd certainly make sequence decoding easier too. A LOT easier.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#6 Petr Jelinek
petr@2ndquadrant.com
In reply to: Tom Lane (#4)
Re: pg_sequence catalog

On 31/08/16 16:10, Tom Lane wrote:

I wrote:

Personally, my big beef with the current approach to sequences is that
we eat a whole relation (including a whole relfilenode) per sequence.
I wish that we could reduce a sequence to just a single row in a
catalog, including the nontransactional state. Not sure how feasible
that is either, but accomplishing it would move the benefits of making
a change out of the "debatable whether it's worth it" category, IMO.

BTW, another thing to keep in mind here is the ideas that have been
kicked around in the past about alternative sequence implementations
managed through a "sequence AM API". I dunno whether now is the time
to start creating that API abstraction, but let's at least consider
it if we're whacking the catalog representation around.

FWIW if I was going to continue with the sequence AM API, the next patch
would have included split of sequence metadata and sequence state into
separate catalogs, so from that point this actually seems like an
improvement (I didn't look at the code though).

As a side note, I don't plan to resurrect the seqam patch at least until
we have reasonable built-in logical replication functionality.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Craig Ringer (#5)
Re: pg_sequence catalog

Craig Ringer <craig@2ndquadrant.com> writes:

On 31 August 2016 at 22:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Personally, my big beef with the current approach to sequences is that
we eat a whole relation (including a whole relfilenode) per sequence.
I wish that we could reduce a sequence to just a single row in a
catalog, including the nontransactional state.

It sounds like you're thinking of something like a normal(ish) heap
tuple where we just overwrite some fields in-place without fiddling
xmin/xmax and making a new row version. Right? Like we currently
overwrite the lone Form_pg_sequence on the 1-page sequence relations.

That would be what to do with the nontransactional state. If I recall
previous discussions correctly, there's a stumbling block if you want
to treat ALTER SEQUENCE changes as transactional --- but maybe that
doesn't make sense anyway. If we did want to try that, maybe we need
two auxiliary catalogs, one for the transactionally-updatable sequence
fields and one for the nontransactional fields.

It feels intuitively pretty gross to effectively dirty-read and write
a few fields of a tuple. But that's what we do all the time with
xmin/xmax etc, it's not really that different.

True. I think two rows would work around that, but maybe we don't
have to.

Another issue is what is the low-level interlock between nextvals
in different processes. Right now it's the buffer lock on the
sequence's page. With a scheme like this, if we just kept doing
that, we'd have a single lock covering probably O(100) different
sequences which might lead to contention problems. We could probably
improve on that with some thought.

regards, tom lane

#8 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#7)
Re: pg_sequence catalog

On 2016-08-31 11:23:27 -0400, Tom Lane wrote:

Another issue is what is the low-level interlock between nextvals
in different processes. Right now it's the buffer lock on the
sequence's page. With a scheme like this, if we just kept doing
that, we'd have a single lock covering probably O(100) different
sequences which might lead to contention problems. We could probably
improve on that with some thought.

I was thinking of forcing the rows to be spread to exactly one page per
sequence...

Andres

#9 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#8)
Re: pg_sequence catalog

Andres Freund wrote:

On 2016-08-31 11:23:27 -0400, Tom Lane wrote:

Another issue is what is the low-level interlock between nextvals
in different processes. Right now it's the buffer lock on the
sequence's page. With a scheme like this, if we just kept doing
that, we'd have a single lock covering probably O(100) different
sequences which might lead to contention problems. We could probably
improve on that with some thought.

I was thinking of forcing the rows to be spread to exactly one page per
sequence...

I was thinking that nextval could grab a shared buffer lock and release
immediately, just to ensure no one holds exclusive buffer lock
concurrently (which would be used for things like dropping one seq tuple
from the page, when a sequence is dropped); then control access to each
sequence tuple using LockDatabaseObject. This is a HW lock, heavier
than a buffer's LWLock, but it seems better than wasting a full 8kb for
each sequence.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#10 Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#9)
Re: pg_sequence catalog

On 2016-08-31 12:56:45 -0300, Alvaro Herrera wrote:

I was thinking that nextval could grab a shared buffer lock and release
immediately, just to ensure no one holds exclusive buffer lock
concurrently (which would be used for things like dropping one seq tuple
from the page, when a sequence is dropped); then control access to each
sequence tuple using LockDatabaseObject. This is a HW lock, heavier
than a buffer's LWLock, but it seems better than wasting a full 8kb for
each sequence.

That's going to be a *lot* slower, I don't think that's ok. I've a
hard time worrying about the space waste here; especially considering
where we're coming from.

Andres

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#10)
Re: pg_sequence catalog

Andres Freund <andres@anarazel.de> writes:

On 2016-08-31 12:56:45 -0300, Alvaro Herrera wrote:

I was thinking that nextval could grab a shared buffer lock and release
immediately, just to ensure no one holds exclusive buffer lock
concurrently (which would be used for things like dropping one seq tuple
from the page, when a sequence is dropped); then control access to each
sequence tuple using LockDatabaseObject. This is a HW lock, heavier
than a buffer's LWLock, but it seems better than wasting a full 8kb for
each sequence.

That's going to be a *lot* slower, I don't think that's ok. I've a
hard time worrying about the space waste here; especially considering
where we're coming from.

Improving on the space wastage is exactly the point IMO. If it's still
going to be 8k per sequence on disk (*and* in shared buffers, remember),
I'm not sure it's worth all the work to change things at all.

We're already dealing with taking a heavyweight lock for each sequence
(the relation AccessShareLock). I wonder whether it'd be possible to
repurpose that effort somehow.

Another idea would be to have nominally per-sequence LWLocks (or
spinlocks?) controlling nextval's nontransactional accesses to the catalog
rows, but to map those down to some fixed number of locks in a way similar
to the current fallback implementation for spinlocks, which maps them onto
a fixed number of semaphores. You'd trade off shared memory against
contention while choosing the underlying number of locks.

regards, tom lane

#12 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#11)
Re: pg_sequence catalog

Hi,

On 2016-08-31 12:53:30 -0400, Tom Lane wrote:

Improving on the space wastage is exactly the point IMO. If it's still
going to be 8k per sequence on disk (*and* in shared buffers, remember),
I'm not sure it's worth all the work to change things at all.

A separate file is a heck lot more heavyweight than another 8 kb in an
existing file.

Another idea would be to have nominally per-sequence LWLocks (or
spinlocks?) controlling nextval's nontransactional accesses to the catalog
rows, but to map those down to some fixed number of locks in a way similar
to the current fallback implementation for spinlocks, which maps them onto
a fixed number of semaphores. You'd trade off shared memory against
contention while choosing the underlying number of locks.

If we could rely on spinlocks to actually be spinlocks, we could just
put the spinlock besides the actual state data... Not entirely pretty,
but probably pretty simple.

- Andres

#13 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#12)
Re: pg_sequence catalog

Andres Freund wrote:

Hi,

On 2016-08-31 12:53:30 -0400, Tom Lane wrote:

Improving on the space wastage is exactly the point IMO. If it's still
going to be 8k per sequence on disk (*and* in shared buffers, remember),
I'm not sure it's worth all the work to change things at all.

A separate file is a heck lot more heavyweight than another 8 kb in an
existing file.

Yes, sure, we're still improving even if we stick to one-seq-per-bufpage,
but while we're at it, we might as well make it as good as we can. And
allowing multiple seqs per page seems a much better situation, so let's
try to get there.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#14 Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#13)
Re: pg_sequence catalog

On 2016-08-31 14:25:43 -0300, Alvaro Herrera wrote:

Andres Freund wrote:

Hi,

On 2016-08-31 12:53:30 -0400, Tom Lane wrote:

Improving on the space wastage is exactly the point IMO. If it's still
going to be 8k per sequence on disk (*and* in shared buffers, remember),
I'm not sure it's worth all the work to change things at all.

A separate file is a heck lot more heavyweight than another 8 kb in an
existing file.

Yes, sure, we're still improving even if we stick to one-seq-per-bufpage,
but while we're at it, we might as well make it as good as we can. And
allowing multiple seqs per page seems a much better situation, so let's
try to get there.

It's not really that simple. Having independent sequence rows closer
together will have its own performance cost. Suddenly independent
sequences will compete for the same page level lock, NUMA systems will
have to transfer the page/cacheline around even if it's independent
sequences being accessed in different backends, we'll have to take care
about cacheline padding...

#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#14)
Re: pg_sequence catalog

andres@anarazel.de (Andres Freund) writes:

On 2016-08-31 14:25:43 -0300, Alvaro Herrera wrote:

Yes, sure, we're still improving even if we stick to one-seq-per-bufpage,
but while we're at it, we might as well make it as good as we can. And
allowing multiple seqs per page seems a much better situation, so let's
try to get there.

It's not really that simple. Having independent sequence rows closer
together will have its own performance cost.

You are ignoring the performance costs associated with eating 100x more
shared buffer space than necessary.

regards, tom lane

#16 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#15)
Re: pg_sequence catalog

On 2016-08-31 13:59:48 -0400, Tom Lane wrote:

andres@anarazel.de (Andres Freund) writes:

On 2016-08-31 14:25:43 -0300, Alvaro Herrera wrote:

Yes, sure, we're still improving even if we stick to one-seq-per-bufpage,
but while we're at it, we might as well make it as good as we can. And
allowing multiple seqs per page seems a much better situation, so let's
try to get there.

It's not really that simple. Having independent sequence rows closer
together will have its own performance cost.

You are ignoring the performance costs associated with eating 100x more
shared buffer space than necessary.

I doubt that's measurable in any real-world scenario. You seldom have
hundreds of thousands of sequences that are all being selected from at
a high rate.

#17 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#16)
Re: pg_sequence catalog

Andres Freund <andres@anarazel.de> writes:

On 2016-08-31 13:59:48 -0400, Tom Lane wrote:

You are ignoring the performance costs associated with eating 100x more
shared buffer space than necessary.

I doubt that's measurable in any real-world scenario. You seldom have
hundreds of thousands of sequences that are all being selected from at
a high rate.

If there are only a few sequences in the database, cross-sequence
contention is not going to be a big issue anyway. I think most of
the point of making this change at all is to make things work better
when you do have a boatload of sequences.

Also, we could probably afford to add enough dummy padding to the sequence
tuples so that they're each exactly 64 bytes, thereby having only one
or two active counters per cache line.

regards, tom lane

#18 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#17)
Re: pg_sequence catalog

On 2016-08-31 14:23:41 -0400, Tom Lane wrote:

Andres Freund <andres@anarazel.de> writes:

On 2016-08-31 13:59:48 -0400, Tom Lane wrote:

You are ignoring the performance costs associated with eating 100x more
shared buffer space than necessary.

I doubt that's measurable in any real-world scenario. You seldom have
hundreds of thousands of sequences that are all being selected from at
a high rate.

If there are only a few sequences in the database, cross-sequence
contention is not going to be a big issue anyway.

Isn't that *precisely* when it's going to matter? If you have 5 active
tables & sequences, where the latter previously used independent locks,
they'd now be contending on a single lock. If you have hundreds of
thousands of active sequences, I doubt individual page locks would
become a point of contention.

Andres

#19 Greg Stark
stark@mit.edu
In reply to: Andres Freund (#18)
Re: pg_sequence catalog

On Wed, Aug 31, 2016 at 3:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Uh, not as subtly as all that, because "select * from sequence" will
now return a different set of columns, which will flat out break a
lot of clients that use that method to get sequence properties.

So? Clients expect changes like this between major releases surely.
Subtle changes that cause silent breakage for end-users seems scarier
than unsubtle breakage that tool authors can fix.

On Wed, Aug 31, 2016 at 7:30 PM, Andres Freund <andres@anarazel.de> wrote:

Isn't that *precisely* when it's going to matter? If you have 5 active
tables & sequences where the latter previously used independent locks,
and they'd now be contending on a single lock. If you have hundreds of
thousands of active sequences, I doubt individual page locks would
become a point of contention.

I think even two sequences could be a point of contention if you, for
example, are using COPY to load data into two otherwise completely
independent tables in two separate processes.

But that just means the row needs to be padded out to a cache line,
no? Or was the concern about things like trying to pin the same page,
parse the same page header, follow nearby line pointers...? I'm not
sure how effective all that caching is today but it doesn't seem
impossible to imagine getting that all optimized away.

--
greg

#20 Simon Riggs
simon@2ndQuadrant.com
In reply to: Bruce Momjian (#19)
Re: pg_sequence catalog

On 4 September 2016 at 23:17, Greg Stark <stark@mit.edu> wrote:

On Wed, Aug 31, 2016 at 3:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Uh, not as subtly as all that, because "select * from sequence" will
now return a different set of columns, which will flat out break a
lot of clients that use that method to get sequence properties.

So? Clients expect changes like this between major releases surely.
Subtle changes that cause silent breakage for end-users seems scarier
than unsubtle breakage that tool authors can fix.

Agreed; some change in the behaviour of SELECT * FROM sequence is
effectively part of this proposal. I was going to make the same
comment myself.

I think we should balance the problems caused by not having a sequence
catalog against the problems caused by changing the currently somewhat
broken situation.

I vote in favour of applying Peter's idea/patch, though if that is not
acceptable I don't think it's worth spending further time on for any of
us; if that's the case, let's Return with Feedback and move on.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#20)
#22 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#21)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#22)
#24 Peter Eisentraut
peter_e@gmx.net
In reply to: Tom Lane (#23)
#25 Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#18)
#26 Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#25)
#27 Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#26)
#28 Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#27)
#29 Peter Eisentraut
peter_e@gmx.net
In reply to: Peter Eisentraut (#24)
#30 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Peter Eisentraut (#24)
#31 Peter Eisentraut
peter_e@gmx.net
In reply to: Andreas Karlsson (#30)
#32 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Peter Eisentraut (#31)
#33 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Andreas Karlsson (#32)
#34 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Peter Eisentraut (#29)
#35 Peter Eisentraut
peter_e@gmx.net
In reply to: Andreas Karlsson (#33)
#36 Peter Eisentraut
peter_e@gmx.net
In reply to: Andreas Karlsson (#34)
#37 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Peter Eisentraut (#36)
#38 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Andreas Karlsson (#37)
#39 Peter Eisentraut
peter_e@gmx.net
In reply to: Andreas Karlsson (#37)
#40 Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Peter Eisentraut (#39)
#41 Peter Eisentraut
peter_e@gmx.net
In reply to: Kuntal Ghosh (#40)
#42 Andreas Karlsson
andreas.karlsson@percona.com
In reply to: Peter Eisentraut (#41)
#43 Peter Eisentraut
peter_e@gmx.net
In reply to: Andreas Karlsson (#42)