PostgreSQL Limits and lack of documentation about them.

Started by David Rowley over 7 years ago · 28 messages · pgsql-hackers
#1 David Rowley
dgrowleyml@gmail.com

For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:
https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

Does anyone else have any thoughts about this?

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#2 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: David Rowley (#1)
Re: PostgreSQL Limits and lack of documentation about them.

On Fri, Oct 26, 2018 at 9:30 AM David Rowley <david.rowley@2ndquadrant.com>
wrote:

For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:

https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

I also tried to find such limits of PostgreSQL, but I couldn't find them.
+1 to add them to the docs.

Regards,
Haribabu Kommi
Fujitsu Australia

#3 Tsunakawa, Takayuki
tsunakawa.takay@jp.fujitsu.com
In reply to: David Rowley (#1)
RE: PostgreSQL Limits and lack of documentation about them.

From: David Rowley [mailto:david.rowley@2ndquadrant.com]

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

Does anyone else have any thoughts about this?

+1
As a user, I feel I would look for such information in an appendix like "A Database Limits" in Oracle's Database Reference manual:

https://docs.oracle.com/en/database/oracle/oracle-database/18/refrn/database-limits.html#GUID-ED26F826-DB40-433F-9C2C-8C63A46A3BFE

As a somewhat related topic, PostgreSQL doesn't mention the maximum values for numeric parameters. I have been asked several times questions like "what's the maximum value for max_connections?" and "how much memory can I use for work_mem?" I don't feel a strong need to specify those values, but I wonder if we should do something.

Regards
Takayuki Tsunakawa

#4 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: David Rowley (#1)
Re: PostgreSQL Limits and lack of documentation about them.

On 2018-Oct-26, David Rowley wrote:

For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:
https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

This was removed in
https://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=66760d73bca6

Making the /about/ page leaner is a good objective IMO, considering the
target audience of that page (not us), but I wonder if the content
should have been moved elsewhere. It's still in the wiki:
https://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F
but that doesn't seem great either.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#5 Narayanan V
vnarayanan.email@gmail.com
In reply to: David Rowley (#1)
Re: PostgreSQL Limits and lack of documentation about them.

+1 for inclusion in docs.

On Fri, Oct 26, 2018 at 4:00 AM David Rowley <david.rowley@2ndquadrant.com>
wrote:


For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:

https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

Does anyone else have any thoughts about this?

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#6 David Rowley
dgrowleyml@gmail.com
In reply to: Haribabu Kommi (#2)
Re: PostgreSQL Limits and lack of documentation about them.

On 26 October 2018 at 11:40, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:

On Fri, Oct 26, 2018 at 9:30 AM David Rowley <david.rowley@2ndquadrant.com>
wrote:

For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:

https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

I also tried to find such limits of PostgreSQL, but I couldn't find them.
+1 to add them to the docs.

I've attached a very rough patch which adds a new appendix section
named "Database Limitations". I've included what was mentioned in [1]
plus I've added a few other things that I thought should be mentioned.
I'm sure there will be many more ideas.

I'm not so sure about detailing limits of GUCs since the limits of
those are mentioned in pg_settings.

[1]: https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

v1-0001-Add-documentation-section-appendix-detailing-some.patch (application/octet-stream, +106 −1)
#7 John Naylor
john.naylor@enterprisedb.com
In reply to: David Rowley (#6)
Re: PostgreSQL Limits and lack of documentation about them.

On 10/30/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

On 26 October 2018 at 11:40, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Fri, Oct 26, 2018 at 9:30 AM David Rowley
<david.rowley@2ndquadrant.com>
wrote:

For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:

https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation. I only see a mention of the
1600 column limit in the CREATE TABLE docs. There's nothing central, and I
don't see any mention of the 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

I also tried to find such limits of PostgreSQL, but I couldn't find them.
+1 to add them to the docs.

I've attached a very rough patch which adds a new appendix section
named "Database Limitations". I've included what was mentioned in [1]
plus I've added a few other things that I thought should be mentioned.
I'm sure there will be many more ideas.

David,
Thanks for doing this. I haven't looked at the rendered output yet,
but I have some comments on the content.

+      <entry>Maximum Relation Size</entry>
+      <entry>32 TB</entry>
+      <entry>Limited by 2^32 pages per relation</entry>

I prefer "limited to" or "limited by the max number of pages per
relation, ...". I think pedantically it's 2^32 - 1, since that value
is used for InvalidBlockNumber. More importantly, that seems to be for
8kB pages. I imagine this would go up with a larger page size. Page
size might also be worth mentioning separately. Also max number of
relation file segments, if any.
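For what it's worth, the arithmetic behind the 32 TB figure can be sketched in a few lines (a rough illustration only; it assumes the default 8 kB BLCKSZ and that block numbers are 32-bit with the top value reserved for InvalidBlockNumber):

```python
# Max relation size = max addressable pages * page size.
BLCKSZ = 8192                  # default page size, in bytes
MAX_BLOCKS = 2**32 - 1         # 0xFFFFFFFF is reserved as InvalidBlockNumber

max_relation_bytes = MAX_BLOCKS * BLCKSZ
print(max_relation_bytes / 2**40)        # just under 32 TiB

# With a 32 kB page size the ceiling scales accordingly:
print((2**32 - 1) * 32768 / 2**40)       # just under 128 TiB
```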

+      <entry>Maximum Columns per Table</entry>
+      <entry>250 - 1600</entry>
+      <entry>Depending on column types. (More details here)</entry>

Would this also depend on page size? Also, I'd put this entry before this one:

+      <entry>Maximum Row Size</entry>
+      <entry>1600 GB</entry>
+      <entry>Assuming 1600 columns, each 1 GB in size</entry>

A toast pointer is 18 bytes, according to the docs, so I would guess
the number of toasted columns would actually be much less? I'll test
this on my machine sometime (not 1600GB, but the max number of toasted
columns per tuple).

+      <entry>Maximum Identifier Length</entry>
+      <entry>63 characters</entry>
+      <entry></entry>

Can this be increased by recompiling, if not conveniently?

+      <entry>Maximum Indexed Columns</entry>
+      <entry>32</entry>
+      <entry>Can be increased by recompiling
<productname>PostgreSQL</productname></entry>

How about the max number of included columns in a covering index?

I'm not so sure about detailing limits of GUCs since the limits of
those are mentioned in pg_settings.

Maybe we could just have a link to that section in the docs.

--
-John Naylor

#8 David Rowley
dgrowleyml@gmail.com
In reply to: John Naylor (#7)
Re: PostgreSQL Limits and lack of documentation about them.

On 1 November 2018 at 04:40, John Naylor <jcnaylor@gmail.com> wrote:

Thanks for doing this. I haven't looked at the rendered output yet,
but I have some comments on the content.

+      <entry>Maximum Relation Size</entry>
+      <entry>32 TB</entry>
+      <entry>Limited by 2^32 pages per relation</entry>

I prefer "limited to" or "limited by the max number of pages per
relation, ...". I think pedantically it's 2^32 - 1, since that value
is used for InvalidBlockNumber. More importantly, that seems to be for
8kB pages. I imagine this would go up with a larger page size. Page
size might also be worth mentioning separately. Also max number of
relation file segments, if any.

Thanks for looking at this.

I've changed this and added a mention of BLCKSZ. I was a bit unclear
on how much internal detail should go into this.

+      <entry>Maximum Columns per Table</entry>
+      <entry>250 - 1600</entry>
+      <entry>Depending on column types. (More details here)</entry>

Would this also depend on page size? Also, I'd put this entry before this one:

+      <entry>Maximum Row Size</entry>
+      <entry>1600 GB</entry>
+      <entry>Assuming 1600 columns, each 1 GB in size</entry>

A toast pointer is 18 bytes, according to the docs, so I would guess
the number of toasted columns would actually be much less? I'll test
this on my machine sometime (not 1600GB, but the max number of toasted
columns per tuple).

I did try a table with 1600 text columns and then inserted values of
several kB each. With BIGINT columns, the row was too large for
the page. I've never really had a chance to explore these limits
before, so I guess this is about the time.

+      <entry>Maximum Identifier Length</entry>
+      <entry>63 characters</entry>
+      <entry></entry>

Can this be increased by recompiling, if not conveniently?

Yeah. I added a note about that.

+      <entry>Maximum Indexed Columns</entry>
+      <entry>32</entry>
+      <entry>Can be increased by recompiling
<productname>PostgreSQL</productname></entry>

How about the max number of included columns in a covering index?

Those are included in the limit. I updated the text.

I'm not so sure about detailing limits of GUCs since the limits of
those are mentioned in pg_settings.

Maybe we could just have a link to that section in the docs.

That's likely a good idea. I was just unable to find anything better
than the link to the pg_settings view.

I've attached an updated patch, again it's just intended as an aid for
discussions at this stage. Also included the rendered html.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

dblimits.html (text/html, charset=UTF-8)
v2-0001-Add-documentation-section-appendix-detailing-some.patch (application/octet-stream, +109 −1)
#9 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: David Rowley (#8)
Re: PostgreSQL Limits and lack of documentation about them.

On Oct 31, 2018, at 5:22 PM, David Rowley <david.rowley@2ndquadrant.com> wrote:

On 1 November 2018 at 04:40, John Naylor <jcnaylor@gmail.com> wrote:

Thanks for doing this. I haven't looked at the rendered output yet,
but I have some comments on the content.

+      <entry>Maximum Relation Size</entry>
+      <entry>32 TB</entry>
+      <entry>Limited by 2^32 pages per relation</entry>

I prefer "limited to" or "limited by the max number of pages per
relation, ...". I think pedantically it's 2^32 - 1, since that value
is used for InvalidBlockNumber. More importantly, that seems to be for
8kB pages. I imagine this would go up with a larger page size. Page
size might also be worth mentioning separately. Also max number of
relation file segments, if any.

Thanks for looking at this.

I've changed this and added mention of BLKSIZE. I was a bit unclear
on how much internal detail should go into this.

It’s a bit misleading to say “Can be increased by increasing BLCKSZ and recompiling”, since you’d also need to re-run initdb. Given that messing with BLCKSZ is pretty uncommon, I would simply put a note somewhere that mentions that these values assume the default BLCKSZ of 8192.

+      <entry>Maximum Columns per Table</entry>
+      <entry>250 - 1600</entry>
+      <entry>Depending on column types. (More details here)</entry>

Would this also depend on page size? Also, I'd put this entry before this one:

+      <entry>Maximum Row Size</entry>
+      <entry>1600 GB</entry>
+      <entry>Assuming 1600 columns, each 1 GB in size</entry>

A toast pointer is 18 bytes, according to the docs, so I would guess
the number of toasted columns would actually be much less? I'll test
this on my machine sometime (not 1600GB, but the max number of toasted
columns per tuple).

I did try a table with 1600 text columns and then inserted values of
several kB each. With BIGINT columns, the row was too large for
the page. I've never really had a chance to explore these limits
before, so I guess this is about the time.

Hmm… 18 bytes doesn’t sound right, at least not for the Datum. Offhand I’d expect it to be the small (1 byte) varlena header + an OID (4 bytes). Even then I don’t understand how 1600 text columns would work; the data area of a tuple should be limited to ~2000 bytes, and 2000/5 = 400.

#10 John Naylor
john.naylor@enterprisedb.com
In reply to: Jim Nasby (#9)
Re: PostgreSQL Limits and lack of documentation about them.

On 11/1/18, Nasby, Jim <nasbyj@amazon.com> wrote:

Hmm… 18 bytes doesn’t sound right, at least not for the Datum. Offhand I’d
expect it to be the small (1 byte) varlena header + an OID (4 bytes). Even
then I don’t understand how 1600 text columns would work; the data area of a
tuple should be limited to ~2000 bytes, and 2000/5 = 400.

The wording in the docs (under Physical Storage) is "Allowing for the
varlena header bytes, the total size of an on-disk TOAST pointer datum
is therefore 18 bytes regardless of the actual size of the represented
value.", and as I understand it, it's

header + toast table oid + chunk_id + logical size + compressed size.

This is one area where visual diagrams would be nice.

-John Naylor

#11 Andrew Gierth
andrew@tao11.riddles.org.uk
In reply to: Jim Nasby (#9)
Re: PostgreSQL Limits and lack of documentation about them.

"Nasby" == Nasby, Jim <nasbyj@amazon.com> writes:

I did try a table with 1600 text columns and then inserted values of
several kB each. With BIGINT columns, the row was too large
for the page. I've never really had a chance to explore these
limits before, so I guess this is about the time.

Nasby> Hmm… 18 bytes doesn’t sound right, at least not for the Datum.
Nasby> Offhand I’d expect it to be the small (1 byte) varlena header +
Nasby> an OID (4 bytes). Even then I don’t understand how 1600 text
Nasby> columns would work; the data area of a tuple should be limited
Nasby> to ~2000 bytes, and 2000/5 = 400.

1600 text columns won't work unless the values are very short or null.

A toast pointer is indeed 18 bytes: 1 byte varlena header flagging it as
a toast pointer, 1 byte type tag, raw size, saved size, toast value oid,
toast table oid.

A tuple can be almost as large as a block; the block/4 threshold is only
the point at which the toaster is run, not a limit on tuple size.

So (with 8k blocks) the limit on the number of non-null external-toasted
columns is about 450, while you can have the full 1600 columns if they
are integers or smaller, or just over 1015 bigints. But you can have
1600 text columns if they average 4 bytes or less (excluding length
byte).

If you push too close to the limit, it may even be possible to overflow
the tuple size by setting fields to null, since the null bitmap is only
present if at least one field is null. So you can have 1010 non-null
bigints, but if you try and do 1009 non-null bigints and one null, it
won't fit (and nor will 999 non-nulls and 11 nulls, if I calculated
right).

(Note also that dropped columns DO count against the 1600 limit, and
also that they are (for new row versions) set to null and thus force the
null bitmap to be present.)
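The arithmetic above can be sketched roughly like this (an illustration only, assuming the default 8 kB BLCKSZ, 8-byte MAXALIGN, a 24-byte page header, a 4-byte line pointer, and a 23-byte heap tuple header; the real code has more moving parts):

```python
def maxalign(n, align=8):
    """Round n up to the next multiple of MAXALIGN (8 on 64-bit builds)."""
    return (n + align - 1) & ~(align - 1)

BLCKSZ = 8192
# A single tuple can fill nearly the whole page: the block minus the
# page header (24 bytes) and one line pointer (4 bytes), MAXALIGNed.
max_tuple = BLCKSZ - maxalign(24 + 4)     # 8160
data_area = max_tuple - maxalign(23)      # 8136, after the tuple header

# TOAST pointer: varlena header + tag + raw size + saved size
# + toast value oid + toast table oid = 1 + 1 + 4 + 4 + 4 + 4
TOAST_POINTER = 18

print(data_area // TOAST_POINTER)   # ~452 external-toasted columns
print(data_area // 8)               # 1017 bigint columns (no nulls)
```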

--
Andrew (irc:RhodiumToad)

#12 John Naylor
john.naylor@enterprisedb.com
In reply to: Andrew Gierth (#11)
Re: PostgreSQL Limits and lack of documentation about them.

On 11/1/18, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:

So (with 8k blocks) the limit on the number of non-null external-toasted
columns is about 450, while you can have the full 1600 columns if they
are integers or smaller, or just over 1015 bigints. But you can have
1600 text columns if they average 4 bytes or less (excluding length
byte).

If you push too close to the limit, it may even be possible to overflow
the tuple size by setting fields to null, since the null bitmap is only
present if at least one field is null. So you can have 1010 non-null
bigints, but if you try and do 1009 non-null bigints and one null, it
won't fit (and nor will 999 non-nulls and 11 nulls, if I calculated
right).

Thanks for that, Andrew, that was insightful. I drilled down to get
the exact values:

Non-nullable columns:
text (4 bytes each or less): 1600
toasted text: 452
int: 1600
bigint: 1017

Nullable columns with one null value:
text (4 bytes each or less): 1600
toasted text: 449
int: 1600
bigint: 1002
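For what it's worth, the bigint figures line up with a quick back-of-envelope model (a sketch under assumed constants: 8 kB block, 24-byte page header, 4-byte line pointer, 23-byte tuple header, a null bitmap of one bit per column whenever any field is null, and 8-byte MAXALIGN):

```python
def maxalign(n):                 # round up to 8-byte MAXALIGN
    return (n + 7) & ~7

def row_fits(ncols, nulls, width, blcksz=8192):
    """Does a row of ncols fixed-width columns, nulls of them NULL, fit?"""
    bitmap = (ncols + 7) // 8 if nulls else 0
    header = maxalign(23 + bitmap)         # tuple header + null bitmap
    data = width * (ncols - nulls)         # non-null column payloads
    return header + data <= blcksz - maxalign(24 + 4)  # page hdr + line ptr

print(row_fits(1017, 0, 8))   # True: 1017 non-null bigints
print(row_fits(1018, 0, 8))   # False
print(row_fits(1002, 1, 8))   # True: 1002 bigints, one of them null
print(row_fits(1003, 1, 8))   # False
```

The same function reproduces the drop from 1017 to 1002 columns once the null bitmap has to be stored.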

-John Naylor

#13 John Naylor
john.naylor@enterprisedb.com
In reply to: David Rowley (#8)
Re: PostgreSQL Limits and lack of documentation about them.

On 11/1/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

I've attached an updated patch, again it's just intended as an aid for
discussions at this stage. Also included the rendered html.

Looks good so far. Based on experimentation with toasted columns, it
seems the largest row size is 452GB, but I haven't tried that on my
laptop. :-) As for the number-of-column limits, it's a matter of how
much detail we want to include. With all the numbers in my previous
email, that could probably use its own table if we include them all.

On 11/1/18, Nasby, Jim <nasbyj@amazon.com> wrote:

It’s a bit misleading to say “Can be increased by increasing BLCKSZ and
recompiling”, since you’d also need to re-run initdb. Given that messing with
BLCKSZ is pretty uncommon, I would simply put a note somewhere that mentions
that these values assume the default BLCKSZ of 8192.

+1

-John Naylor

#14 Robert Haas
robertmhaas@gmail.com
In reply to: John Naylor (#13)
Re: PostgreSQL Limits and lack of documentation about them.

On Tue, Nov 6, 2018 at 6:01 AM John Naylor <jcnaylor@gmail.com> wrote:

On 11/1/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

I've attached an updated patch, again it's just intended as an aid for
discussions at this stage. Also included the rendered html.

Looks good so far. Based on experimentation with toasted columns, it
seems the largest row size is 452GB, but I haven't tried that on my
laptop. :-) As for the number-of-column limits, it's a matter of how
much detail we want to include. With all the numbers in my previous
email, that could probably use its own table if we include them all.

There are a lot of variables here. A particular row size may work for
one encoding and not for another.

IMHO, documenting that you can get up to 1600 integer columns but only
1002 bigint columns doesn't really help anybody, because nobody has a
table with only one type of column, and people usually want to have
some latitude to run ALTER TABLE commands later.

It might be useful for some users to explain that certain things
should work for values < X, may work for values between X and Y, and
will definitely not work above Y. Or maybe we can provide a narrative
explanation rather than just a table of numbers. Or both. But I
think trying to provide a table of exact cutoffs is sort of like
tilting at windmills.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#15 David Rowley
dgrowleyml@gmail.com
In reply to: Robert Haas (#14)
Re: PostgreSQL Limits and lack of documentation about them.

On 8 November 2018 at 10:02, Robert Haas <robertmhaas@gmail.com> wrote:

IMHO, documenting that you can get up to 1600 integer columns but only
1002 bigint columns doesn't really help anybody, because nobody has a
table with only one type of column, and people usually want to have
some latitude to run ALTER TABLE commands later.

It might be useful for some users to explain that certain things
should work for values < X, may work for values between X and Y, and
will definitely not work above Y. Or maybe we can provide a narrative
explanation rather than just a table of numbers. Or both. But I
think trying to provide a table of exact cutoffs is sort of like
tilting at windmills.

I added something along those lines in a note below the table. Likely
there are better ways to format all this, but I'm trying to settle
what the content should be first.

Hopefully I've addressed the other things mentioned too.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

dblimits.html (text/html, charset=UTF-8)
v3-0001-Add-documentation-section-appendix-detailing-some.patch (application/octet-stream, +127 −1)
#16 John Naylor
john.naylor@enterprisedb.com
In reply to: David Rowley (#15)
Re: PostgreSQL Limits and lack of documentation about them.

On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

On 8 November 2018 at 10:02, Robert Haas <robertmhaas@gmail.com> wrote:

It might be useful for some users to explain that certain things
should work for values < X, may work for values between X and Y, and
will definitely not work above Y. Or maybe we can provide a narrative
explanation rather than just a table of numbers. Or both. But I
think trying to provide a table of exact cutoffs is sort of like
tilting at windmills.

I added something along those lines in a note below the table. Likely
there are better ways to format all this, but I'm trying to settle
what the content should be first.

The language seems fine to me.

-John Naylor

#17 Peter Eisentraut
peter_e@gmx.net
In reply to: David Rowley (#15)
Re: PostgreSQL Limits and lack of documentation about them.

On 08/11/2018 04:13, David Rowley wrote:

I added something along those lines in a note below the table. Likely
there are better ways to format all this, but I'm trying to settle
what the content should be first.

Hopefully I've addressed the other things mentioned too.

Could you adjust this to use fewer capital letters, unless they start
sentences or similar?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#18 David Rowley
dgrowleyml@gmail.com
In reply to: Peter Eisentraut (#17)
Re: PostgreSQL Limits and lack of documentation about them.

On 8 November 2018 at 22:46, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

Could you adjust this to use fewer capital letters, unless they start
sentences or similar?

Yeah. Changed in the attached.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

v4-0001-Add-documentation-section-appendix-detailing-some.patch (application/octet-stream, +134 −1)
#19 John Naylor
john.naylor@enterprisedb.com
In reply to: David Rowley (#18)
Re: PostgreSQL Limits and lack of documentation about them.

On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

On 8 November 2018 at 22:46, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

Could you adjust this to use fewer capital letters, unless they start
sentences or similar?

Yeah. Changed in the attached.

Looks good to me. Since there have been no new suggestions for a few
days, I'll mark it ready for committer.

-John Naylor

#20 David Rowley
dgrowleyml@gmail.com
In reply to: John Naylor (#19)
Re: PostgreSQL Limits and lack of documentation about them.

On 13 November 2018 at 19:46, John Naylor <jcnaylor@gmail.com> wrote:

On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

On 8 November 2018 at 22:46, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

Could you adjust this to use fewer capital letters, unless they start
sentences or similar?

Yeah. Changed in the attached.

Looks good to me. Since there have been no new suggestions for a few
days, I'll mark it ready for committer.

Thanks for your review. I don't think these initially need to include
100% of the limits. If we stumble on things later that seem worth
including, we'll have a place to write them down.

The only other thing that sprung to my mind was the maximum tables per
query. This is currently limited to 64999 (not counting partitioned
tables and inheritance parents twice), but I kinda think that if we feel
the need to document it, then we might as well just raise
the limit. It seems a bit arbitrarily set at the moment. I don't see
any reason it couldn't be higher. Although, if it was too high we'd
start hitting things like palloc() size limits on simple_rte_array.
I'm inclined to not bother mentioning it.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: David Rowley (#20)
#22 David Rowley
dgrowleyml@gmail.com
In reply to: Tom Lane (#21)
#23 Peter Eisentraut
peter_e@gmx.net
In reply to: David Rowley (#22)
#24 Steve Crawford
scrawford@pinpointresearch.com
In reply to: Peter Eisentraut (#23)
#25 David Rowley
dgrowleyml@gmail.com
In reply to: Peter Eisentraut (#23)
#26 David Rowley
dgrowleyml@gmail.com
In reply to: Steve Crawford (#24)
#27 Peter Eisentraut
peter_e@gmx.net
In reply to: David Rowley (#26)
#28 David Rowley
dgrowleyml@gmail.com
In reply to: Peter Eisentraut (#27)