BUG #14244: wrong suffix for pg_size_pretty()

Started by Noname over 9 years ago (40 messages)
#1 Noname
thomas.berger@1und1.de

The following bug has been logged on the website:

Bug reference: 14244
Logged by: Thomas Berger
Email address: thomas.berger@1und1.de
PostgreSQL version: 9.5.3
Operating system: any
Description:

pg_size_pretty uses the suffix "kB" (kilobyte, 10^3 bytes), but the returned
value is actually "KB" or "KiB" (kibibyte, 2^10 bytes). This is misleading and
should be fixed. See also https://en.wikipedia.org/wiki/Kibibyte

=# select pg_size_pretty(1024000::bigint);
pg_size_pretty
----------------
1000 kB

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

#2 Bruce Momjian
bruce@momjian.us
In reply to: Noname (#1)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Jul 12, 2016 at 01:36:38PM +0000, thomas.berger@1und1.de wrote:

The following bug has been logged on the website:

Bug reference: 14244
Logged by: Thomas Berger
Email address: thomas.berger@1und1.de
PostgreSQL version: 9.5.3
Operating system: any
Description:

pg_size_pretty uses the suffix "kB" (kilobyte, 10^3 bytes), but the returned
value is actually "KB" or "KiB" (kibibyte, 2^10 bytes). This is misleading and
should be fixed. See also https://en.wikipedia.org/wiki/Kibibyte

=# select pg_size_pretty(1024000::bigint);
pg_size_pretty
----------------
1000 kB

(Thread moved to hackers.)

The Postgres docs specify that kB is based on 1024 or 2^10:

https://www.postgresql.org/docs/9.6/static/functions-admin.html

Note: The units kB, MB, GB and TB used by the functions
pg_size_pretty and pg_size_bytes are defined using powers of 2 rather
than powers of 10, so 1kB is 1024 bytes, 1MB is 1024^2 = 1048576 bytes,
and so on.
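That documented 1024-based reading can be sketched quickly (a simplified illustration in Python, not the actual C implementation, whose rounding and switchover thresholds differ):

```python
# Simplified sketch of a 1024-based "pretty" formatter, illustrating why
# pg_size_pretty(1024000) prints "1000 kB": the kB unit here is 2^10 bytes.
# (Illustration only -- the real pg_size_pretty rounds and chooses its
# switchover points differently.)
def size_pretty_binary(n_bytes):
    for suffix, factor in (("TB", 1024**4), ("GB", 1024**3),
                           ("MB", 1024**2), ("kB", 1024)):
        if n_bytes >= 10 * factor:      # crude switchover threshold
            return "%d %s" % (n_bytes // factor, suffix)
    return "%d bytes" % n_bytes

print(size_pretty_binary(1024000))      # 1024000 / 1024 = 1000 -> "1000 kB"
```

Under a strict SI reading of "kB", 1024000 bytes would instead be 1024 kB, which is the mismatch the report complains about.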

These prefixes were introduced to GUC variable specification in 2006:

commit b517e653489f733893d61e7a84c118325394471c
Author: Peter Eisentraut <peter_e@gmx.net>
Date: Thu Jul 27 08:30:41 2006 +0000

Allow units to be specified with configuration settings.

and added to postgresql.conf:

# Memory units: kB = kilobytes     Time units: ms  = milliseconds
#               MB = megabytes                 s   = seconds
#               GB = gigabytes                 min = minutes
#               TB = terabytes                 h   = hours
#                                              d   = days

and the units were copied when pg_size_pretty() was implemented. These
unit names come from the International System of Units (SI)/metric
system. However, SI is power-of-10-based, and we simply re-purposed the
names to be 1024 or 2^10-based.

However, that is not the end of the story. Things have moved forward
since 2006 and there is now firm support for either KB or KiB to be
1024-based units. This blog post explains the current state of prefix
specification:

http://pchelp.ricmedia.com/kilobytes-megabytes-gigabytes-terabytes-explained/

and here is a summary for 1000/1024-based units:

    Kilobyte (Binary, JEDEC)     KB    1024
    Kilobyte (Decimal, Metric)   kB    1000
    Kibibyte (Binary, IEC)       KiB   1024

You will notice that none of these list kB as 1024, which explains this
bug report.

Yes, we have redefined kB, and documented its use in postgresql.conf and
pg_size_pretty(), but it does not match any recognized standard.
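The mismatch can be stated mechanically. Here is a small sketch of the table above (the mapping comes from the blog summary quoted earlier, not from Postgres code):

```python
# The multiplier each standard assigns to its kilo-level symbol, per the
# summary table above.
KILO = {
    "KB":  ("JEDEC, binary",   1024),
    "kB":  ("Metric, decimal", 1000),
    "KiB": ("IEC, binary",     1024),
}

# Spellings that any of these standards define as 1024 bytes:
binary_spellings = sorted(sym for sym, (std, mult) in KILO.items()
                          if mult == 1024)
print(binary_spellings)   # ['KB', 'KiB'] -- lowercase-k "kB" is absent,
                          # yet 1024 is exactly how Postgres documents kB
```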

I am thinking Postgres 10 would be a good time to switch to KB as a
1024-based prefix. Unfortunately, there is no similar fix for MB, GB,
etc.: since 'm' means 'milli', 'mB' was never used, so in both JEDEC and
metric usage MB is ambiguous between 1000-based and 1024-based.

IEC does give us a unique spelling for each binary prefix, e.g. MiB and
GiB, which might be what we want to use, but that might be too big a
change, and I rarely see those used.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#2)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Fri, Jul 29, 2016 at 08:18:38PM -0400, Bruce Momjian wrote:

However, that is not the end of the story. Things have moved forward
since 2006 and there is now firm support for either KB or KiB to be
1024-based units. This blog post explains the current state of prefix
specification:

http://pchelp.ricmedia.com/kilobytes-megabytes-gigabytes-terabytes-explained/

and here is a summary for 1000/1024-based units:

    Kilobyte (Binary, JEDEC)     KB    1024
    Kilobyte (Decimal, Metric)   kB    1000
    Kibibyte (Binary, IEC)       KiB   1024

Oh, also, here is a Wikipedia article that has a nice chart on the top
right:

https://en.wikipedia.org/wiki/Binary_prefix

and a post that explains some of the background:

http://superuser.com/questions/938234/size-of-files-in-windows-os-its-kb-or-kb

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#4 David G. Johnston
david.g.johnston@gmail.com
In reply to: Bruce Momjian (#2)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Jul 12, 2016 at 01:36:38PM +0000, thomas.berger@1und1.de wrote:

The following bug has been logged on the website:

Bug reference: 14244
Logged by: Thomas Berger
Email address: thomas.berger@1und1.de
PostgreSQL version: 9.5.3
Operating system: any
Description:

pg_size_pretty uses the suffix "kB" (kilobyte, 10^3 bytes), but the returned
value is actually "KB" or "KiB" (kibibyte, 2^10 bytes). This is misleading and
should be fixed. See also https://en.wikipedia.org/wiki/Kibibyte

=# select pg_size_pretty(1024000::bigint);
pg_size_pretty
----------------
1000 kB

(Thread moved to hackers.)

Yes, we have redefined kB, and documented its use in postgresql.conf and
pg_size_pretty(), but it does not match any recognized standard.

After bouncing on this for a bit I'm inclined to mark the bug itself
"won't fix" but introduce a "to_binary_iso" function (I'm hopeful a better
name will emerge...) that will output a number using ISO binary suffixes.
I would document this under 9.8 "data type formatting functions" instead
of within system functions.

pg_size_pretty output can continue in its defined role of being usable as
input to a GUC variable, keeping backward compatibility. Add a note near
its definition to use "to_binary_iso" for a standard-conforming output
string.

David J.

#5 Pavel Stehule
pavel.stehule@gmail.com
In reply to: David G. Johnston (#4)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

2016-07-30 3:47 GMT+02:00 David G. Johnston <david.g.johnston@gmail.com>:

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Jul 12, 2016 at 01:36:38PM +0000, thomas.berger@1und1.de wrote:

The following bug has been logged on the website:

Bug reference: 14244
Logged by: Thomas Berger
Email address: thomas.berger@1und1.de
PostgreSQL version: 9.5.3
Operating system: any
Description:

pg_size_pretty uses the suffix "kB" (kilobyte, 10^3 bytes), but the returned
value is actually "KB" or "KiB" (kibibyte, 2^10 bytes). This is misleading and
should be fixed. See also https://en.wikipedia.org/wiki/Kibibyte

=# select pg_size_pretty(1024000::bigint);
pg_size_pretty
----------------
1000 kB

(Thread moved to hackers.)

Yes, we have redefined kB, and documented its use in postgresql.conf and
pg_size_pretty(), but it does not match any recognized standard.

After bouncing on this for a bit I'm inclined to mark the bug itself
"won't fix" but introduce a "to_binary_iso" function (I'm hopeful a better
name will emerge...) that will output a number using ISO binary suffixes.
I would document this under 9.8 "data type formatting functions" instead
of within system functions.

pg_size_pretty output can continue in its defined role of being usable as
input to a GUC variable, keeping backward compatibility. Add a note near
its definition to use "to_binary_iso" for a standard-conforming output
string.

We talked about this issue when I wrote the function pg_size_bytes. It is
hard to change these functions after years of usage. A new set of
functions could be better:

pg_iso_size_pretty();
pg_iso_size_bytes();

or shorter name

pg_isize_pretty();
pg_isize_bytes();

Regards

Pavel

David J.

#6 Greg Stark
stark@mit.edu
In reply to: David G. Johnston (#4)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Sat, Jul 30, 2016 at 2:47 AM, David G. Johnston
<david.g.johnston@gmail.com> wrote:

After bouncing on this for a bit I'm inclined to mark the bug itself "won't
fix" but introduce a "to_binary_iso" function (I'm hopeful a better name
will emerge...) that will output a number using ISO binary suffixes. I
would document this under 9.8 "data type formatting functions" instead of
within system functions.

I think Bruce's summary is a bit revisionist. All these standards are
attempts to reconcile two traditions that have been in conflict for
decades. The conflict exists for a reason, though, and the tradition of
using powers of 2 is well-ingrained in plenty of practice and software,
not just Postgres.

Personally I'm pretty satisfied with the current mode of operation
because I think powers of 2 are vastly more useful and more likely to
be what the user actually wants. You would be hard pressed to find any
users actually typing KiB or MiB in config files and being surprised
they don't work or any users typing work_mem=100MB and being surprised
that they're not getting 95.367 MiB.
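The arithmetic behind that number, just restating Greg's example:

```python
# 100 "strict SI" megabytes expressed in binary mebibytes: this is the
# value a strict-SI reading of work_mem=100MB would correspond to.
decimal_bytes = 100 * 10**6       # 100 MB, powers of 10
in_mib = decimal_bytes / 2**20    # same byte count in MiB, powers of 2
print(round(in_mib, 3))           # 95.367
```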

If you really want to support a strict interpretation of the SI
standards then I don't see anything wrong with having a GUC. It
doesn't change the semantics of SQL parsing so the worst-case downside
is that if you change the setting and then reload a config file or if
you move a setting from one place in a config file to another the
interpretation of the config file would change. The best practice
would probably be to set this config at the top of the config file and
nowhere else.

I would suggest having a GUC like "strict_si_units" with false as the
default. If it's true then units like KiB and KB are both accepted and
mean different things. If it's false then still accept both but treat
them as synonyms meaning powers of 2. This means users who don't care
can continue using convenient powers of 2 everywhere without thinking
about it and users who do can start using the new-fangled SI units
(and have the pitfall of accidentally specifying in units of powers of
10).
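A rough sketch of those proposed semantics (the GUC name strict_si_units and all behavior below are this thread's proposal, not an actual PostgreSQL setting):

```python
# Hypothetical unit lookup under the proposed strict_si_units GUC.
# With the flag off (the default), all spellings remain 1024-based
# synonyms; with it on, SI and IEC spellings diverge.
def unit_multiplier(unit, strict_si_units=False):
    if not strict_si_units:
        # legacy behaviour: everything is a power of 2
        legacy = {"kB": 1024, "KB": 1024, "KiB": 1024,
                  "MB": 1024**2, "MiB": 1024**2}
        return legacy[unit]
    strict = {"kB": 1000, "MB": 1000**2,     # true SI, powers of 10
              "KiB": 1024, "MiB": 1024**2}   # IEC, powers of 2
    return strict[unit]

print(unit_multiplier("KiB"))                        # 1024 either way
print(unit_multiplier("kB", strict_si_units=True))   # 1000: the SI pitfall
```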

For outputs like pg_size_pretty, SHOW, and pg_settings you could
either say to always use KiB so that the outputs are always correct to
use regardless of the setting of strict_si_units or you could have it
print KB et al when strict_si_units is false -- the latter have the
advantage that outputs could be copied to older versions safely but
the disadvantage that if you change the setting then the
interpretation of existing config files change. I think it would be
better to print KiB/MiB etc always. I suppose there's the alternative
of trying to guess which unit results in the most concise display but
that seems unnecessarily baroque and likely to just hide mistakes
rather than help.

--
greg


#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Stark (#6)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

Greg Stark <stark@mit.edu> writes:

I think Bruce's summary is a bit revisionist.

I would say it's a tempest in a teapot.

What I think we should do is accept "kb" and the rest case-insensitively,
print them all in all-upper-case always, and tell standards pedants
to get lost. The idea of introducing either a GUC or new function names
is just silly; it will cause far more confusion and user code breakage
than will result from just leaving well enough alone.

regards, tom lane


#8 David G. Johnston
david.g.johnston@gmail.com
In reply to: Tom Lane (#7)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Sat, Jul 30, 2016 at 10:35 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Greg Stark <stark@mit.edu> writes:

I think Bruce's summary is a bit revisionist.

I would say it's a tempest in a teapot.

What I think we should do is accept "kb" and the rest case-insensitively,
print them all in all-upper-case always, and tell standards pedants
to get lost. The idea of introducing either a GUC or new function names
is just silly; it will cause far more confusion and user code breakage
than will result from just leaving well enough alone.

I wouldn't mind fixing case sensitivity in the process... but I don't
think we need to touch the GUC infrastructure at all.

For a product that has a reasonably high regard for the SQL standard, I'd
like to at least keep an open mind about other relevant standards, and if
accommodation is as simple as writing a new function I'd see no reason to
reject such a patch. pg_size_pretty never did seem like a good name for a
function with its behavior... let's be open to accepting an improved
version without a pg_ prefix.

We could even avoid a whole new function and add an "iB" template pattern
to the to_char function - although I'm not sure that wouldn't be more
confusing than helpful in practice.

David J.

#9 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#7)
2 attachment(s)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Sat, Jul 30, 2016 at 10:35:58AM -0400, Tom Lane wrote:

Greg Stark <stark@mit.edu> writes:

I think Bruce's summary is a bit revisionist.

I would say it's a tempest in a teapot.

What I think we should do is accept "kb" and the rest case-insensitively,
print them all in all-upper-case always, and tell standards pedants
to get lost. The idea of introducing either a GUC or new function names
is just silly; it will cause far more confusion and user code breakage
than will result from just leaving well enough alone.

I agree that a GUC and new functions are overkill --- we should just
decide on the format we want to output and what to support for input.

As logical as the IEC format appears, I just don't think the Ki/Mi/Gi
prefixes are used widely enough for us to use it --- I think it will
cause too many problem reports:

https://en.wikipedia.org/wiki/Binary_prefix

I have developed two possible patches for PG 10 --- the first one merely
allows "KB" to be used in addition to the existing "kB", and documents
this as an option.

The second patch does what Tom suggests above by outputting only "KB",
and it supports "kB" for backward compatibility. What it doesn't do is
to allow arbitrary case, which I think would be a step backward. The
second patch actually does match the JEDEC standard, except for allowing
"kB".

I also just applied a doc patch that increases case and spacing
consistency in the use of kB/MB/GB/TB.
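The first patch's matching rule can be paraphrased like this (a Python paraphrase of the strcmp change in the attached kilo.diff, not the C code itself):

```python
# Paraphrase of the kilo.diff matching rule: a requested unit matches a
# table entry either exactly, or via the single JEDEC special case that
# lets "KB" stand in for the table's "kB" entry.
def unit_matches(requested, table_unit):
    if requested == table_unit:
        return True
    return requested == "KB" and table_unit == "kB"

print(unit_matches("KB", "kB"))   # True: JEDEC spelling now accepted
print(unit_matches("Kb", "kB"))   # False: arbitrary case still rejected
```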

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

Attachments:

kilo.diff (text/x-diff; charset=us-ascii)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
new file mode 100644
index 6ac5184..40038ac
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
*************** typedef struct
*** 694,700 ****
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
--- 694,700 ----
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\"/\"KB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
*************** convert_to_base_unit(int64 value, const
*** 5322,5328 ****
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			strcmp(unit, table[i].unit) == 0)
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
--- 5322,5331 ----
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			(strcmp(unit, table[i].unit) == 0 ||
! 			 /* support the JEDEC standard which uses "KB" for 1024 */
! 			 (strcmp(unit, "KB") == 0 &&
! 			  strcmp(table[i].unit, "kB") == 0)))
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
kilo2.diff (text/x-diff; charset=us-ascii)
diff --git a/configure b/configure
new file mode 100755
index b49cc11..8466e5a
*** a/configure
--- b/configure
*************** Optional Packages:
*** 1502,1511 ****
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in kB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in kB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
--- 1502,1511 ----
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in KB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in KB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
*************** case ${blocksize} in
*** 3550,3557 ****
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}kB" >&5
! $as_echo "${blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3550,3557 ----
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}KB" >&5
! $as_echo "${blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
*************** case ${wal_blocksize} in
*** 3638,3645 ****
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}kB" >&5
! $as_echo "${wal_blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3638,3645 ----
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}KB" >&5
! $as_echo "${wal_blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
diff --git a/configure.in b/configure.in
new file mode 100644
index 5da4f74..2ed6298
*** a/configure.in
--- b/configure.in
*************** AC_SUBST(enable_tap_tests)
*** 250,256 ****
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in kB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
--- 250,256 ----
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in KB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
*************** case ${blocksize} in
*** 262,268 ****
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}kB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
--- 262,268 ----
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}KB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
*************** AC_DEFINE_UNQUOTED([RELSEG_SIZE], ${RELS
*** 314,320 ****
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in kB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
--- 314,320 ----
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in KB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
*************** case ${wal_blocksize} in
*** 327,333 ****
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}kB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
--- 327,333 ----
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}KB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml
new file mode 100644
index 38e6f50..34d87b3
*** a/doc/src/sgml/auto-explain.sgml
--- b/doc/src/sgml/auto-explain.sgml
*************** LOG:  duration: 3.651 ms  plan:
*** 263,269 ****
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4kB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
--- 263,269 ----
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4KB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index cbb333f..a0678b2
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***************
*** 4021,4027 ****
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2kB).
    </para>
  
    <para>
--- 4021,4027 ----
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2KB).
    </para>
  
    <para>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index b9581d9..23d666a
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***************
*** 81,87 ****
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>kB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
--- 81,87 ----
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>KB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
*************** include_dir 'conf.d'
*** 1903,1909 ****
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512kB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
--- 1903,1909 ----
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512KB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
*************** include_dir 'conf.d'
*** 2481,2491 ****
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64kB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32kB</literal> will be
!         treated as <literal>32kB</literal>.
          This parameter can only be set at server start.
         </para>
  
--- 2481,2491 ----
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64KB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32KB</literal> will be
!         treated as <literal>32KB</literal>.
          This parameter can only be set at server start.
         </para>
  
*************** include_dir 'conf.d'
*** 2660,2666 ****
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256kB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
--- 2660,2666 ----
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256KB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
new file mode 100644
index a30e25c..b917bdd
*** a/doc/src/sgml/ecpg.sgml
--- b/doc/src/sgml/ecpg.sgml
*************** if (*(int2 *)sqldata->sqlvar[i].sqlind !
*** 8165,8171 ****
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32kB.
         </para>
        </listitem>
       </varlistentry>
--- 8165,8171 ----
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32KB.
         </para>
        </listitem>
       </varlistentry>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 971e642..64f347e
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*************** postgres=# SELECT * FROM pg_xlogfile_nam
*** 18788,18809 ****
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, kB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, kB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units kB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
--- 18788,18809 ----
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, KB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, KB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units KB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1KB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml
new file mode 100644
index fccfd32..29be58b
*** a/doc/src/sgml/ltree.sgml
--- b/doc/src/sgml/ltree.sgml
***************
*** 31,37 ****
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65kB, but keeping it under 2kB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org"></ulink>) is about 240 bytes.
--- 31,37 ----
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65KB, but keeping it under 2KB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org"></ulink>) is about 240 bytes.
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
new file mode 100644
index 7bcbfa7..a30b276
*** a/doc/src/sgml/perform.sgml
--- b/doc/src/sgml/perform.sgml
*************** WHERE t1.unique1 &lt; 100 AND t1.unique2
*** 603,614 ****
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77kB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28kB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
--- 603,614 ----
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77KB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28KB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
new file mode 100644
index 8e701aa..3bab944
*** a/doc/src/sgml/protocol.sgml
--- b/doc/src/sgml/protocol.sgml
*************** The commands accepted in walsender mode
*** 1973,1979 ****
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32kB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
--- 1973,1979 ----
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32KB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
new file mode 100644
index ca1b767..e7c5c6e
*** a/doc/src/sgml/rules.sgml
--- b/doc/src/sgml/rules.sgml
*************** SELECT word FROM words ORDER BY word <->
*** 1079,1085 ****
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25kB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
--- 1079,1085 ----
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25KB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
new file mode 100644
index 4c5d748..02644ce
*** a/doc/src/sgml/runtime.sgml
--- b/doc/src/sgml/runtime.sgml
*************** psql: could not connect to server: No su
*** 659,665 ****
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1kB (more if running many copies of the server)</entry>
        </row>
  
        <row>
--- 659,665 ----
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1KB (more if running many copies of the server)</entry>
        </row>
  
        <row>
*************** kern.sysv.shmall=1024
*** 1032,1038 ****
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4kB pages on this platform.
         </para>
  
         <para>
--- 1032,1038 ----
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4KB pages on this platform.
         </para>
  
         <para>
*************** sysctl -w kern.sysv.shmall
*** 1075,1081 ****
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512kB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
--- 1075,1081 ----
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512KB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
*************** project.max-msg-ids=(priv,4096,deny)
*** 1180,1186 ****
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512kB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
--- 1180,1186 ----
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512KB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
new file mode 100644
index f40c790..6a22054
*** a/doc/src/sgml/spgist.sgml
--- b/doc/src/sgml/spgist.sgml
*************** typedef struct spgLeafConsistentOut
*** 755,761 ****
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8kB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
--- 755,761 ----
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8KB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
new file mode 100644
index 2d82953..aff3dd8
*** a/doc/src/sgml/storage.sgml
--- b/doc/src/sgml/storage.sgml
*************** Oversized-Attribute Storage Technique).
*** 303,309 ****
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8kB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
--- 303,309 ----
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8KB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
*************** bytes regardless of the actual size of t
*** 420,429 ****
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2kB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2kB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
--- 420,429 ----
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2KB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2KB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
*************** containing typical HTML pages and their
*** 491,497 ****
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7kB to fit.
  </para>
  
  </sect2>
--- 491,497 ----
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7KB to fit.
  </para>
  
  </sect2>
*************** an item is a row; in an index, an item i
*** 703,709 ****
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8kB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
--- 703,709 ----
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8KB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
new file mode 100644
index 2089040..1c2764b
*** a/doc/src/sgml/wal.sgml
--- b/doc/src/sgml/wal.sgml
***************
*** 176,182 ****
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8kB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
--- 176,182 ----
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8KB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
***************
*** 664,670 ****
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8kB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
--- 664,670 ----
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8KB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
***************
*** 738,744 ****
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8kB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
--- 738,744 ----
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8KB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
new file mode 100644
index c2e4fa3..027188f
*** a/src/backend/access/transam/multixact.c
--- b/src/backend/access/transam/multixact.c
***************
*** 119,125 ****
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8kB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
--- 119,125 ----
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8KB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
new file mode 100644
index f13f9c1..be0d277
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
*************** LogCheckpointEnd(bool restartpoint)
*** 8063,8069 ****
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d kB, estimate=%d kB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
--- 8063,8069 ----
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d KB, estimate=%d KB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
new file mode 100644
index dbd27e5..943823f
*** a/src/backend/commands/explain.c
--- b/src/backend/commands/explain.c
*************** show_sort_info(SortState *sortstate, Exp
*** 2163,2169 ****
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldkB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
--- 2163,2169 ----
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldKB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
*************** show_hash_info(HashState *hashstate, Exp
*** 2205,2211 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
--- 2205,2211 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
*************** show_hash_info(HashState *hashstate, Exp
*** 2216,2222 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
--- 2216,2222 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
new file mode 100644
index 7d8fc3e..ca9458b
*** a/src/backend/libpq/auth.c
--- b/src/backend/libpq/auth.c
*************** static int	CheckRADIUSAuth(Port *port);
*** 191,197 ****
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several kB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
--- 191,197 ----
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several KB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
new file mode 100644
index ba42753..6762a6b
*** a/src/backend/libpq/pqcomm.c
--- b/src/backend/libpq/pqcomm.c
*************** StreamConnection(pgsocket server_fd, Por
*** 740,749 ****
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32kB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8kB in earlier Windows
! 		 * versions, but was raised to 64kB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
--- 740,749 ----
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32KB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8KB in earlier Windows
! 		 * versions, but was raised to 64KB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
diff --git a/src/backend/main/main.c b/src/backend/main/main.c
new file mode 100644
index c018c90..3338843
*** a/src/backend/main/main.c
--- b/src/backend/main/main.c
*************** help(const char *progname)
*** 345,351 ****
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in kB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
--- 345,351 ----
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in KB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
new file mode 100644
index a0dba19..b560164
*** a/src/backend/replication/walsender.c
--- b/src/backend/replication/walsender.c
***************
*** 87,93 ****
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128kB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
--- 87,93 ----
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128KB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
new file mode 100644
index 03143f1..36480b5
*** a/src/backend/storage/file/fd.c
--- b/src/backend/storage/file/fd.c
*************** FileWrite(File file, char *buffer, int a
*** 1653,1659 ****
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dkB)",
  						temp_file_limit)));
  		}
  	}
--- 1653,1659 ----
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dKB)",
  						temp_file_limit)));
  		}
  	}
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index b185c1b..10a2d24
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*************** check_stack_depth(void)
*** 3114,3120 ****
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dkB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
--- 3114,3120 ----
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dKB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
*************** check_max_stack_depth(int *newval, void
*** 3177,3183 ****
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldkB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
--- 3177,3183 ----
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldKB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 0776f3b..86ea0e2
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
*************** pg_size_pretty(PG_FUNCTION_ARGS)
*** 542,548 ****
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT " kB",
  					 half_rounded(size));
  		else
  		{
--- 542,548 ----
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT " KB",
  					 half_rounded(size));
  		else
  		{
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 664,670 ****
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%s kB", numeric_to_cstring(size));
  		}
  		else
  		{
--- 664,670 ----
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%s KB", numeric_to_cstring(size));
  		}
  		else
  		{
*************** pg_size_bytes(PG_FUNCTION_ARGS)
*** 830,836 ****
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"kB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
--- 830,836 ----
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"KB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
new file mode 100644
index 6ac5184..798f651
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
*************** const char *const config_type_names[] =
*** 675,681 ****
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "kB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
--- 675,681 ----
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "KB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
*************** typedef struct
*** 694,722 ****
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"kB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"kB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
--- 694,722 ----
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"KB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"KB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"KB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
*************** static struct config_int ConfigureNamesI
*** 1930,1936 ****
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
--- 1930,1936 ----
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100KB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
*************** static struct config_int ConfigureNamesI
*** 2739,2745 ****
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8 kB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
--- 2739,2745 ----
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8 KB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
*************** ReportGUCOption(struct config_generic *
*** 5301,5307 ****
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
--- 5301,5307 ----
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("KB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
*************** convert_to_base_unit(int64 value, const
*** 5322,5328 ****
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			strcmp(unit, table[i].unit) == 0)
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
--- 5322,5331 ----
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			(strcmp(unit, table[i].unit) == 0 ||
! 			 /* support pre-PG 10 SI/metric syntax */
! 			 (strcmp(unit, "kB") == 0 &&
! 			  strcmp(table[i].unit, "KB") == 0)))
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
*************** convert_to_base_unit(int64 value, const
*** 5338,5344 ****
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025 kB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
--- 5341,5347 ----
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025 KB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
*************** GetConfigOptionByNum(int varnum, const c
*** 7999,8012 ****
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "kB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
--- 8002,8015 ----
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "KB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
new file mode 100644
index 6d0666c..92e4264
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***************
*** 24,30 ****
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
--- 24,30 ----
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  KB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
***************
*** 110,129 ****
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128kB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800kB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64kB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100kB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
--- 110,129 ----
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128KB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800KB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64KB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100KB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
***************
*** 135,141 ****
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in kB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
--- 135,141 ----
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in KB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
***************
*** 157,163 ****
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512kB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
--- 157,163 ----
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512KB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
***************
*** 193,199 ****
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32kB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
--- 193,199 ----
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32KB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
***************
*** 208,214 ****
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256kB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
--- 208,214 ----
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256KB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
new file mode 100644
index 73cb7ee..dc102fa
*** a/src/bin/initdb/initdb.c
--- b/src/bin/initdb/initdb.c
*************** test_config_settings(void)
*** 1180,1186 ****
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dkB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
--- 1180,1186 ----
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dKB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
*************** setup_config(void)
*** 1214,1220 ****
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dkB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
--- 1214,1220 ----
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dKB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index ec69682..d41e330
*** a/src/bin/pg_basebackup/pg_basebackup.c
--- b/src/bin/pg_basebackup/pg_basebackup.c
*************** usage(void)
*** 236,242 ****
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in kB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
--- 236,242 ----
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in KB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
*************** progress_report(int tablespacenum, const
*** 601,608 ****
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s kB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
--- 601,608 ----
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s KB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
*************** progress_report(int tablespacenum, const
*** 613,620 ****
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s kB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
--- 613,620 ----
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s KB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
*************** progress_report(int tablespacenum, const
*** 629,636 ****
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s kB (%d%%), %d/%d tablespace",
! 						 "%*s/%s kB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
--- 629,636 ----
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s KB (%d%%), %d/%d tablespace",
! 						 "%*s/%s KB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
diff --git a/src/bin/pg_rewind/logging.c b/src/bin/pg_rewind/logging.c
new file mode 100644
index a232abb..6b728d9
*** a/src/bin/pg_rewind/logging.c
--- b/src/bin/pg_rewind/logging.c
*************** progress_report(bool force)
*** 137,143 ****
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s kB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
--- 137,143 ----
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s KB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c
new file mode 100644
index c842762..5fa1a45
*** a/src/bin/pg_test_fsync/pg_test_fsync.c
--- b/src/bin/pg_test_fsync/pg_test_fsync.c
*************** test_sync(int writes_per_op)
*** 239,247 ****
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dkB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
--- 239,247 ----
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dKB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
*************** static void
*** 395,408 ****
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16kB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16kB open_sync write", 16);
! 	test_open_sync(" 2 *  8kB open_sync writes", 8);
! 	test_open_sync(" 4 *  4kB open_sync writes", 4);
! 	test_open_sync(" 8 *  2kB open_sync writes", 2);
! 	test_open_sync("16 *  1kB open_sync writes", 1);
  }
  
  /*
--- 395,408 ----
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16KB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16KB open_sync write", 16);
! 	test_open_sync(" 2 *  8KB open_sync writes", 8);
! 	test_open_sync(" 4 *  4KB open_sync writes", 4);
! 	test_open_sync(" 8 *  2KB open_sync writes", 2);
! 	test_open_sync("16 *  1KB open_sync writes", 1);
  }
  
  /*
*************** test_non_sync(void)
*** 521,527 ****
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
--- 521,527 ----
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h
new file mode 100644
index 6d0e12b..425768e
*** a/src/include/executor/hashjoin.h
--- b/src/include/executor/hashjoin.h
*************** typedef struct HashSkewBucket
*** 104,110 ****
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32kB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
--- 104,110 ----
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32KB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
diff --git a/src/test/regress/expected/dbsize.out b/src/test/regress/expected/dbsize.out
new file mode 100644
index 20d8cb5..5c0a796
*** a/src/test/regress/expected/dbsize.out
--- b/src/test/regress/expected/dbsize.out
*************** SELECT size, pg_size_pretty(size), pg_si
*** 6,12 ****
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977 kB         | -977 kB
         1000000000 | 954 MB         | -954 MB
      1000000000000 | 931 GB         | -931 GB
   1000000000000000 | 909 TB         | -909 TB
--- 6,12 ----
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977 KB         | -977 KB
         1000000000 | 954 MB         | -954 MB
      1000000000000 | 931 GB         | -931 GB
   1000000000000000 | 909 TB         | -909 TB
*************** SELECT size, pg_size_pretty(size), pg_si
*** 23,48 ****
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977 kB         | -977 kB
           1000000000 | 954 MB         | -954 MB
        1000000000000 | 931 GB         | -931 GB
     1000000000000000 | 909 TB         | -909 TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977 kB         | -977 kB
         1000000000.5 | 954 MB         | -954 MB
      1000000000000.5 | 931 GB         | -931 GB
   1000000000000000.5 | 909 TB         | -909 TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1kB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
--- 23,48 ----
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977 KB         | -977 KB
           1000000000 | 954 MB         | -954 MB
        1000000000000 | 931 GB         | -931 GB
     1000000000000000 | 909 TB         | -909 TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977 KB         | -977 KB
         1000000000.5 | 954 MB         | -954 MB
      1000000000000.5 | 931 GB         | -931 GB
   1000000000000000.5 | 909 TB         | -909 TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1KB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
*************** SELECT size, pg_size_bytes(size) FROM
*** 105,119 ****
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
--- 105,119 ----
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
*************** ERROR:  invalid size: "1e100000000000000
*** 123,129 ****
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
--- 123,129 ----
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
*************** SELECT pg_size_bytes('-. kb');
*** 138,146 ****
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ kB');
! ERROR:  invalid size: "+912+ kB"
! DETAIL:  Invalid size unit: "+ kB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 kB');
! ERROR:  invalid size: "++123 kB"
--- 138,146 ----
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ KB');
! ERROR:  invalid size: "+912+ KB"
! DETAIL:  Invalid size unit: "+ KB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 KB');
! ERROR:  invalid size: "++123 KB"
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
new file mode 100644
index d9bbae0..a04a117
*** a/src/test/regress/expected/join.out
--- b/src/test/regress/expected/join.out
*************** reset enable_nestloop;
*** 2365,2371 ****
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64kB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
--- 2365,2371 ----
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64KB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out
new file mode 100644
index efcdc41..6679203
*** a/src/test/regress/expected/json.out
--- b/src/test/regress/expected/json.out
*************** LINE 1: SELECT '{"abc":1,3}'::json;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out
new file mode 100644
index a6d25de..289dae5
*** a/src/test/regress/expected/jsonb.out
--- b/src/test/regress/expected/jsonb.out
*************** LINE 1: SELECT '{"abc":1,3}'::jsonb;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
new file mode 100644
index f06cfa4..5bc8b58
*** a/src/test/regress/expected/rangefuncs.out
--- b/src/test/regress/expected/rangefuncs.out
*************** create function foo1(n integer, out a te
*** 1772,1778 ****
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
--- 1772,1778 ----
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
new file mode 100644
index 574f5b8..4271ffa
*** a/src/test/regress/pg_regress.c
--- b/src/test/regress/pg_regress.c
*************** regression_main(int argc, char *argv[],
*** 2248,2254 ****
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128kB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
--- 2248,2254 ----
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128KB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
diff --git a/src/test/regress/sql/dbsize.sql b/src/test/regress/sql/dbsize.sql
new file mode 100644
index d10a4d7..d34d71d
*** a/src/test/regress/sql/dbsize.sql
--- b/src/test/regress/sql/dbsize.sql
*************** SELECT size, pg_size_pretty(size), pg_si
*** 12,18 ****
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
--- 12,18 ----
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
*************** SELECT pg_size_bytes('-.kb');
*** 47,51 ****
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ kB');
! SELECT pg_size_bytes('++123 kB');
--- 47,51 ----
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ KB');
! SELECT pg_size_bytes('++123 KB');
diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql
new file mode 100644
index 97bccec..2e680a6
*** a/src/test/regress/sql/join.sql
--- b/src/test/regress/sql/join.sql
*************** reset enable_nestloop;
*** 484,490 ****
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64kB';
  set enable_mergejoin to off;
  
  explain (costs off)
--- 484,490 ----
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64KB';
  set enable_mergejoin to off;
  
  explain (costs off)
diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql
new file mode 100644
index 603288b..201689e
*** a/src/test/regress/sql/json.sql
--- b/src/test/regress/sql/json.sql
*************** SELECT '{"abc":1:2}'::json;		-- ERROR, c
*** 42,48 ****
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql
new file mode 100644
index b84bd70..090478d
*** a/src/test/regress/sql/jsonb.sql
--- b/src/test/regress/sql/jsonb.sql
*************** SELECT '{"abc":1:2}'::jsonb;		-- ERROR,
*** 42,48 ****
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/rangefuncs.sql b/src/test/regress/sql/rangefuncs.sql
new file mode 100644
index c8edc55..3a08f66
*** a/src/test/regress/sql/rangefuncs.sql
--- b/src/test/regress/sql/rangefuncs.sql
*************** create function foo1(n integer, out a te
*** 484,490 ****
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
--- 484,490 ----
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
new file mode 100644
index f046687..04f9560
*** a/src/tools/msvc/config_default.pl
--- b/src/tools/msvc/config_default.pl
*************** our $config = {
*** 10,17 ****
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8kB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8kB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
--- 10,17 ----
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8KB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8KB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
#10Joshua D. Drake
jd@commandprompt.com
In reply to: Bruce Momjian (#9)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 07/30/2016 11:16 AM, Bruce Momjian wrote:

On Sat, Jul 30, 2016 at 10:35:58AM -0400, Tom Lane wrote:

Greg Stark <stark@mit.edu> writes:

I agree that a GUC and new functions are overkill --- we should just
decide on the format we want to output and what to support for input.

As logical as the IEC format appears, I just don't think the Ki/Mi/Gi
prefixes are used widely enough for us to use it --- I think it will
cause too many problem reports:

https://en.wikipedia.org/wiki/Binary_prefix

I have developed two possible patches for PG 10 --- the first one merely
allows "KB" to be used in addition to the existing "kB", and documents
this as an option.

The second patch does what Tom suggests above by outputting only "KB",
and it supports "kB" for backward compatibility. What it doesn't do is
to allow arbitrary case, which I think would be a step backward. The
second patch actually does match the JEDEC standard, except for allowing
"kB".

I also just applied a doc patch that increases case and spacing
consistency in the use of kB/MB/GB/TB.

+1

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#11Christoph Berg
myon@debian.org
In reply to: Bruce Momjian (#9)
pg_size_pretty, SHOW, and spaces

Re: Bruce Momjian 2016-07-30 <20160730181643.GD22405@momjian.us>

I also just applied a doc patch that increases case and spacing
consistency in the use of kB/MB/GB/TB.

Hi,

PostgreSQL uses spaces inconsistently, though. pg_size_pretty uses spaces:

# select pg_size_pretty((2^20)::bigint);
pg_size_pretty
────────────────
1024 kB

SHOW does not:

# show work_mem;
work_mem
──────────
1MB

The SHOW output is formatted by _ShowOption() using 'INT64_FORMAT "%s"',
via convert_from_base_unit(). The latter has a comment attached...
/*
* Convert a value in some base unit to a human-friendly unit. The output
* unit is chosen so that it's the greatest unit that can represent the value
* without loss. For example, if the base unit is GUC_UNIT_KB, 1024 is
* converted to 1 MB, but 1025 is represented as 1025 kB.
*/
... where the spaces are present again.
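The greatest-unit-without-loss rule that comment describes can be sketched in a few lines. This is an illustrative sketch only, not the actual C implementation behind convert_from_base_unit(); the function name below is hypothetical:

```python
# Sketch of the rule in the convert_from_base_unit() comment: starting
# from a value whose base unit is kB, climb to the next larger unit only
# while the value divides evenly by 1024, so no precision is lost.

UNITS = ["kB", "MB", "GB", "TB"]

def pretty_from_kb(value_kb):
    unit = 0
    while value_kb % 1024 == 0 and unit < len(UNITS) - 1:
        value_kb //= 1024
        unit += 1
    return "%d %s" % (value_kb, UNITS[unit])  # space between value and unit
```

Under this rule 1024 becomes "1 MB" while 1025 stays "1025 kB", matching the examples in the comment; SHOW's actual output then drops the space.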

General typesetting standard seems to be "1 MB", i.e. to include a
space between value and unit. (This would also be my preference.)

Opinions? (I'd opt to insert spaces in the docs now, and then see if
inserting a space in the SHOW output is acceptable for 10.0.)

Christoph


#12Bruce Momjian
bruce@momjian.us
In reply to: Christoph Berg (#11)
1 attachment(s)
Re: pg_size_pretty, SHOW, and spaces

On Mon, Aug 1, 2016 at 01:35:53PM +0200, Christoph Berg wrote:

Re: Bruce Momjian 2016-07-30 <20160730181643.GD22405@momjian.us>

I also just applied a doc patch that increases case and spacing
consistency in the use of kB/MB/GB/TB.

Hi,

PostgreSQL uses the spaces inconsistently, though. pg_size_pretty uses spaces:

# select pg_size_pretty((2^20)::bigint);
pg_size_pretty
────────────────
1024 kB

SHOW does not:

# show work_mem;
work_mem
──────────
1MB

Yes, that is inconsistent. I have updated my attached patch to remove
spaces between the number and the units --- see below.

The SHOW output is formatted by _ShowOption() using 'INT64_FORMAT "%s"',
via convert_from_base_unit(). The latter has a comment attached...
/*
* Convert a value in some base unit to a human-friendly unit. The output
* unit is chosen so that it's the greatest unit that can represent the value
* without loss. For example, if the base unit is GUC_UNIT_KB, 1024 is
* converted to 1 MB, but 1025 is represented as 1025 kB.
*/
... where the spaces are present again.

General typesetting standard seems to be "1 MB", i.e. to include a
space between value and unit. (This would also be my preference.)

Opinions? (I'd opt to insert spaces in the docs now, and then see if
inserting a space in the SHOW output is acceptable for 10.0.)

I went through the docs a few days ago and committed a change to remove
spaces between the number and units in the few cases that had them ---
the majority didn't have spaces.

Looking at the Wikipedia article I posted earlier, that also doesn't use
spaces:

https://en.wikipedia.org/wiki/Binary_prefix

I think the only argument _for_ spaces is the output of pg_size_pretty()
now looks odd, e.g.:

10 | 10 bytes | -10 bytes
1000 | 1000 bytes | -1000 bytes
1000000 | 977KB | -977KB
1000000000 | 954MB | -954MB
1000000000000 | 931GB | -931GB
1000000000000000 | 909TB | -909TB
^^^^^ ^^^^^

The issue is that we output "10 bytes", not "10bytes", but for units we
use "977KB". That seems inconsistent, but it is the normal policy
people use. I think this is because "977KB" is really "977K bytes", but
we just append the "B" after the "K" for brevity.
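The rounding behind that table can be mimicked in a short sketch (illustrative only; the real pg_size_pretty() is C code in the backend, and the names below are hypothetical). Sizes under 10*1024 bytes print as bytes; otherwise each step divides by 1024 with rounding and stops once the value drops below that threshold:

```python
# Hedged sketch of pg_size_pretty()-style rounding, reproducing the table
# above with the patched no-space unit suffixes.  Each pass divides by
# 1024, rounding to the nearest whole unit, until the value is small
# enough to print.

def size_pretty(size_bytes):
    if abs(size_bytes) < 10 * 1024:
        return "%d bytes" % size_bytes
    size = size_bytes
    for unit in ("KB", "MB", "GB", "TB"):
        size = (size + 512) // 1024  # round to the nearest whole unit
        if abs(size) < 10 * 1024 or unit == "TB":
            return "%d%s" % (size, unit)
```

For example, 1000000 bytes is 976.56 kB, which rounds to "977KB" as in the table.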

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

Attachments:

kilo2.difftext/x-diff; charset=us-asciiDownload
diff --git a/configure b/configure
new file mode 100755
index b49cc11..8466e5a
*** a/configure
--- b/configure
*************** Optional Packages:
*** 1502,1511 ****
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in kB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in kB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
--- 1502,1511 ----
    --with-libs=DIRS        alternative spelling of --with-libraries
    --with-pgport=PORTNUM   set default port number [5432]
    --with-blocksize=BLOCKSIZE
!                           set table block size in KB [8]
    --with-segsize=SEGSIZE  set table segment size in GB [1]
    --with-wal-blocksize=BLOCKSIZE
!                           set WAL block size in KB [8]
    --with-wal-segsize=SEGSIZE
                            set WAL segment size in MB [16]
    --with-CC=CMD           set compiler (deprecated)
*************** case ${blocksize} in
*** 3550,3557 ****
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}kB" >&5
! $as_echo "${blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3550,3557 ----
   32) BLCKSZ=32768;;
    *) as_fn_error $? "Invalid block size. Allowed values are 1,2,4,8,16,32." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${blocksize}KB" >&5
! $as_echo "${blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
*************** case ${wal_blocksize} in
*** 3638,3645 ****
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}kB" >&5
! $as_echo "${wal_blocksize}kB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
--- 3638,3645 ----
   64) XLOG_BLCKSZ=65536;;
    *) as_fn_error $? "Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64." "$LINENO" 5
  esac
! { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_blocksize}KB" >&5
! $as_echo "${wal_blocksize}KB" >&6; }
  
  
  cat >>confdefs.h <<_ACEOF
diff --git a/configure.in b/configure.in
new file mode 100644
index 5da4f74..2ed6298
*** a/configure.in
--- b/configure.in
*************** AC_SUBST(enable_tap_tests)
*** 250,256 ****
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in kB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
--- 250,256 ----
  # Block size
  #
  AC_MSG_CHECKING([for block size])
! PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in KB [8]],
               [blocksize=$withval],
               [blocksize=8])
  case ${blocksize} in
*************** case ${blocksize} in
*** 262,268 ****
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}kB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
--- 262,268 ----
   32) BLCKSZ=32768;;
    *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
  esac
! AC_MSG_RESULT([${blocksize}KB])
  
  AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
   Size of a disk block --- this also limits the size of a tuple.  You
*************** AC_DEFINE_UNQUOTED([RELSEG_SIZE], ${RELS
*** 314,320 ****
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in kB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
--- 314,320 ----
  # WAL block size
  #
  AC_MSG_CHECKING([for WAL block size])
! PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in KB [8]],
               [wal_blocksize=$withval],
               [wal_blocksize=8])
  case ${wal_blocksize} in
*************** case ${wal_blocksize} in
*** 327,333 ****
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}kB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
--- 327,333 ----
   64) XLOG_BLCKSZ=65536;;
    *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
  esac
! AC_MSG_RESULT([${wal_blocksize}KB])
  
  AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
   Size of a WAL file block.  This need have no particular relation to BLCKSZ.
diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml
new file mode 100644
index 38e6f50..34d87b3
*** a/doc/src/sgml/auto-explain.sgml
--- b/doc/src/sgml/auto-explain.sgml
*************** LOG:  duration: 3.651 ms  plan:
*** 263,269 ****
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4kB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
--- 263,269 ----
            Hash Cond: (pg_class.oid = pg_index.indrelid)
            ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
            ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
!                 Buckets: 1024  Batches: 1  Memory Usage: 4KB
                  ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                        Filter: indisunique
  ]]></screen>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index cbb333f..a0678b2
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***************
*** 4021,4027 ****
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2kB).
    </para>
  
    <para>
--- 4021,4027 ----
     segments or <quote>pages</> small enough to be conveniently stored as rows
     in <structname>pg_largeobject</structname>.
     The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently
!    <literal>BLCKSZ/4</>, or typically 2KB).
    </para>
  
    <para>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index b9581d9..23d666a
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***************
*** 81,87 ****
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>kB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
--- 81,87 ----
         <itemizedlist>
          <listitem>
           <para>
!           Valid memory units are <literal>KB</literal> (kilobytes),
            <literal>MB</literal> (megabytes), <literal>GB</literal>
            (gigabytes), and <literal>TB</literal> (terabytes).
            The multiplier for memory units is 1024, not 1000.
*************** include_dir 'conf.d'
*** 1903,1909 ****
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512kB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
--- 1903,1909 ----
           cache, where performance might degrade.  This setting may have no
           effect on some platforms.  The valid range is between
           <literal>0</literal>, which disables controlled writeback, and
!          <literal>2MB</literal>.  The default is <literal>512KB</> on Linux,
           <literal>0</> elsewhere.  (Non-default values of
           <symbol>BLCKSZ</symbol> change the default and maximum.)
           This parameter can only be set in the <filename>postgresql.conf</>
*************** include_dir 'conf.d'
*** 2481,2491 ****
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64kB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32kB</literal> will be
!         treated as <literal>32kB</literal>.
          This parameter can only be set at server start.
         </para>
  
--- 2481,2491 ----
          The amount of shared memory used for WAL data that has not yet been
          written to disk.  The default setting of -1 selects a size equal to
          1/32nd (about 3%) of <xref linkend="guc-shared-buffers">, but not less
!         than <literal>64KB</literal> nor more than the size of one WAL
          segment, typically <literal>16MB</literal>.  This value can be set
          manually if the automatic choice is too large or too small,
!         but any positive value less than <literal>32KB</literal> will be
!         treated as <literal>32KB</literal>.
          This parameter can only be set at server start.
         </para>
  
*************** include_dir 'conf.d'
*** 2660,2666 ****
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256kB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
--- 2660,2666 ----
          than the OS's page cache, where performance might degrade.  This
          setting may have no effect on some platforms.  The valid range is
          between <literal>0</literal>, which disables controlled writeback,
!         and <literal>2MB</literal>.  The default is <literal>256KB</> on
          Linux, <literal>0</> elsewhere.  (Non-default values of
          <symbol>BLCKSZ</symbol> change the default and maximum.)
          This parameter can only be set in the <filename>postgresql.conf</>
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
new file mode 100644
index a30e25c..b917bdd
*** a/doc/src/sgml/ecpg.sgml
--- b/doc/src/sgml/ecpg.sgml
*************** if (*(int2 *)sqldata->sqlvar[i].sqlind !
*** 8165,8171 ****
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32kB.
         </para>
        </listitem>
       </varlistentry>
--- 8165,8171 ----
       <term><literal>sqlilongdata</></term>
        <listitem>
         <para>
!         It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32KB.
         </para>
        </listitem>
       </varlistentry>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 971e642..64f347e
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*************** postgres=# SELECT * FROM pg_xlogfile_nam
*** 18788,18809 ****
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, kB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, kB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units kB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
--- 18788,18809 ----
  
     <para>
      <function>pg_size_pretty</> can be used to format the result of one of
!     the other functions in a human-readable way, using bytes, KB, MB, GB or TB
      as appropriate.
     </para>
  
     <para>
      <function>pg_size_bytes</> can be used to get the size in bytes from a
!     string in human-readable format. The input may have units of bytes, KB,
      MB, GB or TB, and is parsed case-insensitively. If no units are specified,
      bytes are assumed.
     </para>
  
     <note>
      <para>
!      The units KB, MB, GB and TB used by the functions
       <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined
!      using powers of 2 rather than powers of 10, so 1KB is 1024 bytes, 1MB is
       1024<superscript>2</> = 1048576 bytes, and so on.
      </para>
     </note>
diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml
new file mode 100644
index fccfd32..29be58b
*** a/doc/src/sgml/ltree.sgml
--- b/doc/src/sgml/ltree.sgml
***************
*** 31,37 ****
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65kB, but keeping it under 2kB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org"></ulink>) is about 240 bytes.
--- 31,37 ----
     A <firstterm>label path</firstterm> is a sequence of zero or more
     labels separated by dots, for example <literal>L1.L2.L3</>, representing
     a path from the root of a hierarchical tree to a particular node.  The
!    length of a label path must be less than 65KB, but keeping it under 2KB is
     preferable.  In practice this is not a major limitation; for example,
     the longest label path in the DMOZ catalog (<ulink
     url="http://www.dmoz.org"></ulink>) is about 240 bytes.
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
new file mode 100644
index 7bcbfa7..a30b276
*** a/doc/src/sgml/perform.sgml
--- b/doc/src/sgml/perform.sgml
*************** WHERE t1.unique1 &lt; 100 AND t1.unique2
*** 603,614 ****
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77kB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28kB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
--- 603,614 ----
  --------------------------------------------------------------------------------------------------------------------------------------------
   Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
     Sort Key: t1.fivethous
!    Sort Method: quicksort  Memory: 77KB
     -&gt;  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
           Hash Cond: (t2.unique2 = t1.unique2)
           -&gt;  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
           -&gt;  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
!                Buckets: 1024  Batches: 1  Memory Usage: 28KB
                 -&gt;  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                       Recheck Cond: (unique1 &lt; 100)
                       -&gt;  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
new file mode 100644
index 8e701aa..3bab944
*** a/doc/src/sgml/protocol.sgml
--- b/doc/src/sgml/protocol.sgml
*************** The commands accepted in walsender mode
*** 1973,1979 ****
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32kB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
--- 1973,1979 ----
            Limit (throttle) the maximum amount of data transferred from server
            to client per unit of time.  The expected unit is kilobytes per second.
            If this option is specified, the value must either be equal to zero
!           or it must fall within the range from 32KB through 1GB (inclusive).
            If zero is passed or the option is not specified, no restriction is
            imposed on the transfer.
           </para>
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
new file mode 100644
index ca1b767..e7c5c6e
*** a/doc/src/sgml/rules.sgml
--- b/doc/src/sgml/rules.sgml
*************** SELECT word FROM words ORDER BY word <->
*** 1079,1085 ****
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25kB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
--- 1079,1085 ----
   Limit  (cost=11583.61..11583.64 rows=10 width=32) (actual time=1431.591..1431.594 rows=10 loops=1)
     -&gt;  Sort  (cost=11583.61..11804.76 rows=88459 width=32) (actual time=1431.589..1431.591 rows=10 loops=1)
           Sort Key: ((word &lt;-&gt; 'caterpiler'::text))
!          Sort Method: top-N heapsort  Memory: 25KB
           -&gt;  Foreign Scan on words  (cost=0.00..9672.05 rows=88459 width=32) (actual time=0.057..1286.455 rows=479829 loops=1)
                 Foreign File: /usr/share/dict/words
                 Foreign File Size: 4953699
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
new file mode 100644
index 4c5d748..02644ce
*** a/doc/src/sgml/runtime.sgml
--- b/doc/src/sgml/runtime.sgml
*************** psql: could not connect to server: No su
*** 659,665 ****
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1kB (more if running many copies of the server)</entry>
        </row>
  
        <row>
--- 659,665 ----
        <row>
         <entry><varname>SHMMAX</></>
         <entry>Maximum size of shared memory segment (bytes)</>
!        <entry>at least 1KB (more if running many copies of the server)</entry>
        </row>
  
        <row>
*************** kern.sysv.shmall=1024
*** 1032,1038 ****
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4kB pages on this platform.
         </para>
  
         <para>
--- 1032,1038 ----
         </para>
  
         <para>
!         <varname>SHMALL</> is measured in 4KB pages on this platform.
         </para>
  
         <para>
*************** sysctl -w kern.sysv.shmall
*** 1075,1081 ****
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512kB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
--- 1075,1081 ----
        </term>
        <listitem>
         <para>
!         In the default configuration, only 512KB of shared memory per
          segment is allowed. To increase the setting, first change to the
          directory <filename>/etc/conf/cf.d</>. To display the current value of
          <varname>SHMMAX</>, run:
*************** project.max-msg-ids=(priv,4096,deny)
*** 1180,1186 ****
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512kB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
--- 1180,1186 ----
        <listitem>
         <para>
          On <productname>UnixWare</> 7, the maximum size for shared
!         memory segments is 512KB in the default configuration.
          To display the current value of <varname>SHMMAX</>, run:
  <programlisting>
  /etc/conf/bin/idtune -g SHMMAX
diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
new file mode 100644
index f40c790..6a22054
*** a/doc/src/sgml/spgist.sgml
--- b/doc/src/sgml/spgist.sgml
*************** typedef struct spgLeafConsistentOut
*** 755,761 ****
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8kB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
--- 755,761 ----
  
    <para>
     Individual leaf tuples and inner tuples must fit on a single index page
!    (8KB by default).  Therefore, when indexing values of variable-length
     data types, long values can only be supported by methods such as radix
     trees, in which each level of the tree includes a prefix that is short
     enough to fit on a page, and the final leaf level includes a suffix also
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
new file mode 100644
index 2d82953..aff3dd8
*** a/doc/src/sgml/storage.sgml
--- b/doc/src/sgml/storage.sgml
*************** Oversized-Attribute Storage Technique).
*** 303,309 ****
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8kB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
--- 303,309 ----
  
  <para>
  <productname>PostgreSQL</productname> uses a fixed page size (commonly
! 8KB), and does not allow tuples to span multiple pages.  Therefore, it is
  not possible to store very large field values directly.  To overcome
  this limitation, large field values are compressed and/or broken up into
  multiple physical rows.  This happens transparently to the user, with only
*************** bytes regardless of the actual size of t
*** 420,429 ****
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2kB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2kB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
--- 420,429 ----
  <para>
  The <acronym>TOAST</> management code is triggered only
  when a row value to be stored in a table is wider than
! <symbol>TOAST_TUPLE_THRESHOLD</> bytes (normally 2KB).
  The <acronym>TOAST</> code will compress and/or move
  field values out-of-line until the row value is shorter than
! <symbol>TOAST_TUPLE_TARGET</> bytes (also normally 2KB)
  or no more gains can be had.  During an UPDATE
  operation, values of unchanged fields are normally preserved as-is; so an
  UPDATE of a row with out-of-line values incurs no <acronym>TOAST</> costs if
*************** containing typical HTML pages and their
*** 491,497 ****
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7kB to fit.
  </para>
  
  </sect2>
--- 491,497 ----
  raw data size including the <acronym>TOAST</> table, and that the main table
  contained only about 10% of the entire data (the URLs and some small HTML
  pages). There was no run time difference compared to an un-<acronym>TOAST</>ed
! comparison table, in which all the HTML pages were cut down to 7KB to fit.
  </para>
  
  </sect2>
*************** an item is a row; in an index, an item i
*** 703,709 ****
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8kB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
--- 703,709 ----
  
  <para>
  Every table and index is stored as an array of <firstterm>pages</> of a
! fixed size (usually 8KB, although a different page size can be selected
  when compiling the server).  In a table, all the pages are logically
  equivalent, so a particular item (row) can be stored in any page.  In
  indexes, the first page is generally reserved as a <firstterm>metapage</>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
new file mode 100644
index 2089040..1c2764b
*** a/doc/src/sgml/wal.sgml
--- b/doc/src/sgml/wal.sgml
***************
*** 176,182 ****
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8kB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
--- 176,182 ----
     this page imaging by turning off the <xref
     linkend="guc-full-page-writes"> parameter. Battery-Backed Unit
     (BBU) disk controllers do not prevent partial page writes unless
!    they guarantee that data is written to the BBU as full (8KB) pages.
    </para>
    <para>
     <productname>PostgreSQL</> also protects against some kinds of data corruption
***************
*** 664,670 ****
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8kB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
--- 664,670 ----
     linkend="pgtestfsync"> program can be used to measure the average time
     in microseconds that a single WAL flush operation takes.  A value of
     half of the average time the program reports it takes to flush after a
!    single 8KB write operation is often the most effective setting for
     <varname>commit_delay</varname>, so this value is recommended as the
     starting point to use when optimizing for a particular workload.  While
     tuning <varname>commit_delay</varname> is particularly useful when the
***************
*** 738,744 ****
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8kB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
--- 738,744 ----
     segment files, normally each 16MB in size (but the size can be changed
     by altering the <option>--with-wal-segsize</> configure option when
     building the server).  Each segment is divided into pages, normally
!    8KB each (this size can be changed via the <option>--with-wal-blocksize</>
     configure option).  The log record headers are described in
     <filename>access/xlogrecord.h</filename>; the record content is dependent
     on the type of event that is being logged.  Segment files are given
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
new file mode 100644
index c2e4fa3..027188f
*** a/src/backend/access/transam/multixact.c
--- b/src/backend/access/transam/multixact.c
***************
*** 119,125 ****
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8kB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
--- 119,125 ----
   * additional flag bits for each TransactionId.  To do this without getting
   * into alignment issues, we store four bytes of flags, and then the
   * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
!  * are stored as a whole in pages.  Thus, with 8KB BLCKSZ, we keep 409 groups
   * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
   * performance) trumps space efficiency here.
   *
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
new file mode 100644
index f13f9c1..be0d277
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
*************** LogCheckpointEnd(bool restartpoint)
*** 8063,8069 ****
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d kB, estimate=%d kB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
--- 8063,8069 ----
  		 "%d transaction log file(s) added, %d removed, %d recycled; "
  		 "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
  		 "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
! 		 "distance=%d KB, estimate=%d KB",
  		 restartpoint ? "restartpoint" : "checkpoint",
  		 CheckpointStats.ckpt_bufs_written,
  		 (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
new file mode 100644
index dbd27e5..943823f
*** a/src/backend/commands/explain.c
--- b/src/backend/commands/explain.c
*************** show_sort_info(SortState *sortstate, Exp
*** 2163,2169 ****
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldkB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
--- 2163,2169 ----
  		if (es->format == EXPLAIN_FORMAT_TEXT)
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
! 			appendStringInfo(es->str, "Sort Method: %s  %s: %ldKB\n",
  							 sortMethod, spaceType, spaceUsed);
  		}
  		else
*************** show_hash_info(HashState *hashstate, Exp
*** 2205,2211 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
--- 2205,2211 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 							 "Buckets: %d (originally %d)  Batches: %d (originally %d)  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets,
  							 hashtable->nbuckets_original,
  							 hashtable->nbatch,
*************** show_hash_info(HashState *hashstate, Exp
*** 2216,2222 ****
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldkB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
--- 2216,2222 ----
  		{
  			appendStringInfoSpaces(es->str, es->indent * 2);
  			appendStringInfo(es->str,
! 						   "Buckets: %d  Batches: %d  Memory Usage: %ldKB\n",
  							 hashtable->nbuckets, hashtable->nbatch,
  							 spacePeakKb);
  		}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
new file mode 100644
index 7d8fc3e..ca9458b
*** a/src/backend/libpq/auth.c
--- b/src/backend/libpq/auth.c
*************** static int	CheckRADIUSAuth(Port *port);
*** 191,197 ****
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several kB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
--- 191,197 ----
   * Attribute Certificate (PAC), which contains the user's Windows permissions
   * (group memberships etc.). The PAC is copied into all tickets obtained on
   * the basis of this TGT (even those issued by Unix realms which the Windows
!  * realm trusts), and can be several KB in size. The maximum token size
   * accepted by Windows systems is determined by the MaxAuthToken Windows
   * registry setting. Microsoft recommends that it is not set higher than
   * 65535 bytes, so that seems like a reasonable limit for us as well.
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
new file mode 100644
index ba42753..6762a6b
*** a/src/backend/libpq/pqcomm.c
--- b/src/backend/libpq/pqcomm.c
*************** StreamConnection(pgsocket server_fd, Por
*** 740,749 ****
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32kB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8kB in earlier Windows
! 		 * versions, but was raised to 64kB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
--- 740,749 ----
  		 * very large message needs to be sent, but we won't attempt to
  		 * enlarge the OS buffer if that happens, so somewhat arbitrarily
  		 * ensure that the OS buffer is at least PQ_SEND_BUFFER_SIZE * 4.
! 		 * (That's 32KB with the current default).
  		 *
! 		 * The default OS buffer size used to be 8KB in earlier Windows
! 		 * versions, but was raised to 64KB in Windows 2012.  So it shouldn't
  		 * be necessary to change it in later versions anymore.  Changing it
  		 * unnecessarily can even reduce performance, because setting
  		 * SO_SNDBUF in the application disables the "dynamic send buffering"
diff --git a/src/backend/main/main.c b/src/backend/main/main.c
new file mode 100644
index c018c90..3338843
*** a/src/backend/main/main.c
--- b/src/backend/main/main.c
*************** help(const char *progname)
*** 345,351 ****
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in kB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
--- 345,351 ----
  	printf(_("  -o OPTIONS         pass \"OPTIONS\" to each server process (obsolete)\n"));
  	printf(_("  -p PORT            port number to listen on\n"));
  	printf(_("  -s                 show statistics after each query\n"));
! 	printf(_("  -S WORK-MEM        set amount of memory for sorts (in KB)\n"));
  	printf(_("  -V, --version      output version information, then exit\n"));
  	printf(_("  --NAME=VALUE       set run-time parameter\n"));
  	printf(_("  --describe-config  describe configuration parameters, then exit\n"));
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
new file mode 100644
index a0dba19..b560164
*** a/src/backend/replication/walsender.c
--- b/src/backend/replication/walsender.c
***************
*** 87,93 ****
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128kB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
--- 87,93 ----
   * We don't have a good idea of what a good value would be; there's some
   * overhead per message in both walsender and walreceiver, but on the other
   * hand sending large batches makes walsender less responsive to signals
!  * because signals are checked only between messages.  128KB (with
   * default 8k blocks) seems like a reasonable guess for now.
   */
  #define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
new file mode 100644
index 03143f1..36480b5
*** a/src/backend/storage/file/fd.c
--- b/src/backend/storage/file/fd.c
*************** FileWrite(File file, char *buffer, int a
*** 1653,1659 ****
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dkB)",
  						temp_file_limit)));
  		}
  	}
--- 1653,1659 ----
  			if (newTotal > (uint64) temp_file_limit * (uint64) 1024)
  				ereport(ERROR,
  						(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
! 				 errmsg("temporary file size exceeds temp_file_limit (%dKB)",
  						temp_file_limit)));
  		}
  	}
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index b185c1b..10a2d24
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*************** check_stack_depth(void)
*** 3114,3120 ****
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dkB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
--- 3114,3120 ----
  		ereport(ERROR,
  				(errcode(ERRCODE_STATEMENT_TOO_COMPLEX),
  				 errmsg("stack depth limit exceeded"),
! 				 errhint("Increase the configuration parameter \"max_stack_depth\" (currently %dKB), "
  			  "after ensuring the platform's stack depth limit is adequate.",
  						 max_stack_depth)));
  	}
*************** check_max_stack_depth(int *newval, void
*** 3177,3183 ****
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldkB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
--- 3177,3183 ----
  
  	if (stack_rlimit > 0 && newval_bytes > stack_rlimit - STACK_DEPTH_SLOP)
  	{
! 		GUC_check_errdetail("\"max_stack_depth\" must not exceed %ldKB.",
  							(stack_rlimit - STACK_DEPTH_SLOP) / 1024L);
  		GUC_check_errhint("Increase the platform's stack depth limit via \"ulimit -s\" or local equivalent.");
  		return false;
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 0776f3b..a92d4e4
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
*************** pg_size_pretty(PG_FUNCTION_ARGS)
*** 542,565 ****
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT " kB",
  					 half_rounded(size));
  		else
  		{
  			size >>= 10;
  			if (Abs(size) < limit2)
! 				snprintf(buf, sizeof(buf), INT64_FORMAT " MB",
  						 half_rounded(size));
  			else
  			{
  				size >>= 10;
  				if (Abs(size) < limit2)
! 					snprintf(buf, sizeof(buf), INT64_FORMAT " GB",
  							 half_rounded(size));
  				else
  				{
  					size >>= 10;
! 					snprintf(buf, sizeof(buf), INT64_FORMAT " TB",
  							 half_rounded(size));
  				}
  			}
--- 542,565 ----
  	{
  		size >>= 9;				/* keep one extra bit for rounding */
  		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT "KB",
  					 half_rounded(size));
  		else
  		{
  			size >>= 10;
  			if (Abs(size) < limit2)
! 				snprintf(buf, sizeof(buf), INT64_FORMAT "MB",
  						 half_rounded(size));
  			else
  			{
  				size >>= 10;
  				if (Abs(size) < limit2)
! 					snprintf(buf, sizeof(buf), INT64_FORMAT "GB",
  							 half_rounded(size));
  				else
  				{
  					size >>= 10;
! 					snprintf(buf, sizeof(buf), INT64_FORMAT "TB",
  							 half_rounded(size));
  				}
  			}
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 664,670 ****
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%s kB", numeric_to_cstring(size));
  		}
  		else
  		{
--- 664,670 ----
  		if (numeric_is_less(numeric_absolute(size), limit2))
  		{
  			size = numeric_half_rounded(size);
! 			result = psprintf("%sKB", numeric_to_cstring(size));
  		}
  		else
  		{
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 673,679 ****
  			if (numeric_is_less(numeric_absolute(size), limit2))
  			{
  				size = numeric_half_rounded(size);
! 				result = psprintf("%s MB", numeric_to_cstring(size));
  			}
  			else
  			{
--- 673,679 ----
  			if (numeric_is_less(numeric_absolute(size), limit2))
  			{
  				size = numeric_half_rounded(size);
! 				result = psprintf("%sMB", numeric_to_cstring(size));
  			}
  			else
  			{
*************** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 683,696 ****
  				if (numeric_is_less(numeric_absolute(size), limit2))
  				{
  					size = numeric_half_rounded(size);
! 					result = psprintf("%s GB", numeric_to_cstring(size));
  				}
  				else
  				{
  					/* size >>= 10 */
  					size = numeric_shift_right(size, 10);
  					size = numeric_half_rounded(size);
! 					result = psprintf("%s TB", numeric_to_cstring(size));
  				}
  			}
  		}
--- 683,696 ----
  				if (numeric_is_less(numeric_absolute(size), limit2))
  				{
  					size = numeric_half_rounded(size);
! 					result = psprintf("%sGB", numeric_to_cstring(size));
  				}
  				else
  				{
  					/* size >>= 10 */
  					size = numeric_shift_right(size, 10);
  					size = numeric_half_rounded(size);
! 					result = psprintf("%sTB", numeric_to_cstring(size));
  				}
  			}
  		}
*************** pg_size_bytes(PG_FUNCTION_ARGS)
*** 830,836 ****
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"kB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
--- 830,836 ----
  					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
  					 errmsg("invalid size: \"%s\"", text_to_cstring(arg)),
  					 errdetail("Invalid size unit: \"%s\".", strptr),
! 					 errhint("Valid units are \"bytes\", \"KB\", \"MB\", \"GB\", and \"TB\".")));
  
  		if (multiplier > 1)
  		{
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
new file mode 100644
index 6ac5184..798f651
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
*************** const char *const config_type_names[] =
*** 675,681 ****
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "kB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
--- 675,681 ----
  
  typedef struct
  {
! 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "KB" or
  										 * "min" */
  	int			base_unit;		/* GUC_UNIT_XXX */
  	int			multiplier;		/* If positive, multiply the value with this
*************** typedef struct
*** 694,722 ****
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"kB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"kB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"kB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
--- 694,722 ----
  #error XLOG_SEG_SIZE must be between 1MB and 1GB
  #endif
  
! static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"KB\", \"MB\", \"GB\", and \"TB\".");
  
  static const unit_conversion memory_unit_conversion_table[] =
  {
  	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
  	{"GB", GUC_UNIT_KB, 1024 * 1024},
  	{"MB", GUC_UNIT_KB, 1024},
! 	{"KB", GUC_UNIT_KB, 1},
  
  	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
  	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
! 	{"KB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
  
  	{"TB", GUC_UNIT_XSEGS, (1024 * 1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"GB", GUC_UNIT_XSEGS, (1024 * 1024) / (XLOG_SEG_SIZE / 1024)},
  	{"MB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / (1024 * 1024))},
! 	{"KB", GUC_UNIT_XSEGS, -(XLOG_SEG_SIZE / 1024)},
  
  	{""}						/* end of table marker */
  };
*************** static struct config_int ConfigureNamesI
*** 1930,1936 ****
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100kB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
--- 1930,1936 ----
  	},
  
  	/*
! 	 * We use the hopefully-safely-small value of 100KB as the compiled-in
  	 * default for max_stack_depth.  InitializeGUCOptions will increase it if
  	 * possible, depending on the actual platform-specific stack limit.
  	 */
*************** static struct config_int ConfigureNamesI
*** 2739,2745 ****
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8 kB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
--- 2739,2745 ----
  			gettext_noop("Sets the planner's assumption about the size of the disk cache."),
  			gettext_noop("That is, the portion of the kernel's disk cache that "
  						 "will be used for PostgreSQL data files. This is measured in disk "
! 						 "pages, which are normally 8KB each."),
  			GUC_UNIT_BLOCKS,
  		},
  		&effective_cache_size,
*************** ReportGUCOption(struct config_generic *
*** 5301,5307 ****
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
--- 5301,5307 ----
  }
  
  /*
!  * Convert a value from one of the human-friendly units ("KB", "min" etc.)
   * to the given base unit.  'value' and 'unit' are the input value and unit
   * to convert from.  The converted value is stored in *base_value.
   *
*************** convert_to_base_unit(int64 value, const
*** 5322,5328 ****
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			strcmp(unit, table[i].unit) == 0)
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
--- 5322,5331 ----
  	for (i = 0; *table[i].unit; i++)
  	{
  		if (base_unit == table[i].base_unit &&
! 			(strcmp(unit, table[i].unit) == 0 ||
! 			 /* support pre-PG 10 SI/metric syntax */
! 			 (strcmp(unit, "kB") == 0 &&
! 			  strcmp(table[i].unit, "KB") == 0)))
  		{
  			if (table[i].multiplier < 0)
  				*base_value = value / (-table[i].multiplier);
*************** convert_to_base_unit(int64 value, const
*** 5338,5344 ****
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025 kB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
--- 5341,5347 ----
   * Convert a value in some base unit to a human-friendly unit.  The output
   * unit is chosen so that it's the greatest unit that can represent the value
   * without loss.  For example, if the base unit is GUC_UNIT_KB, 1024 is
!  * converted to 1 MB, but 1025 is represented as 1025KB.
   */
  static void
  convert_from_base_unit(int64 base_value, int base_unit,
*************** GetConfigOptionByNum(int varnum, const c
*** 7999,8012 ****
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "kB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dkB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
--- 8002,8015 ----
  		switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))
  		{
  			case GUC_UNIT_KB:
! 				values[2] = "KB";
  				break;
  			case GUC_UNIT_BLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_XBLOCKS:
! 				snprintf(buf, sizeof(buf), "%dKB", XLOG_BLCKSZ / 1024);
  				values[2] = buf;
  				break;
  			case GUC_UNIT_MS:
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
new file mode 100644
index 6d0666c..92e4264
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***************
*** 24,30 ****
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
--- 24,30 ----
  # "postgres -c log_connections=on".  Some parameters can be changed at run time
  # with the "SET" SQL command.
  #
! # Memory units:  KB = kilobytes        Time units:  ms  = milliseconds
  #                MB = megabytes                     s   = seconds
  #                GB = gigabytes                     min = minutes
  #                TB = terabytes                     h   = hours
***************
*** 110,129 ****
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128kB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800kB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64kB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100kB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
--- 110,129 ----
  
  # - Memory -
  
! #shared_buffers = 32MB			# min 128KB
  					# (change requires restart)
  #huge_pages = try			# on, off, or try
  					# (change requires restart)
! #temp_buffers = 8MB			# min 800KB
  #max_prepared_transactions = 0		# zero disables the feature
  					# (change requires restart)
  # Caution: it is not advisable to set max_prepared_transactions nonzero unless
  # you actively intend to use prepared transactions.
! #work_mem = 4MB				# min 64KB
  #maintenance_work_mem = 64MB		# min 1MB
  #replacement_sort_tuples = 150000	# limits use of replacement selection sort
  #autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
! #max_stack_depth = 2MB			# min 100KB
  #dynamic_shared_memory_type = posix	# the default is the first option
  					# supported by the operating system:
  					#   posix
***************
*** 135,141 ****
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in kB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
--- 135,141 ----
  # - Disk -
  
  #temp_file_limit = -1			# limits per-process temp file space
! 					# in KB, or -1 for no limit
  
  # - Kernel Resource Usage -
  
***************
*** 157,163 ****
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512kB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
--- 157,163 ----
  #bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
  #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
  #bgwriter_flush_after = 0		# 0 disables,
! 					# default is 512KB on linux, 0 otherwise
  
  # - Asynchronous Behavior -
  
***************
*** 193,199 ****
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32kB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
--- 193,199 ----
  #wal_compression = off			# enable compression of full-page writes
  #wal_log_hints = off			# also do full page writes of non-critical updates
  					# (change requires restart)
! #wal_buffers = -1			# min 32KB, -1 sets based on shared_buffers
  					# (change requires restart)
  #wal_writer_delay = 200ms		# 1-10000 milliseconds
  #wal_writer_flush_after = 1MB		# 0 disables
***************
*** 208,214 ****
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256kB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
--- 208,214 ----
  #min_wal_size = 80MB
  #checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
  #checkpoint_flush_after = 0		# 0 disables,
! 					# default is 256KB on linux, 0 otherwise
  #checkpoint_warning = 30s		# 0 disables
  
  # - Archiving -
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
new file mode 100644
index 73cb7ee..dc102fa
*** a/src/bin/initdb/initdb.c
--- b/src/bin/initdb/initdb.c
*************** test_config_settings(void)
*** 1180,1186 ****
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dkB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
--- 1180,1186 ----
  	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
  		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		printf("%dKB\n", n_buffers * (BLCKSZ / 1024));
  
  	printf(_("selecting dynamic shared memory implementation ... "));
  	fflush(stdout);
*************** setup_config(void)
*** 1214,1220 ****
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dkB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
--- 1214,1220 ----
  		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
  				 (n_buffers * (BLCKSZ / 1024)) / 1024);
  	else
! 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dKB",
  				 n_buffers * (BLCKSZ / 1024));
  	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
  
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index ec69682..d41e330
*** a/src/bin/pg_basebackup/pg_basebackup.c
--- b/src/bin/pg_basebackup/pg_basebackup.c
*************** usage(void)
*** 236,242 ****
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in kB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
--- 236,242 ----
  	printf(_("  -D, --pgdata=DIRECTORY receive base backup into directory\n"));
  	printf(_("  -F, --format=p|t       output format (plain (default), tar)\n"));
  	printf(_("  -r, --max-rate=RATE    maximum transfer rate to transfer data directory\n"
! 	  "                         (in KB/s, or use suffix \"k\" or \"M\")\n"));
  	printf(_("  -R, --write-recovery-conf\n"
  			 "                         write recovery.conf after backup\n"));
  	printf(_("  -S, --slot=SLOTNAME    replication slot to use\n"));
*************** progress_report(int tablespacenum, const
*** 601,608 ****
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s kB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
--- 601,608 ----
  			 * call)
  			 */
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (100%%), %d/%d tablespace %*s",
! 							 "%*s/%s KB (100%%), %d/%d tablespaces %*s",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str,
*************** progress_report(int tablespacenum, const
*** 613,620 ****
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s kB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s kB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
--- 613,620 ----
  			bool		truncate = (strlen(filename) > VERBOSE_FILENAME_LENGTH);
  
  			fprintf(stderr,
! 					ngettext("%*s/%s KB (%d%%), %d/%d tablespace (%s%-*.*s)",
! 							 "%*s/%s KB (%d%%), %d/%d tablespaces (%s%-*.*s)",
  							 tablespacecount),
  					(int) strlen(totalsize_str),
  					totaldone_str, totalsize_str, percent,
*************** progress_report(int tablespacenum, const
*** 629,636 ****
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s kB (%d%%), %d/%d tablespace",
! 						 "%*s/%s kB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
--- 629,636 ----
  	}
  	else
  		fprintf(stderr,
! 				ngettext("%*s/%s KB (%d%%), %d/%d tablespace",
! 						 "%*s/%s KB (%d%%), %d/%d tablespaces",
  						 tablespacecount),
  				(int) strlen(totalsize_str),
  				totaldone_str, totalsize_str, percent,
diff --git a/src/bin/pg_rewind/logging.c b/src/bin/pg_rewind/logging.c
new file mode 100644
index a232abb..6b728d9
*** a/src/bin/pg_rewind/logging.c
--- b/src/bin/pg_rewind/logging.c
*************** progress_report(bool force)
*** 137,143 ****
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s kB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
--- 137,143 ----
  	snprintf(fetch_size_str, sizeof(fetch_size_str), INT64_FORMAT,
  			 fetch_size / 1024);
  
! 	pg_log(PG_PROGRESS, "%*s/%s KB (%d%%) copied",
  		   (int) strlen(fetch_size_str), fetch_done_str, fetch_size_str,
  		   percent);
  	printf("\r");
diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c
new file mode 100644
index c842762..5fa1a45
*** a/src/bin/pg_test_fsync/pg_test_fsync.c
--- b/src/bin/pg_test_fsync/pg_test_fsync.c
*************** test_sync(int writes_per_op)
*** 239,247 ****
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dkB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
--- 239,247 ----
  	bool		fs_warning = false;
  
  	if (writes_per_op == 1)
! 		printf("\nCompare file sync methods using one %dKB write:\n", XLOG_BLCKSZ_K);
  	else
! 		printf("\nCompare file sync methods using two %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf("(in wal_sync_method preference order, except fdatasync is Linux's default)\n");
  
  	/*
*************** static void
*** 395,408 ****
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16kB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16kB open_sync write", 16);
! 	test_open_sync(" 2 *  8kB open_sync writes", 8);
! 	test_open_sync(" 4 *  4kB open_sync writes", 4);
! 	test_open_sync(" 8 *  2kB open_sync writes", 2);
! 	test_open_sync("16 *  1kB open_sync writes", 1);
  }
  
  /*
--- 395,408 ----
  test_open_syncs(void)
  {
  	printf("\nCompare open_sync with different write sizes:\n");
! 	printf("(This is designed to compare the cost of writing 16KB in different write\n"
  		   "open_sync sizes.)\n");
  
! 	test_open_sync(" 1 * 16KB open_sync write", 16);
! 	test_open_sync(" 2 *  8KB open_sync writes", 8);
! 	test_open_sync(" 4 *  4KB open_sync writes", 4);
! 	test_open_sync(" 8 *  2KB open_sync writes", 2);
! 	test_open_sync("16 *  1KB open_sync writes", 1);
  }
  
  /*
*************** test_non_sync(void)
*** 521,527 ****
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dkB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
--- 521,527 ----
  	/*
  	 * Test a simple write without fsync
  	 */
! 	printf("\nNon-sync'ed %dKB writes:\n", XLOG_BLCKSZ_K);
  	printf(LABEL_FORMAT, "write");
  	fflush(stdout);
  
diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h
new file mode 100644
index 6d0e12b..425768e
*** a/src/include/executor/hashjoin.h
--- b/src/include/executor/hashjoin.h
*************** typedef struct HashSkewBucket
*** 104,110 ****
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32kB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
--- 104,110 ----
  
  /*
   * To reduce palloc overhead, the HashJoinTuples for the current batch are
!  * packed in 32KB buffers instead of pallocing each tuple individually.
   */
  typedef struct HashMemoryChunkData
  {
diff --git a/src/test/regress/expected/dbsize.out b/src/test/regress/expected/dbsize.out
new file mode 100644
index 20d8cb5..0cb09a5
*** a/src/test/regress/expected/dbsize.out
--- b/src/test/regress/expected/dbsize.out
*************** SELECT size, pg_size_pretty(size), pg_si
*** 6,15 ****
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977 kB         | -977 kB
!        1000000000 | 954 MB         | -954 MB
!     1000000000000 | 931 GB         | -931 GB
!  1000000000000000 | 909 TB         | -909 TB
  (6 rows)
  
  SELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM
--- 6,15 ----
  ------------------+----------------+----------------
                 10 | 10 bytes       | -10 bytes
               1000 | 1000 bytes     | -1000 bytes
!           1000000 | 977KB          | -977KB
!        1000000000 | 954MB          | -954MB
!     1000000000000 | 931GB          | -931GB
!  1000000000000000 | 909TB          | -909TB
  (6 rows)
  
  SELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM
*************** SELECT size, pg_size_pretty(size), pg_si
*** 23,48 ****
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977 kB         | -977 kB
!          1000000000 | 954 MB         | -954 MB
!       1000000000000 | 931 GB         | -931 GB
!    1000000000000000 | 909 TB         | -909 TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977 kB         | -977 kB
!        1000000000.5 | 954 MB         | -954 MB
!     1000000000000.5 | 931 GB         | -931 GB
!  1000000000000000.5 | 909 TB         | -909 TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1kB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
--- 23,48 ----
  --------------------+----------------+----------------
                   10 | 10 bytes       | -10 bytes
                 1000 | 1000 bytes     | -1000 bytes
!             1000000 | 977KB          | -977KB
!          1000000000 | 954MB          | -954MB
!       1000000000000 | 931GB          | -931GB
!    1000000000000000 | 909TB          | -909TB
                 10.5 | 10.5 bytes     | -10.5 bytes
               1000.5 | 1000.5 bytes   | -1000.5 bytes
!           1000000.5 | 977KB          | -977KB
!        1000000000.5 | 954MB          | -954MB
!     1000000000000.5 | 931GB          | -931GB
!  1000000000000000.5 | 909TB          | -909TB
  (12 rows)
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
     size   |  pg_size_bytes   
  ----------+------------------
   1        |                1
   123bytes |              123
!  1KB      |             1024
   1MB      |          1048576
    1 GB    |       1073741824
   1.5 GB   |       1610612736
*************** SELECT size, pg_size_bytes(size) FROM
*** 105,119 ****
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
--- 105,119 ----
  SELECT pg_size_bytes('1 AB');
  ERROR:  invalid size: "1 AB"
  DETAIL:  Invalid size unit: "AB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A');
  ERROR:  invalid size: "1 AB A"
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('1 AB A    ');
  ERROR:  invalid size: "1 AB A    "
  DETAIL:  Invalid size unit: "AB A".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('9223372036854775807.9');
  ERROR:  bigint out of range
  SELECT pg_size_bytes('1e100');
*************** ERROR:  invalid size: "1e100000000000000
*** 123,129 ****
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
--- 123,129 ----
  SELECT pg_size_bytes('1 byte');  -- the singular "byte" is not supported
  ERROR:  invalid size: "1 byte"
  DETAIL:  Invalid size unit: "byte".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
  SELECT pg_size_bytes('');
  ERROR:  invalid size: ""
  SELECT pg_size_bytes('kb');
*************** SELECT pg_size_bytes('-. kb');
*** 138,146 ****
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ kB');
! ERROR:  invalid size: "+912+ kB"
! DETAIL:  Invalid size unit: "+ kB".
! HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 kB');
! ERROR:  invalid size: "++123 kB"
--- 138,146 ----
  ERROR:  invalid size: "-. kb"
  SELECT pg_size_bytes('.+912');
  ERROR:  invalid size: ".+912"
! SELECT pg_size_bytes('+912+ KB');
! ERROR:  invalid size: "+912+ KB"
! DETAIL:  Invalid size unit: "+ KB".
! HINT:  Valid units are "bytes", "KB", "MB", "GB", and "TB".
! SELECT pg_size_bytes('++123 KB');
! ERROR:  invalid size: "++123 KB"
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
new file mode 100644
index d9bbae0..a04a117
*** a/src/test/regress/expected/join.out
--- b/src/test/regress/expected/join.out
*************** reset enable_nestloop;
*** 2365,2371 ****
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64kB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
--- 2365,2371 ----
  --
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
! set work_mem to '64KB';
  set enable_mergejoin to off;
  explain (costs off)
  select count(*) from tenk1 a, tenk1 b
diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out
new file mode 100644
index efcdc41..6679203
*** a/src/test/regress/expected/json.out
--- b/src/test/regress/expected/json.out
*************** LINE 1: SELECT '{"abc":1,3}'::json;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::json;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::json;			-- OK
diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out
new file mode 100644
index a6d25de..289dae5
*** a/src/test/regress/expected/jsonb.out
--- b/src/test/regress/expected/jsonb.out
*************** LINE 1: SELECT '{"abc":1,3}'::jsonb;
*** 203,215 ****
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100kB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
--- 203,215 ----
  DETAIL:  Expected string, but found "3".
  CONTEXT:  JSON data, line 1: {"abc":1,3...
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  SELECT repeat('{"a":', 10000)::jsonb;
  ERROR:  stack depth limit exceeded
! HINT:  Increase the configuration parameter "max_stack_depth" (currently 100KB), after ensuring the platform's stack depth limit is adequate.
  RESET max_stack_depth;
  -- Miscellaneous stuff.
  SELECT 'true'::jsonb;			-- OK
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
new file mode 100644
index f06cfa4..5bc8b58
*** a/src/test/regress/expected/rangefuncs.out
--- b/src/test/regress/expected/rangefuncs.out
*************** create function foo1(n integer, out a te
*** 1772,1778 ****
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
--- 1772,1778 ----
    returns setof record
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
     a   |         t         |   a   
  -------+-------------------+-------
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
new file mode 100644
index 574f5b8..4271ffa
*** a/src/test/regress/pg_regress.c
--- b/src/test/regress/pg_regress.c
*************** regression_main(int argc, char *argv[],
*** 2248,2254 ****
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128kB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
--- 2248,2254 ----
  		fputs("log_autovacuum_min_duration = 0\n", pg_conf);
  		fputs("log_checkpoints = on\n", pg_conf);
  		fputs("log_lock_waits = on\n", pg_conf);
! 		fputs("log_temp_files = 128KB\n", pg_conf);
  		fputs("max_prepared_transactions = 2\n", pg_conf);
  
  		for (sl = temp_configs; sl != NULL; sl = sl->next)
diff --git a/src/test/regress/sql/dbsize.sql b/src/test/regress/sql/dbsize.sql
new file mode 100644
index d10a4d7..d34d71d
*** a/src/test/regress/sql/dbsize.sql
--- b/src/test/regress/sql/dbsize.sql
*************** SELECT size, pg_size_pretty(size), pg_si
*** 12,18 ****
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1kB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
--- 12,18 ----
              (1000000000000000.5::numeric)) x(size);
  
  SELECT size, pg_size_bytes(size) FROM
!     (VALUES ('1'), ('123bytes'), ('1KB'), ('1MB'), (' 1 GB'), ('1.5 GB '),
              ('1TB'), ('3000 TB'), ('1e6 MB')) x(size);
  
  -- case-insensitive units are supported
*************** SELECT pg_size_bytes('-.kb');
*** 47,51 ****
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ kB');
! SELECT pg_size_bytes('++123 kB');
--- 47,51 ----
  SELECT pg_size_bytes('-. kb');
  
  SELECT pg_size_bytes('.+912');
! SELECT pg_size_bytes('+912+ KB');
! SELECT pg_size_bytes('++123 KB');
diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql
new file mode 100644
index 97bccec..2e680a6
*** a/src/test/regress/sql/join.sql
--- b/src/test/regress/sql/join.sql
*************** reset enable_nestloop;
*** 484,490 ****
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64kB';
  set enable_mergejoin to off;
  
  explain (costs off)
--- 484,490 ----
  -- regression test for bug #13908 (hash join with skew tuples & nbatch increase)
  --
  
! set work_mem to '64KB';
  set enable_mergejoin to off;
  
  explain (costs off)
diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql
new file mode 100644
index 603288b..201689e
*** a/src/test/regress/sql/json.sql
--- b/src/test/regress/sql/json.sql
*************** SELECT '{"abc":1:2}'::json;		-- ERROR, c
*** 42,48 ****
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::json;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::json;
  SELECT repeat('{"a":', 10000)::json;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql
new file mode 100644
index b84bd70..090478d
*** a/src/test/regress/sql/jsonb.sql
--- b/src/test/regress/sql/jsonb.sql
*************** SELECT '{"abc":1:2}'::jsonb;		-- ERROR,
*** 42,48 ****
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100kB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
--- 42,48 ----
  SELECT '{"abc":1,3}'::jsonb;		-- ERROR, no value
  
  -- Recursion.
! SET max_stack_depth = '100KB';
  SELECT repeat('[', 10000)::jsonb;
  SELECT repeat('{"a":', 10000)::jsonb;
  RESET max_stack_depth;
diff --git a/src/test/regress/sql/rangefuncs.sql b/src/test/regress/sql/rangefuncs.sql
new file mode 100644
index c8edc55..3a08f66
*** a/src/test/regress/sql/rangefuncs.sql
--- b/src/test/regress/sql/rangefuncs.sql
*************** create function foo1(n integer, out a te
*** 484,490 ****
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64kB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
--- 484,490 ----
    language sql
    as $$ select 'foo ' || i, 'bar ' || i from generate_series(1,$1) i $$;
  
! set work_mem='64KB';
  select t.a, t, t.a from foo1(10000) t limit 1;
  reset work_mem;
  select t.a, t, t.a from foo1(10000) t limit 1;
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
new file mode 100644
index f046687..04f9560
*** a/src/tools/msvc/config_default.pl
--- b/src/tools/msvc/config_default.pl
*************** our $config = {
*** 10,17 ****
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8kB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8kB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
--- 10,17 ----
  	# float8byval=> $platformbits == 64, # --disable-float8-byval,
  	# off by default on 32 bit platforms, on by default on 64 bit platforms
  
! 	# blocksize => 8,         # --with-blocksize, 8KB by default
! 	# wal_blocksize => 8,     # --with-wal-blocksize, 8KB by default
  	# wal_segsize => 16,      # --with-wal-segsize, 16MB by default
  	ldap      => 1,        # --with-ldap
  	extraver  => undef,    # --with-extra-version=<string>
#13Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Bruce Momjian (#9)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 7/30/16 2:16 PM, Bruce Momjian wrote:

The second patch does what Tom suggests above by outputting only "KB",
and it supports "kB" for backward compatibility. What it doesn't do is
to allow arbitrary case, which I think would be a step backward. The
second patch actually does match the JEDEC standard, except for allowing
"kB".

If we're going to make changes, why not bite the bullet and output KiB?

I have never heard of JEDEC, so I'm less inclined to accept their
"standard" at this point.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#14Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Pavel Stehule (#5)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 7/30/16 1:18 AM, Pavel Stehule wrote:

We talked about this issue when I wrote the pg_size_bytes function. It is
hard to fix these functions after years of usage. A new set of
functions could be better:

pg_iso_size_pretty();
pg_iso_size_bytes();

One thing that would actually be nice for other reasons as well is a
version of pg_size_pretty() that lets you specify the output unit, say,
as a second argument. Because sometimes you want to compare two tables
or something, and it tells you one is 3GB and the other is 783MB, which
doesn't really help. If I tell it to use 'MB' as the output unit, I
could get comparable output.
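The idea can be sketched outside the server. This hypothetical Python helper (not PostgreSQL code; the name and rounding are illustrative assumptions) mirrors pg_size_pretty's binary, powers-of-two unit factors but forces a caller-chosen output unit, so two sizes print on the same scale:

```python
# Hypothetical fixed-unit variant of pg_size_pretty. The unit table
# follows PostgreSQL's documented binary interpretation: 1 kB = 1024 bytes.
UNITS = {"bytes": 1, "kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def size_pretty_fixed(size_bytes, unit):
    """Format size_bytes in the requested unit instead of auto-scaling."""
    factor = UNITS[unit]
    if factor == 1:
        return f"{size_bytes} bytes"
    # One decimal place is an arbitrary choice for this sketch.
    return f"{size_bytes / factor:.1f} {unit}"

# Two tables that auto-scaling would report as "3 GB" and "783 MB"
# become directly comparable:
print(size_pretty_fixed(3 * 1024**3, "MB"))    # 3072.0 MB
print(size_pretty_fixed(783 * 1024**2, "MB"))  # 783.0 MB
```

With a second argument like this, both sizes land in the same unit and can be compared at a glance.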

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#15Bruce Momjian
bruce@momjian.us
In reply to: Peter Eisentraut (#13)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Mon, Aug 1, 2016 at 02:48:55PM -0400, Peter Eisentraut wrote:

On 7/30/16 2:16 PM, Bruce Momjian wrote:

The second patch does what Tom suggests above by outputting only "KB",
and it supports "kB" for backward compatibility. What it doesn't do is
to allow arbitrary case, which I think would be a step backward. The
second patch actually does match the JEDEC standard, except for allowing
"kB".

If we're going to make changes, why not bite the bullet and output KiB?

I have never heard of JEDEC, so I'm less inclined to accept their
"standard" at this point.

I already address this. While I have never heard of JEDEC either, I
have seen KB, and have never seen KiB, hence my argument that KiB would
lead to more confusion than we have now.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#16Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Christoph Berg (#11)
Re: pg_size_pretty, SHOW, and spaces

On 8/1/16 7:35 AM, Christoph Berg wrote:

PostgreSQL uses the spaces inconsistently, though. pg_size_pretty uses spaces:

# select pg_size_pretty((2^20)::bigint);
pg_size_pretty
────────────────
1024 kB

because it's "pretty" :)

SHOW does not:

# show work_mem;
work_mem
──────────
1MB

The original idea might have been to allow that value to be passed back
into the settings system, without having to quote the space. I'm not
sure, but I think changing that might cause some annoyance.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#17Pavel Stehule
pavel.stehule@gmail.com
In reply to: Peter Eisentraut (#14)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

2016-08-01 20:51 GMT+02:00 Peter Eisentraut <
peter.eisentraut@2ndquadrant.com>:

On 7/30/16 1:18 AM, Pavel Stehule wrote:

We talked about this issue, when I wrote function pg_size_bytes. It is
hard to fix these functions after years of usage. The new set of
functions can be better

pg_iso_size_pretty();
pg_iso_size_bytes();

One thing that would actually be nice for other reasons as well is a
version of pg_size_pretty() that lets you specify the output unit, say,
as a second argument. Because sometimes you want to compare two tables
or something, and tells you one is 3GB and the other is 783MB, which
doesn't really help. If I tell it to use 'MB' as the output unit, I
could get comparable output.

It looks like some conversion function

pg_size_to(size, unit, [others ... rounding, truncating]) returns numeric

select pg_size_to(1024*1024, 'KB')

Regards

Pavel
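The proposal could be sketched as a small conversion helper. Everything here is hypothetical: the function name, argument order, and rounding behavior are assumptions extrapolated from Pavel's example, not an existing PostgreSQL API.

```python
# Hypothetical sketch of the proposed pg_size_to(): convert a byte count
# into a caller-chosen binary (1024-based) unit so two sizes compare directly.
UNIT_EXPONENT = {"bytes": 0, "kB": 1, "MB": 2, "GB": 3, "TB": 4}

def size_to(size_bytes: int, unit: str, ndigits: int = 0) -> float:
    """Return size_bytes expressed in `unit`, rounded to `ndigits` places."""
    exponent = UNIT_EXPONENT[unit]        # raises KeyError for unknown units
    return round(size_bytes / 1024 ** exponent, ndigits)

print(size_to(1024 * 1024, "kB"))        # 1024.0
print(size_to(3 * 1024**3, "MB"))        # 3072.0
```

With a helper like this, a 3 GB table and a 783 MB table can both be reported in MB, which addresses Peter's comparability point upthread.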


--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#18Christoph Berg
myon@debian.org
In reply to: Peter Eisentraut (#16)
Re: pg_size_pretty, SHOW, and spaces

Re: Peter Eisentraut 2016-08-01 <f3e021d3-d843-04a5-d816-6921309b3bf1@2ndquadrant.com>

PostgreSQL uses the spaces inconsistently, though. pg_size_pretty uses spaces:

# select pg_size_pretty((2^20)::bigint);
pg_size_pretty
────────────────
1024 kB

because it's "pretty" :)

:)

SHOW does not:

# show work_mem;
work_mem
──────────
1MB

The original idea might have been to allow that value to be passed back
into the settings system, without having to quote the space. I'm not
sure, but I think changing that might cause some annoyance.

That's a good argument for keeping it that way, yes.

Re: Bruce Momjian 2016-08-01 <20160801162508.GA28246@momjian.us>

Looking at the Wikipedia article I posted earlier, that also doesn't use
spaces:

https://en.wikipedia.org/wiki/Binary_prefix

That article has plenty of occurrences of "10 MB" "528 MB/s" and the
like.

I think the only argument _for_ spaces is the output of pg_size_pretty()
now looks odd, e.g.:

              10 | 10 bytes   | -10 bytes
            1000 | 1000 bytes | -1000 bytes
         1000000 | 977KB      | -977KB
      1000000000 | 954MB      | -954MB
   1000000000000 | 931GB      | -931GB
1000000000000000 | 909TB      | -909TB
                   ^^^^^        ^^^^^

The issue is that we output "10 bytes", not "10bytes", but for units we
use "977KB". That seems inconsistent, but it is the normal convention
people use. I think this is because "977KB" is really "977K bytes", but
we just append the "B" after the "K" for brevity.

It's the other way round:

https://en.wikipedia.org/wiki/International_System_of_Units#General_rules

| The value of a quantity is written as a number followed by a space
| (representing a multiplication sign) and a unit symbol; e.g., 2.21 kg
[...]

I'd opt to omit the space anywhere where the value is supposed to be
fed back into the config (SHOW, --parameters), but use the "pretty"
format with space everywhere otherwise (documentation, memory counts
in explain output, pg_size_pretty() etc.)

Christoph
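For reference, the 1024-based rounding that turns 1,000,000 bytes into "977 kB" can be sketched in Python. This is a hedged approximation of pg_size_pretty()'s documented behavior, not the server's actual C implementation; the 10*1024 cutoff and half-up rounding are assumptions.

```python
# Approximate sketch of pg_size_pretty()'s unit logic (assumed, not the
# server's C code): each step divides by 1024, so "kB" here means 2^10 bytes.
def size_pretty(size: int) -> str:
    sign = "-" if size < 0 else ""
    size = abs(size)
    if size < 10 * 1024:                   # small values stay in plain bytes
        return f"{sign}{size} bytes"
    for unit in ("kB", "MB", "GB", "TB"):
        size = (size + 512) // 1024        # divide by 1024, rounding half up
        if size < 10 * 1024 or unit == "TB":
            return f"{sign}{size} {unit}"

print(size_pretty(1000000))                # "977 kB"
print(size_pretty(1024000))                # "1000 kB"
```

Note how 1,000,000 bytes prints as 977 kB rather than 1000 kB: the suffix looks SI-metric, but the arithmetic is binary, which is the heart of the original complaint.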


#19Bruce Momjian
bruce@momjian.us
In reply to: Christoph Berg (#18)
1 attachment(s)
Re: pg_size_pretty, SHOW, and spaces

On Tue, Aug 2, 2016 at 11:29:01AM +0200, Christoph Berg wrote:

The issue is that we output "10 bytes", not "10bytes", but for units we
use "977KB". That seems inconsistent, but it is the normal convention
people use. I think this is because "977KB" is really "977K bytes", but
we just append the "B" after the "K" for brevity.

It's the other way round:

https://en.wikipedia.org/wiki/International_System_of_Units#General_rules

| The value of a quantity is written as a number followed by a space
| (representing a multiplication sign) and a unit symbol; e.g., 2.21 kg
[...]

I'd opt to omit the space anywhere where the value is supposed to be
fed back into the config (SHOW, --parameters), but use the "pretty"
format with space everywhere otherwise (documentation, memory counts
in explain output, pg_size_pretty() etc.)

Yes, that's a strong argument for using a space. I have adjusted the
patch to use spaces in all reasonable places. Patch attached, which I
have gzipped because it was 133 KB. (Ah, see what I did there?) :-)

I am thinking of leaving the 9.6 docs alone as I have already made them
consistent (no space) with minimal changes. We can make it consistent
the other way in PG 10.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

Attachments:

kilo2.diff.gz (application/gzip)
#20Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Bruce Momjian (#19)
Re: pg_size_pretty, SHOW, and spaces

On 8/2/16 12:51 PM, Bruce Momjian wrote:

Yes, that's a strong argument for using a space. I have adjusted the
patch to use spaces in all reasonable places. Patch attached, which I
have gzipped because it was 133 KB. (Ah, see what I did there?) :-)

I am thinking of leaving the 9.6 docs alone as I have already made them
consistent (no space) with minimal changes. We can make it consistent
the other way in PG 10.

I don't think anyone wanted to *remove* the spaces in the documentation.
I think this change makes the documentation harder to read.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#21Bruce Momjian
bruce@momjian.us
In reply to: Peter Eisentraut (#20)
Re: pg_size_pretty, SHOW, and spaces

On Fri, Aug 5, 2016 at 10:57:35AM -0400, Peter Eisentraut wrote:

On 8/2/16 12:51 PM, Bruce Momjian wrote:

Yes, that's a strong argument for using a space. I have adjusted the
patch to use spaces in all reasonable places. Patch attached, which I
have gzipped because it was 133 KB. (Ah, see what I did there?) :-)

I am thinking of leaving the 9.6 docs alone as I have already made them
consistent (no space) with minimal changes. We can make it consistent
the other way in PG 10.

I don't think anyone wanted to *remove* the spaces in the documentation.
I think this change makes the documentation harder to read.

Well, we had spaces in only a few places in the docs, and as I said, it
is not consistent. Do you want those few put back for 9.6?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#22Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#21)
Re: pg_size_pretty, SHOW, and spaces

On Fri, Aug 5, 2016 at 11:06 AM, Bruce Momjian <bruce@momjian.us> wrote:

On Fri, Aug 5, 2016 at 10:57:35AM -0400, Peter Eisentraut wrote:

On 8/2/16 12:51 PM, Bruce Momjian wrote:

Yes, that's a strong argument for using a space. I have adjusted the
patch to use spaces in all reasonable places. Patch attached, which I
have gzipped because it was 133 KB. (Ah, see what I did there?) :-)

I am thinking of leaving the 9.6 docs alone as I have already made them
consistent (no space) with minimal changes. We can make it consistent
the other way in PG 10.

I don't think anyone wanted to *remove* the spaces in the documentation.
I think this change makes the documentation harder to read.

Well, we had spaces in only a few places in the docs, and as I said, it
is not consistent. Do you want those few put back for 9.6?

+1 for that. I can't see how it's good for 10 to be one way, 9.6 to
be the opposite way, and 9.5 and prior to be someplace in the middle.
That seems like a back-patching mess.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#23Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#22)
Re: pg_size_pretty, SHOW, and spaces

On Fri, Aug 5, 2016 at 02:07:18PM -0400, Robert Haas wrote:

On Fri, Aug 5, 2016 at 11:06 AM, Bruce Momjian <bruce@momjian.us> wrote:

On Fri, Aug 5, 2016 at 10:57:35AM -0400, Peter Eisentraut wrote:

On 8/2/16 12:51 PM, Bruce Momjian wrote:

Yes, that's a strong argument for using a space. I have adjusted the
patch to use spaces in all reasonable places. Patch attached, which I
have gzipped because it was 133 KB. (Ah, see what I did there?) :-)

I am thinking of leaving the 9.6 docs alone as I have already made them
consistent (no space) with minimal changes. We can make it consistent
the other way in PG 10.

I don't think anyone wanted to *remove* the spaces in the documentation.
I think this change makes the documentation harder to read.

Well, we had spaces in only a few places in the docs, and as I said, it
is not consistent. Do you want those few put back for 9.6?

+1 for that. I can't see how it's good for 10 to be one way, 9.6 to
be the opposite way, and 9.5 and prior to be someplace in the middle.
That seems like a back-patching mess.

OK, done.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#24Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#2)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

The Postgres docs specify that kB is based on 1024 or 2^10:

https://www.postgresql.org/docs/9.6/static/functions-admin.html

Note: The units kB, MB, GB and TB used by the functions
pg_size_pretty and pg_size_bytes are defined using powers of 2 rather
than powers of 10, so 1kB is 1024 bytes, 1MB is 1024^2 = 1048576 bytes,
and so on.

These prefixes were introduced to GUC variable specification in 2006:

commit b517e653489f733893d61e7a84c118325394471c
Author: Peter Eisentraut <peter_e@gmx.net>
Date: Thu Jul 27 08:30:41 2006 +0000

Allow units to be specified with configuration settings.

and added to postgresql.conf:

# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days

and the units were copied when pg_size_pretty() was implemented. These
units are based on the International System of Units (SI)/metric.
However, the SI system is power-of-10-based, and we just re-purposed
them to be 1024 or 2^10-based.

However, that is not the end of the story.

Sure it is. The behavior of the code matches the documentation. The
documentation describes one of several reasonable behaviors. Full
stop.

I am thinking Postgres 10 would be a good time to switch to KB as a
1024-based prefix. Unfortunately, there is no similar fix for MB, GB,
etc.: 'm' is 'milli', so mB was never used, and in both JEDEC and
Metric, MB is ambiguous between 1000-based and 1024-based.

I think this would be a backward compatibility break that would
probably cause confusion for years. I think we can add new functions
that behave differently, but I oppose revising the behavior of the
existing functions ... and I *definitely* oppose adding new
behavior-changing GUCs. The result of that will surely be chaos.

J. Random User: I'm having a problem!
Mailing List: Gee, how big are your tables?
J. Random User: Here's some pg_size_pretty output.
Mailing List: Gosh, we don't know what that means, what do you have
this obscure GUC set to?
J. Random User: Maybe I'll just give up on SQL and use MongoDB.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#25Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#24)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 01:30:29PM -0400, Robert Haas wrote:

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

and the units were copied when pg_size_pretty() was implemented. These
units are based on the International System of Units (SI)/metric.
However, the SI system is power-of-10-based, and we just re-purposed
them to be 1024 or 2^10-based.

However, that is not the end of the story.

Sure it is. The behavior of the code matches the documentation. The
documentation describes one of several reasonable behaviors. Full
stop.

I am thinking Postgres 10 would be a good time to switch to KB as a
1024-based prefix. Unfortunately, there is no similar fix for MB, GB,
etc.: 'm' is 'milli', so mB was never used, and in both JEDEC and
Metric, MB is ambiguous between 1000-based and 1024-based.

I think this would be a backward compatibility break that would
probably cause confusion for years. I think we can add new functions
that behave differently, but I oppose revising the behavior of the
existing functions ... and I *definitely* oppose adding new
behavior-changing GUCs. The result of that will surely be chaos.

Can you read up through August 1 and then reply?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#26Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#25)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 1:43 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Aug 23, 2016 at 01:30:29PM -0400, Robert Haas wrote:

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

and the units were copied when pg_size_pretty() was implemented. These
units are based on the International System of Units (SI)/metric.
However, the SI system is power-of-10-based, and we just re-purposed
them to be 1024 or 2^10-based.

However, that is not the end of the story.

Sure it is. The behavior of the code matches the documentation. The
documentation describes one of several reasonable behaviors. Full
stop.

I am thinking Postgres 10 would be a good time to switch to KB as a
1024-based prefix. Unfortunately, there is no similar fix for MB, GB,
etc.: 'm' is 'milli', so mB was never used, and in both JEDEC and
Metric, MB is ambiguous between 1000-based and 1024-based.

I think this would be a backward compatibility break that would
probably cause confusion for years. I think we can add new functions
that behave differently, but I oppose revising the behavior of the
existing functions ... and I *definitely* oppose adding new
behavior-changing GUCs. The result of that will surely be chaos.

Can you read up through August 1 and then reply?

I have already read the entire thread, and replied only after reading
all messages.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#27Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#26)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 01:45:44PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:43 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Aug 23, 2016 at 01:30:29PM -0400, Robert Haas wrote:

On Fri, Jul 29, 2016 at 8:18 PM, Bruce Momjian <bruce@momjian.us> wrote:

and the units were copied when pg_size_pretty() was implemented. These
units are based on the International System of Units (SI)/metric.
However, the SI system is power-of-10-based, and we just re-purposed
them to be 1024 or 2^10-based.

However, that is not the end of the story.

Sure it is. The behavior of the code matches the documentation. The
documentation describes one of several reasonable behaviors. Full
stop.

I am thinking Postgres 10 would be a good time to switch to KB as a
1024-based prefix. Unfortunately, there is no similar fix for MB, GB,
etc.: 'm' is 'milli', so mB was never used, and in both JEDEC and
Metric, MB is ambiguous between 1000-based and 1024-based.

I think this would be a backward compatibility break that would
probably cause confusion for years. I think we can add new functions
that behave differently, but I oppose revising the behavior of the
existing functions ... and I *definitely* oppose adding new
behavior-changing GUCs. The result of that will surely be chaos.

Can you read up through August 1 and then reply?

I have already read the entire thread, and replied only after reading
all messages.

Well, what are you replying to then? There is no GUC used, and
everything is backward compatible. Your hyperbole about a new user
being confused is also not helpful. What is this "chaos" you are
talking about?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#28Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#27)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 1:47 PM, Bruce Momjian <bruce@momjian.us> wrote:

I have already read the entire thread, and replied only after reading
all messages.

Well, what are you replying to then?

Your original message. I'm arguing that we should not change the
behavior, as you proposed to do.

There is no GUC used, and
everything is backward compatible.

Greg Stark proposed a GUC. I don't think that's a good idea. You
proposed to change the behavior in a way that is not
backward-compatible. I don't think that's a good idea either. If you
are saying that you've dropped those proposals, fine, but I think it's
entirely reasonable for me to express my opinion on them. It was not
evident to me that the thread had reached any kind of consensus.

Your hyperbole about a new user
being confused is also not helpful. What is this "chaos" you are
talking about?

Behavior-changing GUCs are bad news for reasons that have been
discussed many times before: they create a requirement that everybody
who writes code intended to run on arbitrary PostgreSQL installation
be prepared to cater to every possible value of that GUC.
pg_size_pretty() is pretty likely to appear in queries that we give
users to run on their systems, so it would be a particularly poor
choice to make its behavior configurable.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#29Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#28)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 01:53:25PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:47 PM, Bruce Momjian <bruce@momjian.us> wrote:

I have already read the entire thread, and replied only after reading
all messages.

Well, what are you replying to then?

Your original message. I'm arguing that we should not change the
behavior, as you proposed to do.

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

There is no GUC used, and
everything is backward compatible.

Greg Stark proposed a GUC. I don't think that's a good idea. You
proposed to change the behavior in a way that is not
backward-compatible. I don't think that's a good idea either. If you
are saying that you've dropped those proposals, fine, but I think it's
entirely reasonable for me to express my opinion on them. It was not
evident to me that the thread had reached any kind of consensus.

Uh, the patch was the consensus, as I had several versions. It was not
clear from your email what you thought of the patch, or if your comments
applied to the final patch at all. The email you quoted was mine, but
from a very early stage in the discussion.

Your hyperbole about a new user
being confused is also not helpful. What is this "chaos" you are
talking about?

Behavior-changing GUCs are bad news for reasons that have been
discussed many times before: they create a requirement that everybody
who writes code intended to run on arbitrary PostgreSQL installation
be prepared to cater to every possible value of that GUC.
pg_size_pretty() is pretty likely to appear in queries that we give
users to run on their systems, so it would be a particularly poor
choice to make its behavior configurable.

There is no question on that point.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#30Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#29)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 1:57 PM, Bruce Momjian <bruce@momjian.us> wrote:

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

Oh, OK. I didn't understand that that was what you are asking. I
don't find either of your proposed final patches to be an improvement
over the status quo. I think the selection of kB rather than KB was a
deliberate decision by Peter Eisentraut, and I don't think changing
our practice now buys us anything meaningful. Your first patch
introduces an odd wart into the GUC mechanism, with a strange wording
for the message, to fix something that's not really broken in the
first place. Your second one alters kB to KB in zillions of places
all over the code base, and I am quite sure that there is no consensus
to do anything of that sort.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#31Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#30)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 02:31:26PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:57 PM, Bruce Momjian <bruce@momjian.us> wrote:

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

Oh, OK. I didn't understand that that was what you are asking. I
don't find either of your proposed final patches to be an improvement
over the status quo. I think the selection of kB rather than KB was a
deliberate decision by Peter Eisentraut, and I don't think changing
our practice now buys us anything meaningful. Your first patch
introduces an odd wart into the GUC mechanism, with a strange wording
for the message, to fix something that's not really broken in the
first place. Your second one alters kB to KB in zillions of places
all over the code base, and I am quite sure that there is no consensus
to do anything of that sort.

Well, the patch was updated several times, and the final version was not
objected to until you objected. Does anyone else want to weigh in?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#32Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#31)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 2016-08-23 14:33:15 -0400, Bruce Momjian wrote:

On Tue, Aug 23, 2016 at 02:31:26PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:57 PM, Bruce Momjian <bruce@momjian.us> wrote:

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

Oh, OK. I didn't understand that that was what you are asking. I
don't find either of your proposed final patches to be an improvement
over the status quo. I think the selection of kB rather than KB was a
deliberate decision by Peter Eisentraut, and I don't think changing
our practice now buys us anything meaningful. Your first patch
introduces an odd wart into the GUC mechanism, with a strange wording
for the message, to fix something that's not really broken in the
first place. Your second one alters kB to KB in zillions of places
all over the code base, and I am quite sure that there is no consensus
to do anything of that sort.

Well, the patch was updated several times, and the final version was not
objected to until you objected. Does anyone else want to weigh in?

To me the change doesn't seem beneficial. Noise aside, the added
whitespace even seems detrimental to me. But I also don't really
care much.


#33Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#32)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 11:35:35AM -0700, Andres Freund wrote:

On 2016-08-23 14:33:15 -0400, Bruce Momjian wrote:

On Tue, Aug 23, 2016 at 02:31:26PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:57 PM, Bruce Momjian <bruce@momjian.us> wrote:

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

Oh, OK. I didn't understand that that was what you are asking. I
don't find either of your proposed final patches to be an improvement
over the status quo. I think the selection of kB rather than KB was a
deliberate decision by Peter Eisentraut, and I don't think changing
our practice now buys us anything meaningful. Your first patch
introduces an odd wart into the GUC mechanism, with a strange wording
for the message, to fix something that's not really broken in the
first place. Your second one alters kB to KB in zillions of places
all over the code base, and I am quite sure that there is no consensus
to do anything of that sort.

Well, the patch was updated several times, and the final version was not
objected to until you objected. Does anyone else want to weigh in?

To me the change doesn't seem beneficial. Noise aside, the added
whitespace seems even seems detrimental to me. But I also don't really
care much.

Well, right now we are inconsistent, so we should decide on the spacing
and make it consistent. I think we are consistent on using 'k' instead
of 'K'. There were at least eight people on this thread and when no one
objected to my final patch, I thought people wanted it.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


#34Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#31)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

Bruce Momjian wrote:

On Tue, Aug 23, 2016 at 02:31:26PM -0400, Robert Haas wrote:

On Tue, Aug 23, 2016 at 1:57 PM, Bruce Momjian <bruce@momjian.us> wrote:

That's why I was asking you to comment on the final patch, which I am
planning to apply to PG 10 soon.

Oh, OK. I didn't understand that that was what you are asking. I
don't find either of your proposed final patches to be an improvement
over the status quo. I think the selection of kB rather than KB was a
deliberate decision by Peter Eisentraut, and I don't think changing
our practice now buys us anything meaningful. Your first patch
introduces an odd wart into the GUC mechanism, with a strange wording
for the message, to fix something that's not really broken in the
first place. Your second one alters kB to KB in zillions of places
all over the code base, and I am quite sure that there is no consensus
to do anything of that sort.

Well, the patch was updated several times, and the final version was not
objected to until you objected. Does anyone else want to weigh in?

I think this should be left alone -- it looks more like pointless
tinkering than something useful.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#35Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#31)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Tue, Aug 23, 2016 at 2:33 PM, Bruce Momjian <bruce@momjian.us> wrote:

Well, the patch was updated several times, and the final version was not
objected to until you objected.

It is not clear what you mean by "the final version", because you
posted two different final versions. I don't see a clear vote from
anybody in favor of either of those things, and Peter's replies seem
to me to suggest that he does not support either of your proposals.
So I am not sure that I would agree with the statement that nobody
objected, but in any case there certainly wasn't a consensus in favor
of either change.

Also, the subject of this thread is "wrong suffix for pg_size_pretty",
which may not have tipped people off to the fact that you were
proposing to replace "kB" with "KB" everywhere. Even after reading
your email, I didn't realize that you were proposing that until I
actually opened the patch and looked at it. Such widespread changes
tend to draw objections, and IMHO shouldn't be made unless it's
abundantly clear that most people are in favor.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#36Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Bruce Momjian (#9)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 7/30/16 2:16 PM, Bruce Momjian wrote:

The second patch does what Tom suggests above by outputting only "KB",
and it supports "kB" for backward compatibility. What it doesn't do is
to allow arbitrary case, which I think would be a step backward. The
second patch actually does match the JEDEC standard, except for allowing
"kB".

Btw., just to show that I'm not all crazy, the following programs also
display a small "k" for file sizes and download rates:

apt
curl
dnf
pip
yum
vagrant

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#37Thomas Berger
Thomas.Berger@1und1.de
In reply to: Robert Haas (#24)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

Today, I found the time to read all the mails in this thread, and I think I have to explain why we decided to open a bug for this behavior.

On Tuesday, 23. August 2016, 13:30:29 Robert Haas wrote:

J. Random User: I'm having a problem!
Mailing List: Gee, how big are your tables?
J. Random User: Here's some pg_size_pretty output.
Mailing List: Gosh, we don't know what that means, what do you have
this obscure GUC set to?
J. Random User: Maybe I'll just give up on SQL and use MongoDB.

In fact, we had it just the other way around. One of our most critical databases had some extreme bloat.
Some of our internal customers were very confused about the sizes reported by the database.
This confusion has led to wrong decisions. (And a long discussion about the choice of DBMS, btw.)

I think there is a point missing in this whole discussion, or I just didn't see it:

Yeah, the behavior of "kB" is defined in the "postgresql.conf" documentation.
But no _user_ reads this. There is no link or hint in the documentation of "pg_size_pretty()" [1].

[1]: https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
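As an aside, the rounding behavior at issue here can be sketched in a few lines. The following is only a hedged approximation of pg_size_pretty(bigint)'s powers-of-2 logic for nonnegative sizes; the function name `size_pretty` and its structure are illustrative, not a verbatim port of PostgreSQL's dbsize.c:

```python
def size_pretty(size: int) -> str:
    """Approximate pg_size_pretty(bigint) for nonnegative sizes.

    A sketch only: values below 10 kB print as bytes; otherwise the
    value is shifted down with one extra bit retained, so the final
    step can round to the nearest whole unit.
    """
    limit = 10 * 1024
    if size < limit:
        return f"{size} bytes"
    size >>= 9  # keep one extra bit for rounding
    for unit in ("kB", "MB", "GB"):
        if size < limit * 2 - 1:
            return f"{(size + 1) >> 1} {unit}"
        size >>= 10
    return f"{(size + 1) >> 1} TB"

print(size_pretty(1024000))  # the report's example: 1000 kB
```

This reproduces the observation from the original report: 1024000 bytes is exactly 1000 binary kilobytes, which is where the "kB means 1024" confusion starts.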

--
Thomas Berger

PostgreSQL DBA
Database Operations

1&1 Telecommunication SE | Ernst-Frey-Straße 10 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-6566
E-Mail: thomas.berger@1und1.de | Web: www.1und1.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 23963

Vorstand: Markus Huhn, Alessandro Nava, Moritz Roth, Ludger Sieverding, Martin Witt
Aufsichtsratsvorsitzender: Michael Scheeren

Member of United Internet


#38Robert Haas
robertmhaas@gmail.com
In reply to: Thomas Berger (#37)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On Wed, Sep 14, 2016 at 5:22 AM, Thomas Berger <Thomas.Berger@1und1.de> wrote:

Today, I found the time to read all the mails in this thread, and I think I have to explain why we decided to open a bug for this behavior.

On Tuesday, 23. August 2016, 13:30:29 Robert Haas wrote:

J. Random User: I'm having a problem!
Mailing List: Gee, how big are your tables?
J. Random User: Here's some pg_size_pretty output.
Mailing List: Gosh, we don't know what that means, what do you have
this obscure GUC set to?
J. Random User: Maybe I'll just give up on SQL and use MongoDB.

In fact, we had it just the other way around. One of our most critical databases had some extreme bloat.
Some of our internal customers were very confused about the sizes reported by the database.
This confusion has led to wrong decisions. (And a long discussion about the choice of DBMS, btw.)

I think there is a point missing in this whole discussion, or I just didn't see it:

Yeah, the behavior of "kB" is defined in the "postgresql.conf" documentation.
But no _user_ reads this. There is no link or hint in the documentation of "pg_size_pretty()" [1].

Interesting. I think that our documentation should only describe the
way we use unit suffixes in one central place, but other places (like
pg_size_pretty) could link to that central place.

I don't believe that there is any general unanimity among users or
developers about the question of which suffixes denote units
denominated in units of 2^10 bytes vs. 10^3 bytes. About once a year,
somebody makes an argument that we're doing it wrong, but the evidence
that I've seen is very mixed. So when people say that there is only
one right way to do this and we are not in compliance with that one
right way, I guess I just don't believe it. Not everybody likes the
way we do it, but I am fairly sure that if we change it, we'll make
some currently-unhappy people happy and some currently-happy people
unhappy. And the people who don't care but wanted to preserve
backward compatibility will all be in the latter camp.

However, that is not to say that the documentation couldn't be better.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#39Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#38)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

Robert Haas <robertmhaas@gmail.com> writes:

Interesting. I think that our documentation should only describe the
way we use unit suffixes in one central place, but other places (like
pg_size_pretty) could link to that central place.

I don't believe that there is any general unanimity among users or
developers about the question of which suffixes denote units
denominated in units of 2^10 bytes vs. 10^3 bytes. About once a year,
somebody makes an argument that we're doing it wrong, but the evidence
that I've seen is very mixed. So when people say that there is only
one right way to do this and we are not in compliance with that one
right way, I guess I just don't believe it. Not everybody likes the
way we do it, but I am fairly sure that if we change it, we'll make
some currently-unhappy people happy and some currently-happy people
unhappy. And the people who don't care but wanted to preserve
backward compatibility will all be in the latter camp.

That's about my position too: I cannot see that changing this is going
to make things better to a degree that would justify breaking backwards
compatibility.

However, that is not to say that the documentation couldn't be better.

+1; your idea above seems sound.

regards, tom lane

#40Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Robert Haas (#38)
Re: [BUGS] BUG #14244: wrong suffix for pg_size_pretty()

On 15/09/16 03:45, Robert Haas wrote:

On Wed, Sep 14, 2016 at 5:22 AM, Thomas Berger <Thomas.Berger@1und1.de> wrote:

Today, I found the time to read all the mails in this thread, and I think I have to explain why we decided to open a bug for this behavior.

On Tuesday, 23. August 2016, 13:30:29 Robert Haas wrote:

J. Random User: I'm having a problem!
Mailing List: Gee, how big are your tables?
J. Random User: Here's some pg_size_pretty output.
Mailing List: Gosh, we don't know what that means, what do you have
this obscure GUC set to?
J. Random User: Maybe I'll just give up on SQL and use MongoDB.

In fact, we had it just the other way around. One of our most critical databases had some extreme bloat.
Some of our internal customers were very confused about the sizes reported by the database.
This confusion has led to wrong decisions. (And a long discussion about the choice of DBMS, btw.)

I think there is a point missing in this whole discussion, or I just didn't see it:

Yeah, the behavior of "kB" is defined in the "postgresql.conf" documentation.
But no _user_ reads this. There is no link or hint in the documentation of "pg_size_pretty()" [1].

Interesting. I think that our documentation should only describe the
way we use unit suffixes in one central place, but other places (like
pg_size_pretty) could link to that central place.

I don't believe that there is any general unanimity among users or
developers about the question of which suffixes denote units
denominated in units of 2^10 bytes vs. 10^3 bytes. About once a year,
somebody makes an argument that we're doing it wrong, but the evidence
that I've seen is very mixed. So when people say that there is only
one right way to do this and we are not in compliance with that one
right way, I guess I just don't believe it. Not everybody likes the
way we do it, but I am fairly sure that if we change it, we'll make
some currently-unhappy people happy and some currently-happy people
unhappy. And the people who don't care but wanted to preserve
backward compatibility will all be in the latter camp.

However, that is not to say that the documentation couldn't be better.

Well, I started programming in 1968, and was taught that 1 kilobyte was
1024 (2^10) bytes.

I object to Johnny-come-latelies who try and insist it is 1000.

As regards 'kB' vs 'KB', I'm not too worried either way - I think
consistency is more important.

Cheers,
Gavin
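For a sense of how far apart the two conventions actually drift, the binary reading of each prefix exceeds the decimal one by a growing margin; a quick back-of-the-envelope check (plain arithmetic, not tied to any PostgreSQL code):

```python
# Ratio of the binary prefix (2^(10n)) to the decimal prefix (10^(3n))
# for the four suffixes pg_size_pretty uses.
for n, unit in enumerate(("kB", "MB", "GB", "TB"), start=1):
    excess = 100 * ((2 ** (10 * n)) / (10 ** (3 * n)) - 1)
    print(f"{unit}: binary exceeds decimal by {excess:.1f}%")
```

The ambiguity is about 2.4% at the kB level but already 10% at TB, which is why large databases are where the mismatch tends to get noticed.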
