[Patch v2] Make block and file size for WAL and relations defined at cluster creation

Started by Remi Colinetabout 8 years ago3 messages
#1Remi Colinet
remi.colinet@gmail.com
1 attachment(s)

Hello,

This is version 2 of the patch to make the file and block sizes for WAL and
relations, run-time configurable at initdb.

So far, the relation's block and file sizes have been defined statically at
server build time. This is also the case for the WAL block size. This means
that it is not possible to share the same Postgresql binary if using
different block and file sizes for the WAL and the relations of different
server instances/databases, on the same host.

Recently, the value definition of the WAL file size has been converted from
server build time to cluster creation time. The current patch goes further
in this direction with the relation block and file sizes and with the WAL
block size. And some more could be done with LOBLKSIZE for instance (TBD).

Below patch makes block and file sizes defined at cluster creation for both
the WAL and the relations. This avoids having different Postgresql server
builds for each possible combination of block size and file sizes. With the
patch, the values of the block and file sizes are kept in the control file
(This has been the case so far) and are provided to initdb when creating
the cluster. If no value is specified, the default values are used.

*Names and values*

Values which can be defined at cluster creation time are:

- the WAL block size
- the WAL file size
- the relation block size
- the relation file size

I noticed that the names of these parameters can slightly vary throughout
the source code, whether it is the name or the unit.

Such names are:

BLCKSZ: the relation block size in bytes
RELSEG_SIZE: maximum number of blocks allowed in one disk file
XLOG_BLCKSZ: the WAL block size in bytes
XLOG_SEG_SIZE: the WAL file size in bytes
blcksz (in control file): the relation block size in bytes (same as BLCKSZ)
relseg_size (in control file): the relation file size in blocks (same as
RELSEG_SIZE)
xlog_blcksz (in control file): WAL block size in bytes (same as XLOG_BLCKSZ)
xlog_seg_size (in control file); the WAL file size in bytes (same as
XLOG_SEG_SIZE)
WalSegSz (in pg_resetwal.c): the WAL segment size in bytes
wal_segment_size (in xlog.c): the WAL segment size in bytes
segment_size (in guc.c): the relation segment size

For the current patch, I have defined common names to be used throughout in
the source code, whether this in the server or in the different utilities
with units in:
- bytes for the blocks sizes
- blocks for the files sizes

These are:
- wal_blck_size: which replaces XLOG_BLCKSZ
- wal_file_blck
- wal_file_size: which is wal_blck_size * wal_file_blck. It replaces
XLOG_SEG_SIZE and wal_segment_size

- rel_blck_size: which replaces BLCKSZ
- rel_file_blck: it replaces RELSEG_SIZE and segment_size
- rel_file_size: which is rel_blck_size * rel_file_blck.

Lower case letters are used to remind that these values are not statically
defined at compile time.

*Patch*

The patch is made only of small changes unless a few files which require
some more work with palloc/pfree.

The most concerned files are:

src/backend/access/heap/pruneheap.c
src/backend/access/nbtree/nbtree.c
src/backend/access/nbtree/nbtsearch.c
src/backend/access/transam/generic_xlog.c
src/backend/access/transam/xlog.c
src/backend/nodes/tidbitmap.c
src/bin/initdb/initdb.c

tidbitmap.c is most concerned file because it includes the simplehash.h
header.
But even for this file, the change is eventually straightforward.

Other affected files have tiny changes or changes which do not incur any
hurdle.

The patch is built on top of commit
0772c152b9bd02baeca6920c3371fce95e8f13dc (Mon Nov 27 20:56:46 2017 -0500).
I will rebase with the latest version once I have completed all the initial
tests with the different possible combinations of blocks and files sizes,
both for the relations and the WAL files.

*Rationale*

Justifications are:

- we may test different combinations of file and block sizes, for the
relation and the WAL in order to have the better performances of the server.
Avoiding a compilation for each combination of values seems to make sense.

- the same binary can be used on the same host with several databases
instances/cluster, each using different values for block and file sizes.
This is what I did to test the patch. I have created about 20 different
combinations of values for the file and block sizes of the relation and WAL
files.

- Linux distributions deliver Postgresql with a binary already compiled
with the default values.
This means DBAs need to rebuild the binary for each combination of block
and file sizes, whether this is for the WAL or the relations.

- Selecting the correct values for file and block sizes is a DBA task, and
not a developer task.
For instance, when someone wants to create a Linux filesystem with a given
block size, he is not forced to accept a given value chosed by the
developer of the file system driver when this later was compiled.

- The file and block sizes should depend mostly of the physical server and
physical storage. Not on the database software itself.

Regarding the cost of using run-time configurable values for file and block
sizes of the WAL and relations, this cost is low both :

- from a developer point of view: the source code changes are spread in
many files but only a few one have significant changes.
Mainly the tidbitmap.c is concerned the change. Other changes are minor
changes.

- from a run-time point of view. The overhead is only at the start of the
database instance.
And moreover, the overhead is still very low at the start of the server,
with only a few more dynamic memory allocations.

*Test cases*

Below combinations of values have been tested so far, by creating a cluster
and filling a table with 10 to 200 millions rows.
WAL file and WAL block sizes were nopt changed so far.

rel_blck_size
--rel_blck_size=1024 --rel_file_blck=1048576 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file relation files/ 256MB wal files)
--rel_blck_size=2048 --rel_file_blck=524288 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file)
--rel_blck_size=4096 --rel_file_blck=262144 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file)
--rel_blck_size=8192 --rel_file_blck=131072 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file)
--rel_blck_size=16384 --rel_file_blck=65536 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file)
--rel_blck_size=32768 --rel_file_blck=32768 --wal_blck_size=8192
--wal_file_blck=32768 ok (1GB file)

rel_file_blck
--rel_blck_size=8192 --rel_file_blck=262144 --wal_blck_size=8192
--wal_file_blck=32768 ok (2GB files)
--rel_blck_size=8192 --rel_file_blck=524288 --wal_blck_size=8192
--wal_file_blck=32768 ok (4GB files)
--rel_blck_size=8192 --rel_file_blck=1048576 --wal_blck_size=8192
--wal_file_blck=32768 ok (8GB files)

Further tests are going to be done with different combinations of block and
file sizes for the WAL.

*To do *

Convert large object compile time block and file size to run-time parameters
Further tests with different combinations of block and file sizes.
Rebase on latest commit of master Postgresql tree.
Remove debugging code introduced by the current patch
Split current patch into smaller chunks.

*Change log*

v2
Fixed bug in simplehash.h caused by &data[i] dereferences for which data is
an array of PageTableentry.
Removed debugging code from tidbitmap.c and simplehash.h
Fixed REL_FILE_SIZE macros overflow warning caused by missing types in
pg_control_def.h macros
Tests done with above cases

v1
Initial version of the patch not yet fully tested

*diffstat*

[root@rco v2]# diffstat blkfilesizes_v2.patch
TODO | 57 ++
configure.in | 94 ---
contrib/amcheck/verify_nbtree.c | 4
contrib/bloom/blinsert.c | 14
contrib/bloom/bloom.h | 26 -
contrib/bloom/blutils.c | 6
contrib/bloom/blvacuum.c | 6
contrib/file_fdw/file_fdw.c | 6
contrib/pageinspect/brinfuncs.c | 8
contrib/pageinspect/btreefuncs.c | 6
contrib/pageinspect/rawpage.c | 12
contrib/pg_prewarm/pg_prewarm.c | 4
contrib/pg_standby/pg_standby.c | 7
contrib/pgstattuple/pgstatapprox.c | 6
contrib/pgstattuple/pgstatindex.c | 4
contrib/pgstattuple/pgstattuple.c | 10
contrib/postgres_fdw/deparse.c | 2
contrib/postgres_fdw/postgres_fdw.c | 2
param.sh | 1
src/backend/access/brin/brin_pageops.c | 4
src/backend/access/common/bufmask.c | 4
src/backend/access/common/reloptions.c | 8
src/backend/access/gin/ginbtree.c | 12
src/backend/access/gin/gindatapage.c | 18
src/backend/access/gin/ginentrypage.c | 2
src/backend/access/gin/ginfast.c | 6
src/backend/access/gin/ginget.c | 6
src/backend/access/gin/ginvacuum.c | 2
src/backend/access/gin/ginxlog.c | 4
src/backend/access/gist/gistbuild.c | 8
src/backend/access/gist/gistbuildbuffers.c | 10
src/backend/access/gist/gistscan.c | 1
src/backend/access/hash/hash.c | 7
src/backend/access/hash/hashpage.c | 4
src/backend/access/heap/README.HOT | 2
src/backend/access/heap/heapam.c | 17
src/backend/access/heap/pruneheap.c | 39 +
src/backend/access/heap/rewriteheap.c | 4
src/backend/access/heap/syncscan.c | 2
src/backend/access/heap/visibilitymap.c | 8
src/backend/access/nbtree/nbtpage.c | 2
src/backend/access/nbtree/nbtree.c | 18
src/backend/access/nbtree/nbtsearch.c | 5
src/backend/access/nbtree/nbtsort.c | 10
src/backend/access/spgist/spgdoinsert.c | 4
src/backend/access/spgist/spginsert.c | 2
src/backend/access/spgist/spgscan.c | 1
src/backend/access/spgist/spgtextproc.c | 10
src/backend/access/spgist/spgutils.c | 4
src/backend/access/transam/README | 2
src/backend/access/transam/clog.c | 10
src/backend/access/transam/commit_ts.c | 4
src/backend/access/transam/generic_xlog.c | 44 +
src/backend/access/transam/multixact.c | 12
src/backend/access/transam/slru.c | 22
src/backend/access/transam/subtrans.c | 5
src/backend/access/transam/timeline.c | 2
src/backend/access/transam/twophase.c | 2
src/backend/access/transam/xlog.c | 603
++++++++++++++----------
src/backend/access/transam/xlogarchive.c | 12
src/backend/access/transam/xlogfuncs.c | 10
src/backend/access/transam/xloginsert.c | 48 +
src/backend/access/transam/xlogreader.c | 141 +++--
src/backend/access/transam/xlogutils.c | 34 -
src/backend/bootstrap/bootstrap.c | 33 -
src/backend/commands/async.c | 15
src/backend/commands/tablecmds.c | 2
src/backend/commands/vacuumlazy.c | 4
src/backend/executor/execGrouping.c | 1
src/backend/nodes/tidbitmap.c | 135 ++++-
src/backend/optimizer/path/costsize.c | 10
src/backend/optimizer/util/plancat.c | 2
src/backend/postmaster/checkpointer.c | 4
src/backend/replication/basebackup.c | 30 -
src/backend/replication/logical/logical.c | 2
src/backend/replication/logical/reorderbuffer.c | 18
src/backend/replication/slot.c | 2
src/backend/replication/walreceiver.c | 14
src/backend/replication/walreceiverfuncs.c | 4
src/backend/replication/walsender.c | 30 -
src/backend/storage/buffer/buf_init.c | 4
src/backend/storage/buffer/bufmgr.c | 8
src/backend/storage/buffer/freelist.c | 6
src/backend/storage/buffer/localbuf.c | 6
src/backend/storage/file/buffile.c | 20
src/backend/storage/file/copydir.c | 2
src/backend/storage/freespace/README | 8
src/backend/storage/freespace/freespace.c | 36 -
src/backend/storage/freespace/indexfsm.c | 7
src/backend/storage/lmgr/predicate.c | 2
src/backend/storage/page/bufpage.c | 27 -
src/backend/storage/smgr/md.c | 104 ++--
src/backend/tcop/postgres.c | 2
src/backend/utils/adt/selfuncs.c | 2
src/backend/utils/init/globals.c | 20
src/backend/utils/init/miscinit.c | 6
src/backend/utils/init/postinit.c | 23
src/backend/utils/misc/guc.c | 175 ++++--
src/backend/utils/misc/pg_controldata.c | 4
src/backend/utils/sort/logtape.c | 49 -
src/backend/utils/sort/tuplesort.c | 6
src/bin/initdb/initdb.c | 305 +++++++++---
src/bin/pg_basebackup/pg_basebackup.c | 18
src/bin/pg_basebackup/pg_receivewal.c | 26 -
src/bin/pg_basebackup/pg_recvlogical.c | 11
src/bin/pg_basebackup/receivelog.c | 28 -
src/bin/pg_basebackup/streamutil.c | 76 +--
src/bin/pg_basebackup/streamutil.h | 6
src/bin/pg_basebackup/walmethods.c | 14
src/bin/pg_controldata/pg_controldata.c | 16
src/bin/pg_resetwal/pg_resetwal.c | 125 +++-
src/bin/pg_rewind/copy_fetch.c | 9
src/bin/pg_rewind/filemap.c | 11
src/bin/pg_rewind/libpq_fetch.c | 7
src/bin/pg_rewind/parsexlog.c | 26 -
src/bin/pg_rewind/pg_rewind.c | 33 -
src/bin/pg_test_fsync/pg_test_fsync.c | 71 +-
src/bin/pg_upgrade/controldata.c | 7
src/bin/pg_upgrade/file.c | 15
src/bin/pg_upgrade/pg_upgrade.c | 3
src/bin/pg_waldump/pg_waldump.c | 69 +-
src/common/controldata_utils.c | 98 +++
src/include/access/brin_page.h | 2
src/include/access/ginblock.h | 6
src/include/access/gist_private.h | 20
src/include/access/hash.h | 5
src/include/access/htup_details.h | 11
src/include/access/itup.h | 2
src/include/access/nbtree.h | 10
src/include/access/relscan.h | 7
src/include/access/slru.h | 2
src/include/access/spgist_private.h | 22
src/include/access/tuptoaster.h | 2
src/include/access/xlog_internal.h | 8
src/include/access/xlogreader.h | 9
src/include/access/xlogrecord.h | 6
src/include/common/controldata_utils.h | 4
src/include/lib/simplehash.h | 65 +-
src/include/nodes/execnodes.h | 1
src/include/nodes/nodes.h | 1
src/include/pg_config.h.in | 31 -
src/include/pg_config_manual.h | 8
src/include/pg_control_def.h | 44 +
src/include/storage/bufmgr.h | 4
src/include/storage/bufpage.h | 5
src/include/storage/checksum_impl.h | 2
src/include/storage/fsm_internals.h | 5
src/include/storage/large_object.h | 4
src/include/storage/md.h | 12
src/include/storage/off.h | 2
src/include/utils/rel.h | 4
src/interfaces/libpq/libpq-int.h | 5
152 files changed, 2257 insertions(+), 1299 deletions(-)
[root@rco v2]#

Attachments:

blkfilesizes_v2.patchtext/x-patch; charset=US-ASCII; name=blkfilesizes_v2.patchDownload
diff --git a/TODO b/TODO
new file mode 100644
index 0000000000..8ed682d3b1
--- /dev/null
+++ b/TODO
@@ -0,0 +1,57 @@
+TODO
+====
+
+Use xlog instead of wal to have a consistent naming throughout the whole source code.
+Get rid of wal_segment_size (the only dynamic parameter)
+
+Create macros for commonly used computations
+Gather common code together
+Move check_file_block_sizes() from initdb to src/common
+
+Dynamic value for LOBLKSIZE
+
+Test with rel_file_size of 2, 4, 8, 16, 32, 64GB
+
+DONE
+====
+
+Added defs in initdb.c
+Added defs in pg_control_def.h
+Added block and file size in control file data structure
+
+Renaming
+
+BLCKSZ -> rel_blck_size (bytes)
+RELSEG_SIZE -> rel_file_size
+XLOG_BLCKSZ -> wal_blck_size -> xlog_blck_size
+XLOG_SEG_SIZE -> wal_file_size -> xlog_file_size -> xlog_file_blck
+blcksz -> rel_blck_size
+relseg_size -> rel_file_size
+xlog_blcksz -> wal_blck_size
+xlog_seg_size -> wal_file_size
+wal_segment_size -> wal_file_size
+
+
+JUSTIFICATION
+=============
+
+Various checks spread all around the source code
+
+src/backend/access/transam/xlog.c
+
+max_wal_size_mb = 1024 (MB)
+min_wal_size_mb = 80 (MB)
+
+Have a consistent naming throughout the whole source code
+wal -> xlog
+
+Various names used for the same parameter
+xlp_xlog_blcksz versus XLOG_BLCKSZ
+xlog versus wal
+blcksz versus BLCKSZ
+file sized in bytes of blocks depending on WAL or relation, control file or guc.c
+
+
+
+
+
diff --git a/configure.in b/configure.in
index d9c4a50b4b..2476ab4cca 100644
--- a/configure.in
+++ b/configure.in
@@ -17,7 +17,7 @@ dnl Read the Autoconf manual for details.
 dnl
 m4_pattern_forbid(^PGAC_)dnl to catch undefined macros
 
-AC_INIT([PostgreSQL], [11devel], [pgsql-bugs@postgresql.org])
+AC_INIT([PostgreSQL], [11devel-blksize], [pgsql-bugs@postgresql.org])
 
 m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.69], [], [m4_fatal([Autoconf version 2.69 is required.
 Untested combinations of 'autoconf' and PostgreSQL versions are not
@@ -251,98 +251,6 @@ PGAC_ARG_BOOL(enable, tap-tests, no,
               [enable TAP tests (requires Perl and IPC::Run)])
 AC_SUBST(enable_tap_tests)
 
-#
-# Block size
-#
-AC_MSG_CHECKING([for block size])
-PGAC_ARG_REQ(with, blocksize, [BLOCKSIZE], [set table block size in kB [8]],
-             [blocksize=$withval],
-             [blocksize=8])
-case ${blocksize} in
-  1) BLCKSZ=1024;;
-  2) BLCKSZ=2048;;
-  4) BLCKSZ=4096;;
-  8) BLCKSZ=8192;;
- 16) BLCKSZ=16384;;
- 32) BLCKSZ=32768;;
-  *) AC_MSG_ERROR([Invalid block size. Allowed values are 1,2,4,8,16,32.])
-esac
-AC_MSG_RESULT([${blocksize}kB])
-
-AC_DEFINE_UNQUOTED([BLCKSZ], ${BLCKSZ}, [
- Size of a disk block --- this also limits the size of a tuple.  You
- can set it bigger if you need bigger tuples (although TOAST should
- reduce the need to have large tuples, since fields can be spread
- across multiple tuples).
-
- BLCKSZ must be a power of 2.  The maximum possible value of BLCKSZ
- is currently 2^15 (32768).  This is determined by the 15-bit widths
- of the lp_off and lp_len fields in ItemIdData (see
- include/storage/itemid.h).
-
- Changing BLCKSZ requires an initdb.
-])
-
-#
-# Relation segment size
-#
-AC_MSG_CHECKING([for segment size])
-PGAC_ARG_REQ(with, segsize, [SEGSIZE], [set table segment size in GB [1]],
-             [segsize=$withval],
-             [segsize=1])
-# this expression is set up to avoid unnecessary integer overflow
-# blocksize is already guaranteed to be a factor of 1024
-RELSEG_SIZE=`expr '(' 1024 / ${blocksize} ')' '*' ${segsize} '*' 1024`
-test $? -eq 0 || exit 1
-AC_MSG_RESULT([${segsize}GB])
-
-AC_DEFINE_UNQUOTED([RELSEG_SIZE], ${RELSEG_SIZE}, [
- RELSEG_SIZE is the maximum number of blocks allowed in one disk file.
- Thus, the maximum size of a single file is RELSEG_SIZE * BLCKSZ;
- relations bigger than that are divided into multiple files.
-
- RELSEG_SIZE * BLCKSZ must be less than your OS' limit on file size.
- This is often 2 GB or 4GB in a 32-bit operating system, unless you
- have large file support enabled.  By default, we make the limit 1 GB
- to avoid any possible integer-overflow problems within the OS.
- A limit smaller than necessary only means we divide a large
- relation into more chunks than necessary, so it seems best to err
- in the direction of a small limit.
-
- A power-of-2 value is recommended to save a few cycles in md.c,
- but is not absolutely required.
-
- Changing RELSEG_SIZE requires an initdb.
-])
-
-#
-# WAL block size
-#
-AC_MSG_CHECKING([for WAL block size])
-PGAC_ARG_REQ(with, wal-blocksize, [BLOCKSIZE], [set WAL block size in kB [8]],
-             [wal_blocksize=$withval],
-             [wal_blocksize=8])
-case ${wal_blocksize} in
-  1) XLOG_BLCKSZ=1024;;
-  2) XLOG_BLCKSZ=2048;;
-  4) XLOG_BLCKSZ=4096;;
-  8) XLOG_BLCKSZ=8192;;
- 16) XLOG_BLCKSZ=16384;;
- 32) XLOG_BLCKSZ=32768;;
- 64) XLOG_BLCKSZ=65536;;
-  *) AC_MSG_ERROR([Invalid WAL block size. Allowed values are 1,2,4,8,16,32,64.])
-esac
-AC_MSG_RESULT([${wal_blocksize}kB])
-
-AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [
- Size of a WAL file block.  This need have no particular relation to BLCKSZ.
- XLOG_BLCKSZ must be a power of 2, and if your system supports O_DIRECT I/O,
- XLOG_BLCKSZ must be a multiple of the alignment requirement for direct-I/O
- buffers, else direct I/O may fail.
-
- Changing XLOG_BLCKSZ requires an initdb.
-])
-
 #
 # C compiler
 #
diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
index 868c14ec8f..8e4bd6314b 100644
--- a/contrib/amcheck/verify_nbtree.c
+++ b/contrib/amcheck/verify_nbtree.c
@@ -1173,7 +1173,7 @@ palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum)
 	Page		page;
 	BTPageOpaque opaque;
 
-	page = palloc(BLCKSZ);
+	page = palloc(rel_blck_size);
 
 	/*
 	 * We copy the page into local storage to avoid holding pin on the buffer
@@ -1190,7 +1190,7 @@ palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum)
 	_bt_checkpage(state->rel, buffer);
 
 	/* Only use copy of page in palloc()'d memory */
-	memcpy(page, BufferGetPage(buffer), BLCKSZ);
+	memcpy(page, BufferGetPage(buffer), rel_blck_size);
 	UnlockReleaseBuffer(buffer);
 
 	opaque = (BTPageOpaque) PageGetSpecialPointer(page);
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index 1fcb281508..c947031a3f 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -33,10 +33,9 @@ PG_MODULE_MAGIC;
 typedef struct
 {
 	BloomState	blstate;		/* bloom index state */
-	MemoryContext tmpCtx;		/* temporary memory context reset after each
-								 * tuple */
-	char		data[BLCKSZ];	/* cached page */
+	MemoryContext tmpCtx;		/* temporary memory context reset after each tuple */
 	int64		count;			/* number of tuples in cached page */
+	char*		data;	/* cached page */
 } BloomBuildState;
 
 /*
@@ -51,7 +50,7 @@ flushCachedPage(Relation index, BloomBuildState *buildstate)
 
 	state = GenericXLogStart(index);
 	page = GenericXLogRegisterBuffer(state, buffer, GENERIC_XLOG_FULL_IMAGE);
-	memcpy(page, buildstate->data, BLCKSZ);
+	memcpy(page, buildstate->data, rel_blck_size);
 	GenericXLogFinish(state);
 	UnlockReleaseBuffer(buffer);
 }
@@ -62,7 +61,7 @@ flushCachedPage(Relation index, BloomBuildState *buildstate)
 static void
 initCachedPage(BloomBuildState *buildstate)
 {
-	memset(buildstate->data, 0, BLCKSZ);
+	memset(buildstate->data, 0, rel_blck_size);
 	BloomInitPage(buildstate->data, 0);
 	buildstate->count = 0;
 }
@@ -126,7 +125,9 @@ blbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 	BloomInitMetapage(index);
 
 	/* Initialize the bloom build state */
+	buildstate.data = palloc(rel_blck_size);
 	memset(&buildstate, 0, sizeof(buildstate));
+	memset(buildstate.data, 0, rel_blck_size);
 	initBloomState(&buildstate.blstate, index);
 	buildstate.tmpCtx = AllocSetContextCreate(CurrentMemoryContext,
 											  "Bloom build temporary context",
@@ -145,6 +146,7 @@ blbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 		flushCachedPage(index, &buildstate);
 
 	MemoryContextDelete(buildstate.tmpCtx);
+	pfree(buildstate.data);
 
 	result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
 	result->heap_tuples = result->index_tuples = reltuples;
@@ -161,7 +163,7 @@ blbuildempty(Relation index)
 	Page		metapage;
 
 	/* Construct metapage. */
-	metapage = (Page) palloc(BLCKSZ);
+	metapage = (Page) palloc(rel_blck_size);
 	BloomFillMetapage(index, metapage);
 
 	/*
diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h
index f3df1af781..6649ce3b1e 100644
--- a/contrib/bloom/bloom.h
+++ b/contrib/bloom/bloom.h
@@ -100,22 +100,11 @@ typedef uint16 BloomSignatureWord;
 typedef struct BloomOptions
 {
 	int32		vl_len_;		/* varlena header (do not touch directly!) */
-	int			bloomLength;	/* length of signature in words (not bits!) */
-	int			bitSize[INDEX_MAX_KEYS];	/* # of bits generated for each
+	int		bloomLength;	/* length of signature in words (not bits!) */
+	int		bitSize[INDEX_MAX_KEYS];	/* # of bits generated for each
 											 * index key */
 } BloomOptions;
 
-/*
- * FreeBlockNumberArray - array of block numbers sized so that metadata fill
- * all space in metapage.
- */
-typedef BlockNumber FreeBlockNumberArray[
-										 MAXALIGN_DOWN(
-													   BLCKSZ - SizeOfPageHeaderData - MAXALIGN(sizeof(BloomPageOpaqueData))
-													   - MAXALIGN(sizeof(uint16) * 2 + sizeof(uint32) + sizeof(BloomOptions))
-													   ) / sizeof(BlockNumber)
-];
-
 /* Metadata of bloom index */
 typedef struct BloomMetaPageData
 {
@@ -123,15 +112,20 @@ typedef struct BloomMetaPageData
 	uint16		nStart;
 	uint16		nEnd;
 	BloomOptions opts;
-	FreeBlockNumberArray notFullPage;
+	BlockNumber notFullPage[FLEXIBLE_ARRAY_MEMBER];
 } BloomMetaPageData;
 
 /* Magic number to distinguish bloom pages among anothers */
 #define BLOOM_MAGICK_NUMBER (0xDBAC0DED)
 
 /* Number of blocks numbers fit in BloomMetaPageData */
-#define BloomMetaBlockN		(sizeof(FreeBlockNumberArray) / sizeof(BlockNumber))
+#define BloomMetaBlockN									\
+	(MAXALIGN_DOWN(rel_blck_size - SizeOfPageHeaderData 				\
+		- MAXALIGN(sizeof(BloomPageOpaqueData))					\
+		- MAXALIGN(sizeof(uint16) * 2 + sizeof(uint32) + sizeof(BloomOptions)))	\
+                / sizeof(BlockNumber))
 
+#define SizeOfBloomMetaPageData	(offsetof(BloomMetaPageData, notFullPage) + sizeof(BlockNumber) * BloomMetaBlockN)
 #define BloomPageGetMeta(page)	((BloomMetaPageData *) PageGetContents(page))
 
 typedef struct BloomState
@@ -148,7 +142,7 @@ typedef struct BloomState
 } BloomState;
 
 #define BloomPageGetFreeSpace(state, page) \
-	(BLCKSZ - MAXALIGN(SizeOfPageHeaderData) \
+	(rel_blck_size - MAXALIGN(SizeOfPageHeaderData) \
 		- BloomPageGetMaxOffset(page) * (state)->sizeOfBloomTuple \
 		- MAXALIGN(sizeof(BloomPageOpaqueData)))
 
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67e0a..ee6e4f608a 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -399,7 +399,7 @@ BloomInitPage(Page page, uint16 flags)
 {
 	BloomPageOpaque opaque;
 
-	PageInit(page, BLCKSZ, sizeof(BloomPageOpaqueData));
+	PageInit(page, rel_blck_size, sizeof(BloomPageOpaqueData));
 
 	opaque = BloomPageGetOpaque(page);
 	memset(opaque, 0, sizeof(BloomPageOpaqueData));
@@ -430,10 +430,10 @@ BloomFillMetapage(Relation index, Page metaPage)
 	 */
 	BloomInitPage(metaPage, BLOOM_META);
 	metadata = BloomPageGetMeta(metaPage);
-	memset(metadata, 0, sizeof(BloomMetaPageData));
+	memset(metadata, 0, SizeOfBloomMetaPageData);
 	metadata->magickNumber = BLOOM_MAGICK_NUMBER;
 	metadata->opts = *opts;
-	((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData);
+	((PageHeader) metaPage)->pd_lower += SizeOfBloomMetaPageData;
 
 	/* If this fails, probably FreeBlockNumberArray size calc is wrong: */
 	Assert(((PageHeader) metaPage)->pd_lower <= ((PageHeader) metaPage)->pd_upper);
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index b0e44330ff..77719b97dc 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -37,7 +37,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	Relation	index = info->index;
 	BlockNumber blkno,
 				npages;
-	FreeBlockNumberArray notFullPage;
+	BlockNumber* notFullPage;
 	int			countPage = 0;
 	BloomState	state;
 	Buffer		buffer;
@@ -48,6 +48,8 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	if (stats == NULL)
 		stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
 
+	notFullPage = (BlockNumber*) palloc(sizeof(BlockNumber) * BloomMetaBlockN);
+
 	initBloomState(&state, index);
 
 	/*
@@ -157,6 +159,8 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	GenericXLogFinish(gxlogState);
 	UnlockReleaseBuffer(buffer);
 
+	pfree(notFullPage);
+
 	return stats;
 }
 
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 370cc365d6..53141090ff 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -798,7 +798,7 @@ fileAnalyzeForeignTable(Relation relation,
 	 * Convert size to pages.  Must return at least 1 so that we can tell
 	 * later on that pg_class.relpages is not default.
 	 */
-	*totalpages = (stat_buf.st_size + (BLCKSZ - 1)) / BLCKSZ;
+	*totalpages = (stat_buf.st_size + (rel_blck_size - 1)) / rel_blck_size;
 	if (*totalpages < 1)
 		*totalpages = 1;
 
@@ -960,12 +960,12 @@ estimate_size(PlannerInfo *root, RelOptInfo *baserel,
 	 * back to the default if using a program as the input.
 	 */
 	if (fdw_private->is_program || stat(fdw_private->filename, &stat_buf) < 0)
-		stat_buf.st_size = 10 * BLCKSZ;
+		stat_buf.st_size = 10 * rel_blck_size;
 
 	/*
 	 * Convert size to pages for use in I/O cost estimate later.
 	 */
-	pages = (stat_buf.st_size + (BLCKSZ - 1)) / BLCKSZ;
+	pages = (stat_buf.st_size + (rel_blck_size - 1)) / rel_blck_size;
 	if (pages < 1)
 		pages = 1;
 	fdw_private->pages = pages;
diff --git a/contrib/pageinspect/brinfuncs.c b/contrib/pageinspect/brinfuncs.c
index 13da7616e7..8f3a7ffb6f 100644
--- a/contrib/pageinspect/brinfuncs.c
+++ b/contrib/pageinspect/brinfuncs.c
@@ -58,12 +58,12 @@ brin_page_type(PG_FUNCTION_ARGS)
 
 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
 
-	if (raw_page_size != BLCKSZ)
+	if (raw_page_size != rel_blck_size)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 				 errmsg("input page too small"),
 				 errdetail("Expected size %d, got %d",
-						   BLCKSZ, raw_page_size)));
+						   rel_blck_size, raw_page_size)));
 
 	switch (BrinPageType(page))
 	{
@@ -96,12 +96,12 @@ verify_brin_page(bytea *raw_page, uint16 type, const char *strtype)
 
 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
 
-	if (raw_page_size != BLCKSZ)
+	if (raw_page_size != rel_blck_size)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 				 errmsg("input page too small"),
 				 errdetail("Expected size %d, got %d",
-						   BLCKSZ, raw_page_size)));
+						   rel_blck_size, raw_page_size)));
 
 	page = VARDATA(raw_page);
 
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 4f834676ea..ce61a5eae3 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -98,7 +98,7 @@ GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
 
 	stat->blkno = blkno;
 
-	stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
+	stat->max_avail = rel_blck_size - (rel_blck_size - phdr->pd_special + SizeOfPageHeaderData);
 
 	stat->dead_items = stat->live_items = 0;
 
@@ -365,8 +365,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 
 		uargs = palloc(sizeof(struct user_args));
 
-		uargs->page = palloc(BLCKSZ);
-		memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
+		uargs->page = palloc(rel_blck_size);
+		memcpy(uargs->page, BufferGetPage(buffer), rel_blck_size);
 
 		UnlockReleaseBuffer(buffer);
 		relation_close(rel, AccessShareLock);
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
index 25af22f453..d9c64e1274 100644
--- a/contrib/pageinspect/rawpage.c
+++ b/contrib/pageinspect/rawpage.c
@@ -147,8 +147,8 @@ get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
 						blkno, RelationGetRelationName(rel))));
 
 	/* Initialize buffer to copy to */
-	raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
-	SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
+	raw_page = (bytea *) palloc(rel_blck_size + VARHDRSZ);
+	SET_VARSIZE(raw_page, rel_blck_size + VARHDRSZ);
 	raw_page_data = VARDATA(raw_page);
 
 	/* Take a verbatim copy of the page */
@@ -156,7 +156,7 @@ get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
 	buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
 	LockBuffer(buf, BUFFER_LOCK_SHARE);
 
-	memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
+	memcpy(raw_page_data, BufferGetPage(buf), rel_blck_size);
 
 	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
 	ReleaseBuffer(buf);
@@ -187,12 +187,12 @@ get_page_from_raw(bytea *raw_page)
 
 	raw_page_size = VARSIZE_ANY_EXHDR(raw_page);
 
-	if (raw_page_size != BLCKSZ)
+	if (raw_page_size != rel_blck_size)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 				 errmsg("invalid page size"),
 				 errdetail("Expected %d bytes, got %d.",
-						   BLCKSZ, raw_page_size)));
+						   rel_blck_size, raw_page_size)));
 
 	page = palloc(raw_page_size);
 
@@ -308,7 +308,7 @@ page_checksum(PG_FUNCTION_ARGS)
 	/*
 	 * Check that the supplied page is of the right size.
 	 */
-	if (raw_page_size != BLCKSZ)
+	if (raw_page_size != rel_blck_size)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 				 errmsg("incorrect size of input page (%d bytes)", raw_page_size)));
diff --git a/contrib/pg_prewarm/pg_prewarm.c b/contrib/pg_prewarm/pg_prewarm.c
index fec62b1a54..8d81419e42 100644
--- a/contrib/pg_prewarm/pg_prewarm.c
+++ b/contrib/pg_prewarm/pg_prewarm.c
@@ -37,7 +37,7 @@ typedef enum
 	PREWARM_BUFFER
 } PrewarmType;
 
-static char blockbuffer[BLCKSZ];
+static char* blockbuffer;
 
 /*
  * pg_prewarm(regclass, mode text, fork text,
@@ -176,12 +176,14 @@ pg_prewarm(PG_FUNCTION_ARGS)
 		 * buffers.  This is more portable than prefetch mode (it works
 		 * everywhere) and is synchronous.
 		 */
+		blockbuffer = palloc(rel_blck_size);
 		for (block = first_block; block <= last_block; ++block)
 		{
 			CHECK_FOR_INTERRUPTS();
 			smgrread(rel->rd_smgr, forkNumber, block, blockbuffer);
 			++blocks_done;
 		}
+		pfree(blockbuffer);
 	}
 	else if (ptype == PREWARM_BUFFER)
 	{
diff --git a/contrib/pg_standby/pg_standby.c b/contrib/pg_standby/pg_standby.c
index cb785971a9..6d15e187e4 100644
--- a/contrib/pg_standby/pg_standby.c
+++ b/contrib/pg_standby/pg_standby.c
@@ -33,6 +33,7 @@
 #include "pg_getopt.h"
 
 #include "access/xlog_internal.h"
+#include "storage/md.h"
 
 const char *progname;
 
@@ -105,6 +106,8 @@ struct stat stat_buf;
 static bool SetWALFileNameForCleanup(void);
 static bool SetWALSegSize(void);
 
+unsigned int wal_blck_size = 0;
+
 
 /* =====================================================================
  *
@@ -410,7 +413,7 @@ SetWALSegSize(void)
 	int			fd;
 
 	/* malloc this buffer to ensure sufficient alignment: */
-	char	   *buf = (char *) pg_malloc(XLOG_BLCKSZ);
+	char	   *buf = (char *) pg_malloc(wal_blck_size);
 
 	Assert(WalSegSz == -1);
 
@@ -423,7 +426,7 @@ SetWALSegSize(void)
 	}
 
 	errno = 0;
-	if (read(fd, buf, XLOG_BLCKSZ) == XLOG_BLCKSZ)
+	if (read(fd, buf, wal_blck_size) == wal_blck_size)
 	{
 		XLogLongPageHeader longhdr = (XLogLongPageHeader) buf;
 
diff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c
index 5bf06138a5..f8c3896283 100644
--- a/contrib/pgstattuple/pgstatapprox.c
+++ b/contrib/pgstattuple/pgstatapprox.c
@@ -93,7 +93,7 @@ statapprox_heap(Relation rel, output_type *stat)
 		if (VM_ALL_VISIBLE(rel, blkno, &vmbuffer))
 		{
 			freespace = GetRecordedFreeSpace(rel, blkno);
-			stat->tuple_len += BLCKSZ - freespace;
+			stat->tuple_len += rel_blck_size - freespace;
 			stat->free_space += freespace;
 			continue;
 		}
@@ -112,7 +112,7 @@ statapprox_heap(Relation rel, output_type *stat)
 		if (!PageIsNew(page))
 			stat->free_space += PageGetHeapFreeSpace(page);
 		else
-			stat->free_space += BLCKSZ - SizeOfPageHeaderData;
+			stat->free_space += rel_blck_size - SizeOfPageHeaderData;
 
 		if (PageIsNew(page) || PageIsEmpty(page))
 		{
@@ -182,7 +182,7 @@ statapprox_heap(Relation rel, output_type *stat)
 		UnlockReleaseBuffer(buf);
 	}
 
-	stat->table_len = (uint64) nblocks * BLCKSZ;
+	stat->table_len = (uint64) nblocks * rel_blck_size;
 
 	stat->tuple_count = vac_estimate_reltuples(rel, false, nblocks, scanned,
 											   stat->tuple_count + misc_count);
diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c
index 75317b96a2..5c2bcf8d6c 100644
--- a/contrib/pgstattuple/pgstatindex.c
+++ b/contrib/pgstattuple/pgstatindex.c
@@ -292,7 +292,7 @@ pgstatindex_impl(Relation rel, FunctionCallInfo fcinfo)
 		{
 			int			max_avail;
 
-			max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
+			max_avail = rel_blck_size - (rel_blck_size - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
 			indexStat.max_avail += max_avail;
 			indexStat.free_space += PageGetFreeSpace(page);
 
@@ -337,7 +337,7 @@ pgstatindex_impl(Relation rel, FunctionCallInfo fcinfo)
 								indexStat.leaf_pages +
 								indexStat.internal_pages +
 								indexStat.deleted_pages +
-								indexStat.empty_pages) * BLCKSZ);
+								indexStat.empty_pages) * rel_blck_size);
 		values[j++] = psprintf("%u", indexStat.root_blkno);
 		values[j++] = psprintf(INT64_FORMAT, indexStat.internal_pages);
 		values[j++] = psprintf(INT64_FORMAT, indexStat.leaf_pages);
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 7ca1bb24d2..c085d8a067 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -386,7 +386,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 	heap_endscan(scan);
 	relation_close(rel, AccessShareLock);
 
-	stat.table_len = (uint64) nblocks * BLCKSZ;
+	stat.table_len = (uint64) nblocks * rel_blck_size;
 
 	return build_pgstattuple_type(&stat, fcinfo);
 }
@@ -409,7 +409,7 @@ pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno,
 	if (PageIsNew(page))
 	{
 		/* fully empty page */
-		stat->free_space += BLCKSZ;
+		stat->free_space += rel_blck_size;
 	}
 	else
 	{
@@ -419,7 +419,7 @@ pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno,
 		if (P_IGNORE(opaque))
 		{
 			/* recyclable page */
-			stat->free_space += BLCKSZ;
+			stat->free_space += rel_blck_size;
 		}
 		else if (P_ISLEAF(opaque))
 		{
@@ -456,7 +456,7 @@ pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno,
 		switch (opaque->hasho_flag & LH_PAGE_TYPE)
 		{
 			case LH_UNUSED_PAGE:
-				stat->free_space += BLCKSZ;
+				stat->free_space += rel_blck_size;
 				break;
 			case LH_BUCKET_PAGE:
 			case LH_OVERFLOW_PAGE:
@@ -531,7 +531,7 @@ pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
 		/* Quit if we've scanned the whole relation */
 		if (blkno >= nblocks)
 		{
-			stat.table_len = (uint64) nblocks * BLCKSZ;
+			stat.table_len = (uint64) nblocks * rel_blck_size;
 
 			break;
 		}
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 0876589fe5..9ce67ce32d 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -1825,7 +1825,7 @@ deparseAnalyzeSizeSql(StringInfo buf, Relation rel)
 
 	appendStringInfoString(buf, "SELECT pg_catalog.pg_relation_size(");
 	deparseStringLiteral(buf, relname.data);
-	appendStringInfo(buf, "::pg_catalog.regclass) / %d", BLCKSZ);
+	appendStringInfo(buf, "::pg_catalog.regclass) / %d", rel_blck_size);
 }
 
 /*
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index fb65e2eb20..58ce668c5d 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -620,7 +620,7 @@ postgresGetForeignRelSize(PlannerInfo *root,
 		{
 			baserel->pages = 10;
 			baserel->tuples =
-				(10 * BLCKSZ) / (baserel->reltarget->width +
+				(10 * rel_blck_size) / (baserel->reltarget->width +
 								 MAXALIGN(SizeofHeapTupleHeader));
 		}
 
diff --git a/param.sh b/param.sh
new file mode 100755
index 0000000000..09f48ef89b
--- /dev/null
+++ b/param.sh
@@ -0,0 +1 @@
+egrep -rn 'BLCKSZ|RELSEG_SIZE|XLOG_BLCKSZ|XLOG_SEG_SIZE|wal_segsize|rel_blck_size|rel_file_blck|wal_blck_size|wal_file_blck|blcksz|relseg_size|xlog_blcksz|xlog_seg_size|wal_segment_size' $1
diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index 09db5c6f8f..b119e10af5 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -28,7 +28,7 @@
  * a single item per page, unlike other index AMs.
  */
 #define BrinMaxItemSize \
-	MAXALIGN_DOWN(BLCKSZ - \
+	MAXALIGN_DOWN(rel_blck_size - \
 				  (MAXALIGN(SizeOfPageHeaderData + \
 							sizeof(ItemIdData)) + \
 				   MAXALIGN(sizeof(BrinSpecialSpace))))
@@ -470,7 +470,7 @@ brin_doinsert(Relation idxrel, BlockNumber pagesPerRange,
 void
 brin_page_init(Page page, uint16 type)
 {
-	PageInit(page, BLCKSZ, sizeof(BrinSpecialSpace));
+	PageInit(page, rel_blck_size, sizeof(BrinSpecialSpace));
 
 	BrinPageType(page) = type;
 }
diff --git a/src/backend/access/common/bufmask.c b/src/backend/access/common/bufmask.c
index d880aef7ba..9e9007e4a8 100644
--- a/src/backend/access/common/bufmask.c
+++ b/src/backend/access/common/bufmask.c
@@ -76,7 +76,7 @@ mask_unused_space(Page page)
 
 	/* Sanity check */
 	if (pd_lower > pd_upper || pd_special < pd_upper ||
-		pd_lower < SizeOfPageHeaderData || pd_special > BLCKSZ)
+		pd_lower < SizeOfPageHeaderData || pd_special > rel_blck_size)
 	{
 		elog(ERROR, "invalid page pd_lower %u pd_upper %u pd_special %u\n",
 			 pd_lower, pd_upper, pd_special);
@@ -120,7 +120,7 @@ mask_page_content(Page page)
 {
 	/* Mask Page Content */
 	memset(page + SizeOfPageHeaderData, MASK_MARKER,
-		   BLCKSZ - SizeOfPageHeaderData);
+		   rel_blck_size - SizeOfPageHeaderData);
 
 	/* Mask pd_lower and pd_upper */
 	memset(&((PageHeader) page)->pd_lower, MASK_MARKER,
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index aa9c0f1bb9..e25e4aca51 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -298,7 +298,7 @@ static relopt_int intRelOpts[] =
 			RELOPT_KIND_HEAP,
 			ShareUpdateExclusiveLock
 		},
-		TOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN
+		-1, 128, -1	// default and max values are set during initialization
 	},
 	{
 		{
@@ -468,6 +468,12 @@ initialize_reloptions(void)
 	{
 		Assert(DoLockModesConflict(intRelOpts[i].gen.lockmode,
 								   intRelOpts[i].gen.lockmode));
+
+		if (strcmp(intRelOpts[i].gen.name, "toast_tuple_target") == 0) {
+			intRelOpts[i].default_val = TOAST_TUPLE_TARGET;
+			intRelOpts[i].max = TOAST_TUPLE_TARGET_MAIN;
+		}
+
 		j++;
 	}
 	for (i = 0; realRelOpts[i].gen.name; i++)
diff --git a/src/backend/access/gin/ginbtree.c b/src/backend/access/gin/ginbtree.c
index 1b920facc2..25dcad7b9c 100644
--- a/src/backend/access/gin/ginbtree.c
+++ b/src/backend/access/gin/ginbtree.c
@@ -510,7 +510,7 @@ ginPlaceToPage(GinBtree btree, GinBtreeStack *stack,
 			 * critical section yet.)
 			 */
 			newrootpg = PageGetTempPage(newrpage);
-			GinInitPage(newrootpg, GinPageGetOpaque(newlpage)->flags & ~(GIN_LEAF | GIN_COMPRESSED), BLCKSZ);
+			GinInitPage(newrootpg, GinPageGetOpaque(newlpage)->flags & ~(GIN_LEAF | GIN_COMPRESSED), rel_blck_size);
 
 			btree->fillRoot(btree, newrootpg,
 							BufferGetBlockNumber(lbuffer), newlpage,
@@ -547,15 +547,15 @@ ginPlaceToPage(GinBtree btree, GinBtreeStack *stack,
 		{
 			/* Splitting the root, three pages to update */
 			MarkBufferDirty(lbuffer);
-			memcpy(page, newrootpg, BLCKSZ);
-			memcpy(BufferGetPage(lbuffer), newlpage, BLCKSZ);
-			memcpy(BufferGetPage(rbuffer), newrpage, BLCKSZ);
+			memcpy(page, newrootpg, rel_blck_size);
+			memcpy(BufferGetPage(lbuffer), newlpage, rel_blck_size);
+			memcpy(BufferGetPage(rbuffer), newrpage, rel_blck_size);
 		}
 		else
 		{
 			/* Normal split, only two pages to update */
-			memcpy(page, newlpage, BLCKSZ);
-			memcpy(BufferGetPage(rbuffer), newrpage, BLCKSZ);
+			memcpy(page, newlpage, rel_blck_size);
+			memcpy(BufferGetPage(rbuffer), newrpage, rel_blck_size);
 		}
 
 		/* We also clear childbuf's INCOMPLETE_SPLIT flag, if passed */
diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c
index 9c6cba4825..4ad88fe7ed 100644
--- a/src/backend/access/gin/gindatapage.c
+++ b/src/backend/access/gin/gindatapage.c
@@ -651,7 +651,7 @@ dataBeginPlaceToPageLeaf(GinBtree btree, Buffer buf, GinBtreeStack *stack,
 						break;
 					if (append)
 					{
-						if ((leaf->lsize - segsize) < (BLCKSZ * 3) / 4)
+						if ((leaf->lsize - segsize) < (rel_blck_size * 3) / 4)
 							break;
 					}
 
@@ -677,8 +677,8 @@ dataBeginPlaceToPageLeaf(GinBtree btree, Buffer buf, GinBtreeStack *stack,
 		/*
 		 * Now allocate a couple of temporary page images, and fill them.
 		 */
-		*newlpage = palloc(BLCKSZ);
-		*newrpage = palloc(BLCKSZ);
+		*newlpage = palloc(rel_blck_size);
+		*newrpage = palloc(rel_blck_size);
 
 		dataPlaceToPageLeafSplit(leaf, lbound, rbound,
 								 *newlpage, *newrpage);
@@ -883,7 +883,7 @@ computeLeafRecompressWALData(disassembledLeaf *leaf)
 
 	walbufbegin =
 		palloc(sizeof(ginxlogRecompressDataLeaf) +
-			   BLCKSZ +			/* max size needed to hold the segment data */
+			   rel_blck_size +			/* max size needed to hold the segment data */
 			   nmodified * 2	/* (segno + action) per action */
 		);
 	walbufend = walbufbegin;
@@ -1037,8 +1037,8 @@ dataPlaceToPageLeafSplit(disassembledLeaf *leaf,
 	leafSegmentInfo *seginfo;
 
 	/* Initialize temporary pages to hold the new left and right pages */
-	GinInitPage(lpage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, BLCKSZ);
-	GinInitPage(rpage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, BLCKSZ);
+	GinInitPage(lpage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, rel_blck_size);
+	GinInitPage(rpage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, rel_blck_size);
 
 	/*
 	 * Copy the segments that go to the left page.
@@ -1255,7 +1255,7 @@ dataSplitPageInternal(GinBtree btree, Buffer origbuf,
 	Page		lpage;
 	Page		rpage;
 	OffsetNumber separator;
-	PostingItem allitems[(BLCKSZ / sizeof(PostingItem)) + 1];
+	PostingItem allitems[(rel_blck_size / sizeof(PostingItem)) + 1];
 
 	lpage = PageGetTempPage(oldpage);
 	rpage = PageGetTempPage(oldpage);
@@ -1770,8 +1770,8 @@ createPostingTree(Relation index, ItemPointerData *items, uint32 nitems,
 	int			rootsize;
 
 	/* Construct the new root page in memory first. */
-	tmppage = (Page) palloc(BLCKSZ);
-	GinInitPage(tmppage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, BLCKSZ);
+	tmppage = (Page) palloc(rel_blck_size);
+	GinInitPage(tmppage, GIN_DATA | GIN_LEAF | GIN_COMPRESSED, rel_blck_size);
 	GinPageGetOpaque(tmppage)->rightlink = InvalidBlockNumber;
 
 	/*
diff --git a/src/backend/access/gin/ginentrypage.c b/src/backend/access/gin/ginentrypage.c
index bf7b05107b..f3a381460a 100644
--- a/src/backend/access/gin/ginentrypage.c
+++ b/src/backend/access/gin/ginentrypage.c
@@ -616,7 +616,7 @@ entrySplitPage(GinBtree btree, Buffer origbuf,
 	Page		lpage = PageGetTempPageCopy(BufferGetPage(origbuf));
 	Page		rpage = PageGetTempPageCopy(BufferGetPage(origbuf));
 	Size		pageSize = PageGetPageSize(lpage);
-	char		tupstore[2 * BLCKSZ];
+	char		tupstore[2 * rel_blck_size];
 
 	entryPreparePage(btree, lpage, off, insertData, updateblkno);
 
diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c
index 95c8bd7b43..aa752412d7 100644
--- a/src/backend/access/gin/ginfast.c
+++ b/src/backend/access/gin/ginfast.c
@@ -37,7 +37,7 @@
 int			gin_pending_list_limit = 0;
 
 #define GIN_PAGE_FREESIZE \
-	( BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - MAXALIGN(sizeof(GinPageOpaqueData)) )
+	( rel_blck_size - MAXALIGN(SizeOfPageHeaderData) - MAXALIGN(sizeof(GinPageOpaqueData)) )
 
 typedef struct KeyArray
 {
@@ -67,7 +67,7 @@ writeListPage(Relation index, Buffer buffer,
 	char	   *ptr;
 
 	/* workspace could be a local array; we use palloc for alignment */
-	workspace = palloc(BLCKSZ);
+	workspace = palloc(rel_blck_size);
 
 	START_CRIT_SECTION();
 
@@ -93,7 +93,7 @@ writeListPage(Relation index, Buffer buffer,
 		off++;
 	}
 
-	Assert(size <= BLCKSZ);		/* else we overran workspace */
+	Assert(size <= rel_blck_size);		/* else we overran workspace */
 
 	GinPageGetOpaque(page)->rightlink = rightlink;
 
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 1ecf97507d..c30c758f87 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -1516,9 +1516,9 @@ collectMatchesForHeapRow(IndexScanDesc scan, pendingPosition *pos)
 	 */
 	for (;;)
 	{
-		Datum		datum[BLCKSZ / sizeof(IndexTupleData)];
-		GinNullCategory category[BLCKSZ / sizeof(IndexTupleData)];
-		bool		datumExtracted[BLCKSZ / sizeof(IndexTupleData)];
+		Datum		datum[rel_blck_size / sizeof(IndexTupleData)];
+		GinNullCategory category[rel_blck_size / sizeof(IndexTupleData)];
+		bool		datumExtracted[rel_blck_size / sizeof(IndexTupleData)];
 
 		Assert(pos->lastOffset > pos->firstOffset);
 		memset(datumExtracted + pos->firstOffset - 1, 0,
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 394bc832a4..09789eb9b0 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -548,7 +548,7 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	BlockNumber blkno = GIN_ROOT_BLKNO;
 	GinVacuumState gvs;
 	Buffer		buffer;
-	BlockNumber rootOfPostingTree[BLCKSZ / (sizeof(IndexTupleData) + sizeof(ItemId))];
+	BlockNumber rootOfPostingTree[rel_blck_size / (sizeof(IndexTupleData) + sizeof(ItemId))];
 	uint32		nRoot;
 
 	gvs.tmpCxt = AllocSetContextCreate(CurrentMemoryContext,
diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index 1bf3f0a88a..4c4377dc99 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -156,7 +156,7 @@ ginRedoRecompress(Page page, ginxlogRecompressDataLeaf *data)
 		GinPostingList *plist;
 
 		plist = ginCompressPostingList(uncompressed, nuncompressed,
-									   BLCKSZ, &npacked);
+									   rel_blck_size, &npacked);
 		Assert(npacked == nuncompressed);
 
 		totalsize = SizeOfGinPostingList(plist);
@@ -230,7 +230,7 @@ ginRedoRecompress(Page page, ginxlogRecompressDataLeaf *data)
 			Assert(nnewitems == nolditems + nitems);
 
 			newseg = ginCompressPostingList(newitems, nnewitems,
-											BLCKSZ, &npacked);
+											rel_blck_size, &npacked);
 			Assert(npacked == nnewitems);
 
 			newsegsize = SizeOfGinPostingList(newseg);
diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c
index 2415f00e06..f4bbfd357b 100644
--- a/src/backend/access/gist/gistbuild.c
+++ b/src/backend/access/gist/gistbuild.c
@@ -147,7 +147,7 @@ gistbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 		fillfactor = GIST_DEFAULT_FILLFACTOR;
 	}
 	/* Calculate target amount of free space to leave on pages */
-	buildstate.freespace = BLCKSZ * (100 - fillfactor) / 100;
+	buildstate.freespace = rel_blck_size * (100 - fillfactor) / 100;
 
 	/*
 	 * We expect to be called exactly once for any index relation. If that's
@@ -274,7 +274,7 @@ gistInitBuffering(GISTBuildState *buildstate)
 	int			levelStep;
 
 	/* Calc space of index page which is available for index tuples */
-	pageFreeSpace = BLCKSZ - SizeOfPageHeaderData - sizeof(GISTPageOpaqueData)
+	pageFreeSpace = rel_blck_size - SizeOfPageHeaderData - sizeof(GISTPageOpaqueData)
 		- sizeof(ItemIdData)
 		- buildstate->freespace;
 
@@ -371,7 +371,7 @@ gistInitBuffering(GISTBuildState *buildstate)
 			break;
 
 		/* each node in the lowest level of a subtree has one page in memory */
-		if (maxlowestlevelpages > ((double) maintenance_work_mem * 1024) / BLCKSZ)
+		if (maxlowestlevelpages > ((double) maintenance_work_mem * 1024) / rel_blck_size)
 			break;
 
 		/* Good, we can handle this levelStep. See if we can go one higher. */
@@ -430,7 +430,7 @@ calculatePagesPerBuffer(GISTBuildState *buildstate, int levelStep)
 	Size		pageFreeSpace;
 
 	/* Calc space of index page which is available for index tuples */
-	pageFreeSpace = BLCKSZ - SizeOfPageHeaderData - sizeof(GISTPageOpaqueData)
+	pageFreeSpace = rel_blck_size - SizeOfPageHeaderData - sizeof(GISTPageOpaqueData)
 		- sizeof(ItemIdData)
 		- buildstate->freespace;
 
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index 88cee2028d..11c6b16564 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -187,11 +187,11 @@ gistAllocateNewPageBuffer(GISTBuildBuffers *gfbb)
 	GISTNodeBufferPage *pageBuffer;
 
 	pageBuffer = (GISTNodeBufferPage *) MemoryContextAlloc(gfbb->context,
-														   BLCKSZ);
+														   rel_blck_size);
 	pageBuffer->prev = InvalidBlockNumber;
 
 	/* Set page free space */
-	PAGE_FREE_SPACE(pageBuffer) = BLCKSZ - BUFFER_PAGE_DATA_OFFSET;
+	PAGE_FREE_SPACE(pageBuffer) = rel_blck_size - BUFFER_PAGE_DATA_OFFSET;
 	return pageBuffer;
 }
 
@@ -379,7 +379,7 @@ gistPushItupToNodeBuffer(GISTBuildBuffers *gfbb, GISTNodeBuffer *nodeBuffer,
 		 * the new page by storing its block number in the prev-link.
 		 */
 		PAGE_FREE_SPACE(nodeBuffer->pageBuffer) =
-			BLCKSZ - MAXALIGN(offsetof(GISTNodeBufferPage, tupledata));
+			rel_blck_size - MAXALIGN(offsetof(GISTNodeBufferPage, tupledata));
 		nodeBuffer->pageBuffer->prev = blkno;
 
 		/* We've just added one more page */
@@ -758,7 +758,7 @@ ReadTempFileBlock(BufFile *file, long blknum, void *ptr)
 {
 	if (BufFileSeekBlock(file, blknum) != 0)
 		elog(ERROR, "could not seek temporary file: %m");
-	if (BufFileRead(file, ptr, BLCKSZ) != BLCKSZ)
+	if (BufFileRead(file, ptr, rel_blck_size) != rel_blck_size)
 		elog(ERROR, "could not read temporary file: %m");
 }
 
@@ -767,7 +767,7 @@ WriteTempFileBlock(BufFile *file, long blknum, void *ptr)
 {
 	if (BufFileSeekBlock(file, blknum) != 0)
 		elog(ERROR, "could not seek temporary file: %m");
-	if (BufFileWrite(file, ptr, BLCKSZ) != BLCKSZ)
+	if (BufFileWrite(file, ptr, rel_blck_size) != rel_blck_size)
 	{
 		/*
 		 * the other errors in Read/WriteTempFileBlock shouldn't happen, but
diff --git a/src/backend/access/gist/gistscan.c b/src/backend/access/gist/gistscan.c
index 058544e2ae..a776cdeffd 100644
--- a/src/backend/access/gist/gistscan.c
+++ b/src/backend/access/gist/gistscan.c
@@ -75,6 +75,7 @@ gistbeginscan(Relation r, int nkeys, int norderbys)
 
 	/* initialize opaque data */
 	so = (GISTScanOpaque) palloc0(sizeof(GISTScanOpaqueData));
+	so->pageData = (GISTSearchHeapItem*) palloc0(SIZEOF_GIST_SEARCH_HEAP_ITEM);
 	so->giststate = giststate;
 	giststate->tempCxt = createTempGistContext();
 	so->queue = NULL;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 0fef60a858..ea7e7618e2 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -142,7 +142,7 @@ hashbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 	 * one page.  Also, "initial index size" accounting does not include the
 	 * metapage, nor the first bitmap page.
 	 */
-	sort_threshold = (maintenance_work_mem * 1024L) / BLCKSZ;
+	sort_threshold = (maintenance_work_mem * 1024L) / rel_blck_size;
 	if (index->rd_rel->relpersistence != RELPERSISTENCE_TEMP)
 		sort_threshold = Min(sort_threshold, NBuffers);
 	else
@@ -360,6 +360,8 @@ hashbeginscan(Relation rel, int nkeys, int norderbys)
 	scan = RelationGetIndexScan(rel, nkeys, norderbys);
 
 	so = (HashScanOpaque) palloc(sizeof(HashScanOpaqueData));
+	so->currPos.items = (HashScanPosItem*) palloc(SIZEOF_HASH_SCAN_POS_ITEM); 
+
 	HashScanPosInvalidate(so->currPos);
 	so->hashso_bucket_buf = InvalidBuffer;
 	so->hashso_split_bucket_buf = InvalidBuffer;
@@ -429,7 +431,10 @@ hashendscan(IndexScanDesc scan)
 
 	if (so->killedItems != NULL)
 		pfree(so->killedItems);
+
+	pfree(so->currPos.items);
 	pfree(so);
+
 	scan->opaque = NULL;
 }
 
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c
index a50e35dfcb..38faafffa4 100644
--- a/src/backend/access/hash/hashpage.c
+++ b/src/backend/access/hash/hashpage.c
@@ -999,7 +999,7 @@ static bool
 _hash_alloc_buckets(Relation rel, BlockNumber firstblock, uint32 nblocks)
 {
 	BlockNumber lastblock;
-	char		zerobuf[BLCKSZ];
+	char		zerobuf[rel_blck_size];
 	Page		page;
 	HashPageOpaque ovflopaque;
 
@@ -1019,7 +1019,7 @@ _hash_alloc_buckets(Relation rel, BlockNumber firstblock, uint32 nblocks)
 	 * _hash_freeovflpage for similar usage.  We take care to make the special
 	 * space valid for the benefit of tools such as pageinspect.
 	 */
-	_hash_pageinit(page, BLCKSZ);
+	_hash_pageinit(page, rel_blck_size);
 
 	ovflopaque = (HashPageOpaque) PageGetSpecialPointer(page);
 
diff --git a/src/backend/access/heap/README.HOT b/src/backend/access/heap/README.HOT
index 4cf3c3a0d4..2df63b5db1 100644
--- a/src/backend/access/heap/README.HOT
+++ b/src/backend/access/heap/README.HOT
@@ -233,7 +233,7 @@ large enough to accept any extra maintenance burden for.
 The currently planned heuristic is to prune and defrag when first accessing
 a page that potentially has prunable tuples (as flagged by the pd_prune_xid
 page hint field) and that either has free space less than MAX(fillfactor
-target free space, BLCKSZ/10) *or* has recently had an UPDATE fail to
+target free space, rel_blck_size/10) *or* has recently had an UPDATE fail to
 find enough free space to store an updated tuple version.  (These rules
 are subject to change.)
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 3acef279f4..dc1f5f82ab 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1464,7 +1464,7 @@ heap_beginscan_internal(Relation relation, Snapshot snapshot,
 	/*
 	 * allocate and initialize scan descriptor
 	 */
-	scan = (HeapScanDesc) palloc(sizeof(HeapScanDescData));
+	scan = (HeapScanDesc) palloc(SizeOfHeapScanDescData);
 
 	scan->rs_rd = relation;
 	scan->rs_snapshot = snapshot;
@@ -2704,7 +2704,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	 * beforehand.
 	 */
 	if (needwal)
-		scratch = palloc(BLCKSZ);
+		scratch = palloc(rel_blck_size);
 
 	/*
 	 * We're about to do the actual inserts -- but check for conflict first,
@@ -2857,7 +2857,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 				scratchptr += datalen;
 			}
 			totaldatalen = scratchptr - tupledata;
-			Assert((scratchptr - scratch) < BLCKSZ);
+			Assert((scratchptr - scratch) < rel_blck_size);
 
 			if (need_tuple_data)
 				xlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;
@@ -6035,8 +6035,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	htup = (HeapTupleHeader) PageGetItem(page, lp);
 
 	/* SpecTokenOffsetNumber should be distinguishable from any real offset */
-	StaticAssertStmt(MaxOffsetNumber < SpecTokenOffsetNumber,
-					 "invalid speculative token constant");
+	Assert(MaxOffsetNumber < SpecTokenOffsetNumber);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -8121,7 +8120,7 @@ heap_xlog_visible(XLogReaderState *record)
 
 		/* initialize the page if it was read as zeros */
 		if (PageIsNew(vmpage))
-			PageInit(vmpage, BLCKSZ, 0);
+			PageInit(vmpage, rel_blck_size, 0);
 
 		/*
 		 * XLogReadBufferForRedoExtended locked the buffer. But
@@ -8415,7 +8414,7 @@ heap_xlog_insert(XLogReaderState *record)
 	 * don't bother to update the FSM in that case, it doesn't need to be
 	 * totally accurate anyway.
 	 */
-	if (action == BLK_NEEDS_REDO && freespace < BLCKSZ / 5)
+	if (action == BLK_NEEDS_REDO && freespace < rel_blck_size / 5)
 		XLogRecordPageWithFreeSpace(target_node, blkno, freespace);
 }
 
@@ -8554,7 +8553,7 @@ heap_xlog_multi_insert(XLogReaderState *record)
 	 * don't bother to update the FSM in that case, it doesn't need to be
 	 * totally accurate anyway.
 	 */
-	if (action == BLK_NEEDS_REDO && freespace < BLCKSZ / 5)
+	if (action == BLK_NEEDS_REDO && freespace < rel_blck_size / 5)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
@@ -8829,7 +8828,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	 * don't bother to update the FSM in that case, it doesn't need to be
 	 * totally accurate anyway.
 	 */
-	if (newaction == BLK_NEEDS_REDO && !hot_update && freespace < BLCKSZ / 5)
+	if (newaction == BLK_NEEDS_REDO && !hot_update && freespace < rel_blck_size / 5)
 		XLogRecordPageWithFreeSpace(rnode, newblk, freespace);
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 9f33e0ce07..bb67b27800 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,13 +36,34 @@ typedef struct
 	int			ndead;
 	int			nunused;
 	/* arrays that accumulate indexes of items to be changed */
-	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
-	OffsetNumber nowdead[MaxHeapTuplesPerPage];
-	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber* redirected;
+	OffsetNumber* nowdead;
+	OffsetNumber* nowunused;
 	/* marked[i] is true if item i is entered in one of the above arrays */
-	bool		marked[MaxHeapTuplesPerPage + 1];
+	bool*		marked;
 } PruneState;
 
+/*
+ * Memory management of PruneState fields.
+ */
+#define SIZEOF_REDIRECTED	(sizeof(OffsetNumber) * (MaxHeapTuplesPerPage * 2))
+#define SIZEOF_NOWDEAD		(sizeof(OffsetNumber) * (MaxHeapTuplesPerPage))
+#define SIZEOF_NOWUNUSED	(sizeof(OffsetNumber) * (MaxHeapTuplesPerPage))
+#define SIZEOF_MARKED		(sizeof(bool) * (MaxHeapTuplesPerPage + 1))
+
+#define PRUNE_STATE_ALLOC(p)				\
+	(p)->redirected = palloc(SIZEOF_REDIRECTED);	\
+	(p)->nowdead = palloc(SIZEOF_NOWDEAD);		\
+	(p)->nowunused = palloc(SIZEOF_NOWUNUSED);	\
+	(p)->marked = palloc(SIZEOF_MARKED)
+
+#define PRUNE_STATE_FREE(p)				\
+	pfree((p)->redirected);				\
+	pfree((p)->nowdead);				\
+	pfree((p)->nowunused);				\
+	pfree((p)->marked)
+
+
 /* Local functions */
 static int heap_prune_chain(Relation relation, Buffer buffer,
 				 OffsetNumber rootoffnum,
@@ -132,7 +153,7 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
 	 */
 	minfree = RelationGetTargetPageFreeSpace(relation,
 											 HEAP_DEFAULT_FILLFACTOR);
-	minfree = Max(minfree, BLCKSZ / 10);
+	minfree = Max(minfree, rel_blck_size / 10);
 
 	if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)
 	{
@@ -187,6 +208,11 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 				maxoff;
 	PruneState	prstate;
 
+	/*
+	 * Allocate memory for the PruneState structure fields
+	 */
+	PRUNE_STATE_ALLOC(&prstate);
+
 	/*
 	 * Our strategy is to scan the page and make lists of items to change,
 	 * then apply the changes within a critical section.  This keeps as much
@@ -201,7 +227,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
 	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
-	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.marked, 0, SIZEOF_MARKED);
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -319,6 +345,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 * One possibility is to leave "fillfactor" worth of space in this page
 	 * and update FSM with the remaining space.
 	 */
+	PRUNE_STATE_FREE(&prstate);
 
 	return ndeleted;
 }
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index f93c194e18..97c2f1752f 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -268,7 +268,7 @@ begin_heap_rewrite(Relation old_heap, Relation new_heap, TransactionId oldest_xm
 
 	state->rs_old_rel = old_heap;
 	state->rs_new_rel = new_heap;
-	state->rs_buffer = (Page) palloc(BLCKSZ);
+	state->rs_buffer = (Page) palloc(rel_blck_size);
 	/* new_heap needn't be empty, just locked */
 	state->rs_blockno = RelationGetNumberOfBlocks(new_heap);
 	state->rs_buffer_valid = false;
@@ -708,7 +708,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 	if (!state->rs_buffer_valid)
 	{
 		/* Initialize a new empty page */
-		PageInit(page, BLCKSZ, 0);
+		PageInit(page, rel_blck_size, 0);
 		state->rs_buffer_valid = true;
 	}
 
diff --git a/src/backend/access/heap/syncscan.c b/src/backend/access/heap/syncscan.c
index 20640cbbaf..a1a45dcb50 100644
--- a/src/backend/access/heap/syncscan.c
+++ b/src/backend/access/heap/syncscan.c
@@ -80,7 +80,7 @@ bool		trace_syncscan = false;
  * the buffer cache anyway, and on the other hand the page is most likely
  * still in the OS cache.
  */
-#define SYNC_SCAN_REPORT_INTERVAL (128 * 1024 / BLCKSZ)
+#define SYNC_SCAN_REPORT_INTERVAL (128 * 1024 / rel_blck_size)
 
 
 /*
diff --git a/src/backend/access/heap/visibilitymap.c b/src/backend/access/heap/visibilitymap.c
index 4c2a13aeba..83c3482293 100644
--- a/src/backend/access/heap/visibilitymap.c
+++ b/src/backend/access/heap/visibilitymap.c
@@ -102,7 +102,7 @@
  * extra headers, so the whole page minus the standard page header is
  * used for the bitmap.
  */
-#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))
+#define MAPSIZE (rel_blck_size - MAXALIGN(SizeOfPageHeaderData))
 
 /* Number of heap blocks we can represent in one byte */
 #define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK)
@@ -614,7 +614,7 @@ vm_readbuf(Relation rel, BlockNumber blkno, bool extend)
 	buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,
 							 RBM_ZERO_ON_ERROR, NULL);
 	if (PageIsNew(BufferGetPage(buf)))
-		PageInit(BufferGetPage(buf), BLCKSZ, 0);
+		PageInit(BufferGetPage(buf), rel_blck_size, 0);
 	return buf;
 }
 
@@ -628,8 +628,8 @@ vm_extend(Relation rel, BlockNumber vm_nblocks)
 	BlockNumber vm_nblocks_now;
 	Page		pg;
 
-	pg = (Page) palloc(BLCKSZ);
-	PageInit(pg, BLCKSZ, 0);
+	pg = (Page) palloc(rel_blck_size);
+	PageInit(pg, rel_blck_size, 0);
 
 	/*
 	 * We use the relation extension lock to lock out other backends trying to
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index c77434904e..bb5a63caa8 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -51,7 +51,7 @@ _bt_initmetapage(Page page, BlockNumber rootbknum, uint32 level)
 	BTMetaPageData *metad;
 	BTPageOpaque metaopaque;
 
-	_bt_pageinit(page, BLCKSZ);
+	_bt_pageinit(page, rel_blck_size);
 
 	metad = BTPageGetMeta(page);
 	metad->btm_magic = BTREE_MAGIC;
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 399e6a1ae5..0aa5446d2c 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -284,7 +284,7 @@ btbuildempty(Relation index)
 	Page		metapage;
 
 	/* Construct metapage. */
-	metapage = (Page) palloc(BLCKSZ);
+	metapage = (Page) palloc(rel_blck_size);
 	_bt_initmetapage(metapage, P_NONE, 0);
 
 	/*
@@ -483,6 +483,9 @@ btbeginscan(Relation rel, int nkeys, int norderbys)
 
 	/* allocate private workspace */
 	so = (BTScanOpaque) palloc(sizeof(BTScanOpaqueData));
+	so->currPos.items = (BTScanPosItem*) palloc(SIZEOF_BT_SCAN_POST_ITEM);
+	so->markPos.items = (BTScanPosItem*) palloc(SIZEOF_BT_SCAN_POST_ITEM);
+
 	BTScanPosInvalidate(so->currPos);
 	BTScanPosInvalidate(so->markPos);
 	if (scan->numberOfKeys > 0)
@@ -554,8 +557,8 @@ btrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	 */
 	if (scan->xs_want_itup && so->currTuples == NULL)
 	{
-		so->currTuples = (char *) palloc(BLCKSZ * 2);
-		so->markTuples = so->currTuples + BLCKSZ;
+		so->currTuples = (char *) palloc(rel_blck_size * 2);
+		so->markTuples = so->currTuples + rel_blck_size;
 	}
 
 	/*
@@ -605,6 +608,8 @@ btendscan(IndexScanDesc scan)
 	if (so->currTuples != NULL)
 		pfree(so->currTuples);
 	/* so->markTuples should not be pfree'd, see btrescan */
+	pfree(so->currPos.items);
+	pfree(so->markPos.items);
 	pfree(so);
 }
 
@@ -682,9 +687,10 @@ btrestrpos(IndexScanDesc scan)
 			/* bump pin on mark buffer for assignment to current buffer */
 			if (BTScanPosIsPinned(so->markPos))
 				IncrBufferRefCount(so->markPos.buf);
-			memcpy(&so->currPos, &so->markPos,
-				   offsetof(BTScanPosData, items[1]) +
-				   so->markPos.lastItem * sizeof(BTScanPosItem));
+
+			memcpy(&so->currPos, &so->markPos, offsetof(BTScanPosData, items));
+                	memcpy((&so->currPos)->items, (&so->markPos)->items, (so->markPos.lastItem + 1) * sizeof(BTScanPosItem));
+
 			if (so->currTuples)
 				memcpy(so->currTuples, so->markTuples,
 					   so->markPos.nextTupleOffset);
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 558113bd13..b977a798ec 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1360,9 +1360,8 @@ _bt_steppage(IndexScanDesc scan, ScanDirection dir)
 		/* bump pin on current buffer for assignment to mark buffer */
 		if (BTScanPosIsPinned(so->currPos))
 			IncrBufferRefCount(so->currPos.buf);
-		memcpy(&so->markPos, &so->currPos,
-			   offsetof(BTScanPosData, items[1]) +
-			   so->currPos.lastItem * sizeof(BTScanPosItem));
+		memcpy(&so->markPos, &so->currPos, offsetof(BTScanPosData, items));
+		memcpy((&so->markPos)->items, (&so->currPos)->items, (so->currPos.lastItem + 1) * sizeof(BTScanPosItem));
 		if (so->markTuples)
 			memcpy(so->markTuples, so->currTuples,
 				   so->currPos.nextTupleOffset);
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index bf6c03c7b2..4fa1c7c498 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -246,10 +246,10 @@ _bt_blnewpage(uint32 level)
 	Page		page;
 	BTPageOpaque opaque;
 
-	page = (Page) palloc(BLCKSZ);
+	page = (Page) palloc(rel_blck_size);
 
 	/* Zero the page and set up standard page header info */
-	_bt_pageinit(page, BLCKSZ);
+	_bt_pageinit(page, rel_blck_size);
 
 	/* Initialize BT opaque state */
 	opaque = (BTPageOpaque) PageGetSpecialPointer(page);
@@ -290,7 +290,7 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)
 	while (blkno > wstate->btws_pages_written)
 	{
 		if (!wstate->btws_zeropage)
-			wstate->btws_zeropage = (Page) palloc0(BLCKSZ);
+			wstate->btws_zeropage = (Page) palloc0(rel_blck_size);
 		/* don't set checksum for all-zero page */
 		smgrextend(wstate->index->rd_smgr, MAIN_FORKNUM,
 				   wstate->btws_pages_written++,
@@ -342,7 +342,7 @@ _bt_pagestate(BTWriteState *wstate, uint32 level)
 	state->btps_level = level;
 	/* set "full" threshold based on level.  See notes at head of file. */
 	if (level > 0)
-		state->btps_full = (BLCKSZ * (100 - BTREE_NONLEAF_FILLFACTOR) / 100);
+		state->btps_full = (rel_blck_size * (100 - BTREE_NONLEAF_FILLFACTOR) / 100);
 	else
 		state->btps_full = RelationGetTargetPageFreeSpace(wstate->index,
 														  BTREE_DEFAULT_FILLFACTOR);
@@ -664,7 +664,7 @@ _bt_uppershutdown(BTWriteState *wstate, BTPageState *state)
 	 * set to point to "P_NONE").  This changes the index to the "valid" state
 	 * by filling in a valid magic number in the metapage.
 	 */
-	metapage = (Page) palloc(BLCKSZ);
+	metapage = (Page) palloc(rel_blck_size);
 	_bt_initmetapage(metapage, rootblkno, rootlevel);
 	_bt_blwritepage(wstate, metapage, BTREE_METAPAGE);
 }
diff --git a/src/backend/access/spgist/spgdoinsert.c b/src/backend/access/spgist/spgdoinsert.c
index a5f4c4059c..6b40c61b70 100644
--- a/src/backend/access/spgist/spgdoinsert.c
+++ b/src/backend/access/spgist/spgdoinsert.c
@@ -340,8 +340,8 @@ checkSplitConditions(Relation index, SpGistState *state,
 	if (SpGistBlockIsRoot(current->blkno))
 	{
 		/* return impossible values to force split */
-		*nToSplit = BLCKSZ;
-		return BLCKSZ;
+		*nToSplit = rel_blck_size;
+		return rel_blck_size;
 	}
 
 	i = current->offnum;
diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c
index 80b82e1602..01506909e8 100644
--- a/src/backend/access/spgist/spginsert.c
+++ b/src/backend/access/spgist/spginsert.c
@@ -159,7 +159,7 @@ spgbuildempty(Relation index)
 	Page		page;
 
 	/* Construct metapage. */
-	page = (Page) palloc(BLCKSZ);
+	page = (Page) palloc(rel_blck_size);
 	SpGistInitMetapage(page);
 
 	/*
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 7965b5846d..9a151e66a3 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -186,6 +186,7 @@ spgbeginscan(Relation rel, int keysz, int orderbysz)
 	scan = RelationGetIndexScan(rel, keysz, 0);
 
 	so = (SpGistScanOpaque) palloc0(sizeof(SpGistScanOpaqueData));
+	SP_GIST_SCAN_ALLOC(so); 
 	if (keysz > 0)
 		so->keyData = (ScanKey) palloc(sizeof(ScanKeyData) * keysz);
 	else
diff --git a/src/backend/access/spgist/spgtextproc.c b/src/backend/access/spgist/spgtextproc.c
index 53f298b6c2..17e7e4b950 100644
--- a/src/backend/access/spgist/spgtextproc.c
+++ b/src/backend/access/spgist/spgtextproc.c
@@ -52,20 +52,20 @@
  * In the worst case, an inner tuple in a text radix tree could have as many
  * as 258 nodes (one for each possible byte value, plus the two special
  * cases).  Each node can take 16 bytes on MAXALIGN=8 machines.  The inner
- * tuple must fit on an index page of size BLCKSZ.  Rather than assuming we
+ * tuple must fit on an index page of size rel_blck_size.  Rather than assuming we
  * know the exact amount of overhead imposed by page headers, tuple headers,
  * etc, we leave 100 bytes for that (the actual overhead should be no more
  * than 56 bytes at this writing, so there is slop in this number).
- * So we can safely create prefixes up to BLCKSZ - 258 * 16 - 100 bytes long.
+ * So we can safely create prefixes up to rel_blck_size - 258 * 16 - 100 bytes long.
  * Unfortunately, because 258 * 16 is over 4K, there is no safe prefix length
- * when BLCKSZ is less than 8K; it is always possible to get "SPGiST inner
+ * when rel_blck_size is less than 8K; it is always possible to get "SPGiST inner
  * tuple size exceeds maximum" if there are too many distinct next-byte values
  * at a given place in the tree.  Since use of nonstandard block sizes appears
  * to be negligible in the field, we just live with that fact for now,
- * choosing a max prefix size of 32 bytes when BLCKSZ is configured smaller
+ * choosing a max prefix size of 32 bytes when rel_blck_size is configured smaller
  * than default.
  */
-#define SPGIST_MAX_PREFIX_LENGTH	Max((int) (BLCKSZ - 258 * 16 - 100), 32)
+#define SPGIST_MAX_PREFIX_LENGTH	Max((int) (rel_blck_size - 258 * 16 - 100), 32)
 
 /* Struct for sorting values in picksplit */
 typedef struct spgNodePtr
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index bd5301f383..6a4032ba49 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -512,7 +512,7 @@ SpGistInitPage(Page page, uint16 f)
 {
 	SpGistPageOpaque opaque;
 
-	PageInit(page, BLCKSZ, MAXALIGN(sizeof(SpGistPageOpaqueData)));
+	PageInit(page, rel_blck_size, MAXALIGN(sizeof(SpGistPageOpaqueData)));
 	opaque = SpGistPageGetOpaque(page);
 	memset(opaque, 0, sizeof(SpGistPageOpaqueData));
 	opaque->flags = f;
@@ -525,7 +525,7 @@ SpGistInitPage(Page page, uint16 f)
 void
 SpGistInitBuffer(Buffer b, uint16 f)
 {
-	Assert(BufferGetPageSize(b) == BLCKSZ);
+	Assert(BufferGetPageSize(b) == rel_blck_size);
 	SpGistInitPage(BufferGetPage(b), f);
 }
 
diff --git a/src/backend/access/transam/README b/src/backend/access/transam/README
index ad4083eb6b..a05589e7b2 100644
--- a/src/backend/access/transam/README
+++ b/src/backend/access/transam/README
@@ -794,7 +794,7 @@ we won't be able to hint its outputs until the second xact is sync'd, up to
 three walwriter cycles later.  This argues for keeping N (the group size)
 as small as possible.  For the moment we are setting the group size to 32,
 which makes the LSN cache space the same size as the actual clog buffer
-space (independently of BLCKSZ).
+space (independently of rel_blck_size).
 
 It is useful that we can run both synchronous and asynchronous commit
 transactions concurrently, but the safety of this is perhaps not
diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c
index bbf9ce1a3a..f3a20a708d 100644
--- a/src/backend/access/transam/clog.c
+++ b/src/backend/access/transam/clog.c
@@ -44,7 +44,7 @@
 #include "storage/proc.h"
 
 /*
- * Defines for CLOG page sizes.  A page is the same BLCKSZ as is used
+ * Defines for CLOG page sizes.  A page is the same rel_blck_size as is used
  * everywhere else in Postgres.
  *
  * Note: because TransactionIds are 32 bits and wrap around at 0xFFFFFFFF,
@@ -57,12 +57,12 @@
 
 /* We need two bits per xact, so four xacts fit in a byte */
 #define CLOG_BITS_PER_XACT	2
-#define CLOG_XACTS_PER_BYTE 4
-#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)
+#define CLOG_XACTS_PER_BYTE	4
+#define CLOG_XACTS_PER_PAGE	(rel_blck_size * CLOG_XACTS_PER_BYTE)
 #define CLOG_XACT_BITMASK	((1 << CLOG_BITS_PER_XACT) - 1)
 
 #define TransactionIdToPage(xid)	((xid) / (TransactionId) CLOG_XACTS_PER_PAGE)
-#define TransactionIdToPgIndex(xid) ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)
+#define TransactionIdToPgIndex(xid)	((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)
 #define TransactionIdToByte(xid)	(TransactionIdToPgIndex(xid) / CLOG_XACTS_PER_BYTE)
 #define TransactionIdToBIndex(xid)	((xid) % (TransactionId) CLOG_XACTS_PER_BYTE)
 
@@ -808,7 +808,7 @@ TrimCLOG(void)
 		/* Zero so-far-unused positions in the current byte */
 		*byteptr &= (1 << bshift) - 1;
 		/* Zero the rest of the page */
-		MemSet(byteptr + 1, 0, BLCKSZ - byteno - 1);
+		MemSet(byteptr + 1, 0, rel_blck_size - byteno - 1);
 
 		ClogCtl->shared->page_dirty[slotno] = true;
 	}
diff --git a/src/backend/access/transam/commit_ts.c b/src/backend/access/transam/commit_ts.c
index 7b7bf2b2bf..620b08af26 100644
--- a/src/backend/access/transam/commit_ts.c
+++ b/src/backend/access/transam/commit_ts.c
@@ -38,7 +38,7 @@
 #include "utils/timestamp.h"
 
 /*
- * Defines for CommitTs page sizes.  A page is the same BLCKSZ as is used
+ * Defines for CommitTs page sizes.  A page is the same rel_blck_size as is used
  * everywhere else in Postgres.
  *
  * Note: because TransactionIds are 32 bits and wrap around at 0xFFFFFFFF,
@@ -64,7 +64,7 @@ typedef struct CommitTimestampEntry
 									sizeof(RepOriginId))
 
 #define COMMIT_TS_XACTS_PER_PAGE \
-	(BLCKSZ / SizeOfCommitTimestampEntry)
+	(rel_blck_size / SizeOfCommitTimestampEntry)
 
 #define TransactionIdToCTsPage(xid) \
 	((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)
diff --git a/src/backend/access/transam/generic_xlog.c b/src/backend/access/transam/generic_xlog.c
index 3adbf7b949..5f154a0866 100644
--- a/src/backend/access/transam/generic_xlog.c
+++ b/src/backend/access/transam/generic_xlog.c
@@ -45,7 +45,7 @@
  */
 #define FRAGMENT_HEADER_SIZE	(2 * sizeof(OffsetNumber))
 #define MATCH_THRESHOLD			FRAGMENT_HEADER_SIZE
-#define MAX_DELTA_SIZE			(BLCKSZ + 2 * FRAGMENT_HEADER_SIZE)
+#define MAX_DELTA_SIZE			(rel_blck_size + 2 * FRAGMENT_HEADER_SIZE)
 
 /* Struct of generic xlog data for single page */
 typedef struct
@@ -55,9 +55,12 @@ typedef struct
 	int			deltaLen;		/* space consumed in delta field */
 	char	   *image;			/* copy of page image for modification, do not
 								 * do it in-place to have aligned memory chunk */
-	char		delta[MAX_DELTA_SIZE];	/* delta between page images */
+	char*		delta;			/* delta between page images */
 } PageData;
 
+#define SIZEOF_DELTA	(sizeof(char) * MAX_DELTA_SIZE)
+
+
 /* State of generic xlog record construction */
 struct GenericXLogState
 {
@@ -66,11 +69,13 @@ struct GenericXLogState
 	 * images addresses, because some code working with pages directly aligns
 	 * addresses, not offsets from beginning of page
 	 */
-	char		images[MAX_GENERIC_XLOG_PAGES * BLCKSZ];
+	char*		images;
 	PageData	pages[MAX_GENERIC_XLOG_PAGES];
 	bool		isLogged;
 };
 
+#define SIZEOF_IMAGES	(sizeof(char) * MAX_GENERIC_XLOG_PAGES * rel_blck_size)
+
 static void writeFragment(PageData *pageData, OffsetNumber offset,
 			  OffsetNumber len, const char *data);
 static void computeRegionDelta(PageData *pageData,
@@ -241,8 +246,8 @@ computeDelta(PageData *pageData, Page curpage, Page targetpage)
 					   0, curLower);
 	/* ... and for upper part, ignoring what's between */
 	computeRegionDelta(pageData, curpage, targetpage,
-					   targetUpper, BLCKSZ,
-					   curUpper, BLCKSZ);
+					   targetUpper, rel_blck_size,
+					   curUpper, rel_blck_size);
 
 	/*
 	 * If xlog debug is enabled, then check produced delta.  Result of delta
@@ -251,13 +256,13 @@ computeDelta(PageData *pageData, Page curpage, Page targetpage)
 #ifdef WAL_DEBUG
 	if (XLOG_DEBUG)
 	{
-		char		tmp[BLCKSZ];
+		char		tmp[rel_blck_size];
 
-		memcpy(tmp, curpage, BLCKSZ);
+		memcpy(tmp, curpage, rel_blck_size);
 		applyPageRedo(tmp, pageData->delta, pageData->deltaLen);
 		if (memcmp(tmp, targetpage, targetLower) != 0 ||
 			memcmp(tmp + targetUpper, targetpage + targetUpper,
-				   BLCKSZ - targetUpper) != 0)
+				   rel_blck_size - targetUpper) != 0)
 			elog(ERROR, "result of generic xlog apply does not match");
 	}
 #endif
@@ -273,11 +278,16 @@ GenericXLogStart(Relation relation)
 	int			i;
 
 	state = (GenericXLogState *) palloc(sizeof(GenericXLogState));
+	state->images = (char*) palloc(SIZEOF_IMAGES);
+
+	for (i = 0; i < MAX_GENERIC_XLOG_PAGES; i++) 
+		state->pages[i].delta = (char*) palloc(SIZEOF_DELTA);
+
 	state->isLogged = RelationNeedsWAL(relation);
 
 	for (i = 0; i < MAX_GENERIC_XLOG_PAGES; i++)
 	{
-		state->pages[i].image = state->images + BLCKSZ * i;
+		state->pages[i].image = state->images + rel_blck_size * i;
 		state->pages[i].buffer = InvalidBuffer;
 	}
 
@@ -309,7 +319,7 @@ GenericXLogRegisterBuffer(GenericXLogState *state, Buffer buffer, int flags)
 			/* Empty slot, so use it (there cannot be a match later) */
 			page->buffer = buffer;
 			page->flags = flags;
-			memcpy(page->image, BufferGetPage(buffer), BLCKSZ);
+			memcpy(page->image, BufferGetPage(buffer), rel_blck_size);
 			return (Page) page->image;
 		}
 		else if (page->buffer == buffer)
@@ -371,7 +381,7 @@ GenericXLogFinish(GenericXLogState *state)
 					   pageHeader->pd_upper - pageHeader->pd_lower);
 				memcpy(page + pageHeader->pd_upper,
 					   pageData->image + pageHeader->pd_upper,
-					   BLCKSZ - pageHeader->pd_upper);
+					   rel_blck_size - pageHeader->pd_upper);
 
 				XLogRegisterBuffer(i, pageData->buffer,
 								   REGBUF_FORCE_IMAGE | REGBUF_STANDARD);
@@ -390,7 +400,7 @@ GenericXLogFinish(GenericXLogState *state)
 					   pageHeader->pd_upper - pageHeader->pd_lower);
 				memcpy(page + pageHeader->pd_upper,
 					   pageData->image + pageHeader->pd_upper,
-					   BLCKSZ - pageHeader->pd_upper);
+					   rel_blck_size - pageHeader->pd_upper);
 
 				XLogRegisterBuffer(i, pageData->buffer, REGBUF_STANDARD);
 				XLogRegisterBufData(i, pageData->delta, pageData->deltaLen);
@@ -424,7 +434,7 @@ GenericXLogFinish(GenericXLogState *state)
 				continue;
 			memcpy(BufferGetPage(pageData->buffer),
 				   pageData->image,
-				   BLCKSZ);
+				   rel_blck_size);
 			/* We don't worry about zeroing the "hole" in this case */
 			MarkBufferDirty(pageData->buffer);
 		}
@@ -433,7 +443,7 @@ GenericXLogFinish(GenericXLogState *state)
 		lsn = InvalidXLogRecPtr;
 	}
 
-	pfree(state);
+	GenericXLogAbort(state);
 
 	return lsn;
 }
@@ -446,6 +456,12 @@ GenericXLogFinish(GenericXLogState *state)
 void
 GenericXLogAbort(GenericXLogState *state)
 {
+	int i;
+
+	for (i = 0; i < MAX_GENERIC_XLOG_PAGES; i++)
+		pfree(state->pages[i].delta);
+
+	pfree(state->images);
 	pfree(state);
 }
 
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index 0fb6bf2f02..c0cc7b5fd4 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -93,7 +93,7 @@
 
 
 /*
- * Defines for MultiXactOffset page sizes.  A page is the same BLCKSZ as is
+ * Defines for MultiXactOffset page sizes.  A page is the same rel_blck_size as is
  * used everywhere else in Postgres.
  *
  * Note: because MultiXactOffsets are 32 bits and wrap around at 0xFFFFFFFF,
@@ -106,7 +106,7 @@
  */
 
 /* We need four bytes per offset */
-#define MULTIXACT_OFFSETS_PER_PAGE (BLCKSZ / sizeof(MultiXactOffset))
+#define MULTIXACT_OFFSETS_PER_PAGE (rel_blck_size / sizeof(MultiXactOffset))
 
 #define MultiXactIdToOffsetPage(xid) \
 	((xid) / (MultiXactOffset) MULTIXACT_OFFSETS_PER_PAGE)
@@ -119,7 +119,7 @@
  * additional flag bits for each TransactionId.  To do this without getting
  * into alignment issues, we store four bytes of flags, and then the
  * corresponding 4 Xids.  Each such 5-word (20-byte) set we call a "group", and
- * are stored as a whole in pages.  Thus, with 8kB BLCKSZ, we keep 409 groups
+ * are stored as a whole in pages.  Thus, with 8kB rel_blck_size, we keep 409 groups
  * per page.  This wastes 12 bytes per page, but that's OK -- simplicity (and
  * performance) trumps space efficiency here.
  *
@@ -138,7 +138,7 @@
 /* size in bytes of a complete group */
 #define MULTIXACT_MEMBERGROUP_SIZE \
 	(sizeof(TransactionId) * MULTIXACT_MEMBERS_PER_MEMBERGROUP + MULTIXACT_FLAGBYTES_PER_GROUP)
-#define MULTIXACT_MEMBERGROUPS_PER_PAGE (BLCKSZ / MULTIXACT_MEMBERGROUP_SIZE)
+#define MULTIXACT_MEMBERGROUPS_PER_PAGE (rel_blck_size / MULTIXACT_MEMBERGROUP_SIZE)
 #define MULTIXACT_MEMBERS_PER_PAGE	\
 	(MULTIXACT_MEMBERGROUPS_PER_PAGE * MULTIXACT_MEMBERS_PER_MEMBERGROUP)
 
@@ -2044,7 +2044,7 @@ TrimMultiXact(void)
 		offptr = (MultiXactOffset *) MultiXactOffsetCtl->shared->page_buffer[slotno];
 		offptr += entryno;
 
-		MemSet(offptr, 0, BLCKSZ - (entryno * sizeof(MultiXactOffset)));
+		MemSet(offptr, 0, rel_blck_size - (entryno * sizeof(MultiXactOffset)));
 
 		MultiXactOffsetCtl->shared->page_dirty[slotno] = true;
 	}
@@ -2076,7 +2076,7 @@ TrimMultiXact(void)
 		xidptr = (TransactionId *)
 			(MultiXactMemberCtl->shared->page_buffer[slotno] + memberoff);
 
-		MemSet(xidptr, 0, BLCKSZ - memberoff);
+		MemSet(xidptr, 0, rel_blck_size - memberoff);
 
 		/*
 		 * Note: we don't need to zero out the flag bits in the remaining
diff --git a/src/backend/access/transam/slru.c b/src/backend/access/transam/slru.c
index 94b6e6612a..6b44918a9f 100644
--- a/src/backend/access/transam/slru.c
+++ b/src/backend/access/transam/slru.c
@@ -158,7 +158,7 @@ SimpleLruShmemSize(int nslots, int nlsns)
 	if (nlsns > 0)
 		sz += MAXALIGN(nslots * nlsns * sizeof(XLogRecPtr));	/* group_lsn[] */
 
-	return BUFFERALIGN(sz) + BLCKSZ * nslots;
+	return BUFFERALIGN(sz) + rel_blck_size * nslots;
 }
 
 void
@@ -229,7 +229,7 @@ SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,
 			shared->page_status[slotno] = SLRU_PAGE_EMPTY;
 			shared->page_dirty[slotno] = false;
 			shared->page_lru_count[slotno] = 0;
-			ptr += BLCKSZ;
+			ptr += rel_blck_size;
 		}
 
 		/* Should fit to estimated shmem size */
@@ -279,7 +279,7 @@ SimpleLruZeroPage(SlruCtl ctl, int pageno)
 	SlruRecentlyUsed(shared, slotno);
 
 	/* Set the buffer to zeroes */
-	MemSet(shared->page_buffer[slotno], 0, BLCKSZ);
+	MemSet(shared->page_buffer[slotno], 0, rel_blck_size);
 
 	/* Set the LSNs for this new page to zero */
 	SimpleLruZeroLSNs(ctl, slotno);
@@ -591,7 +591,7 @@ SimpleLruDoesPhysicalPageExist(SlruCtl ctl, int pageno)
 {
 	int			segno = pageno / SLRU_PAGES_PER_SEGMENT;
 	int			rpageno = pageno % SLRU_PAGES_PER_SEGMENT;
-	int			offset = rpageno * BLCKSZ;
+	int			offset = rpageno * rel_blck_size;
 	char		path[MAXPGPATH];
 	int			fd;
 	bool		result;
@@ -619,7 +619,7 @@ SimpleLruDoesPhysicalPageExist(SlruCtl ctl, int pageno)
 		SlruReportIOError(ctl, pageno, 0);
 	}
 
-	result = endpos >= (off_t) (offset + BLCKSZ);
+	result = endpos >= (off_t) (offset + rel_blck_size);
 
 	CloseTransientFile(fd);
 	return result;
@@ -641,7 +641,7 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno)
 	SlruShared	shared = ctl->shared;
 	int			segno = pageno / SLRU_PAGES_PER_SEGMENT;
 	int			rpageno = pageno % SLRU_PAGES_PER_SEGMENT;
-	int			offset = rpageno * BLCKSZ;
+	int			offset = rpageno * rel_blck_size;
 	char		path[MAXPGPATH];
 	int			fd;
 
@@ -667,7 +667,7 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno)
 		ereport(LOG,
 				(errmsg("file \"%s\" doesn't exist, reading as zeroes",
 						path)));
-		MemSet(shared->page_buffer[slotno], 0, BLCKSZ);
+		MemSet(shared->page_buffer[slotno], 0, rel_blck_size);
 		return true;
 	}
 
@@ -681,7 +681,7 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno)
 
 	errno = 0;
 	pgstat_report_wait_start(WAIT_EVENT_SLRU_READ);
-	if (read(fd, shared->page_buffer[slotno], BLCKSZ) != BLCKSZ)
+	if (read(fd, shared->page_buffer[slotno], rel_blck_size) != rel_blck_size)
 	{
 		pgstat_report_wait_end();
 		slru_errcause = SLRU_READ_FAILED;
@@ -721,7 +721,7 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata)
 	SlruShared	shared = ctl->shared;
 	int			segno = pageno / SLRU_PAGES_PER_SEGMENT;
 	int			rpageno = pageno % SLRU_PAGES_PER_SEGMENT;
-	int			offset = rpageno * BLCKSZ;
+	int			offset = rpageno * rel_blck_size;
 	char		path[MAXPGPATH];
 	int			fd = -1;
 
@@ -842,7 +842,7 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata)
 
 	errno = 0;
 	pgstat_report_wait_start(WAIT_EVENT_SLRU_WRITE);
-	if (write(fd, shared->page_buffer[slotno], BLCKSZ) != BLCKSZ)
+	if (write(fd, shared->page_buffer[slotno], rel_blck_size) != rel_blck_size)
 	{
 		pgstat_report_wait_end();
 		/* if write didn't set errno, assume problem is no disk space */
@@ -893,7 +893,7 @@ SlruReportIOError(SlruCtl ctl, int pageno, TransactionId xid)
 {
 	int			segno = pageno / SLRU_PAGES_PER_SEGMENT;
 	int			rpageno = pageno % SLRU_PAGES_PER_SEGMENT;
-	int			offset = rpageno * BLCKSZ;
+	int			offset = rpageno * rel_blck_size;
 	char		path[MAXPGPATH];
 
 	SlruFileName(ctl, path, segno);
diff --git a/src/backend/access/transam/subtrans.c b/src/backend/access/transam/subtrans.c
index f640661130..005fa76d5e 100644
--- a/src/backend/access/transam/subtrans.c
+++ b/src/backend/access/transam/subtrans.c
@@ -33,10 +33,11 @@
 #include "access/transam.h"
 #include "pg_trace.h"
 #include "utils/snapmgr.h"
+#include "storage/md.h"
 
 
 /*
- * Defines for SubTrans page sizes.  A page is the same BLCKSZ as is used
+ * Defines for SubTrans page sizes.  A page is the same rel_blck_size as is used
  * everywhere else in Postgres.
  *
  * Note: because TransactionIds are 32 bits and wrap around at 0xFFFFFFFF,
@@ -49,7 +50,7 @@
  */
 
 /* We need four bytes per xact */
-#define SUBTRANS_XACTS_PER_PAGE (BLCKSZ / sizeof(TransactionId))
+#define SUBTRANS_XACTS_PER_PAGE (rel_blck_size / sizeof(TransactionId))
 
 #define TransactionIdToPage(xid) ((xid) / (TransactionId) SUBTRANS_XACTS_PER_PAGE)
 #define TransactionIdToEntry(xid) ((xid) % (TransactionId) SUBTRANS_XACTS_PER_PAGE)
diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c
index 3d65e5624a..3312100f15 100644
--- a/src/backend/access/transam/timeline.c
+++ b/src/backend/access/transam/timeline.c
@@ -292,7 +292,7 @@ writeTimeLineHistory(TimeLineID newTLI, TimeLineID parentTLI,
 	char		path[MAXPGPATH];
 	char		tmppath[MAXPGPATH];
 	char		histfname[MAXFNAMELEN];
-	char		buffer[BLCKSZ];
+	char		buffer[rel_blck_size];
 	int			srcfd;
 	int			fd;
 	int			nbytes;
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index b715152e8d..5750554ab2 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -1299,7 +1299,7 @@ XlogReadTwoPhaseData(XLogRecPtr lsn, char **buf, int *len)
 	XLogReaderState *xlogreader;
 	char	   *errormsg;
 
-	xlogreader = XLogReaderAllocate(wal_segment_size, &read_local_xlog_page,
+	xlogreader = XLogReaderAllocate(wal_file_size, &read_local_xlog_page,
 									NULL);
 	if (!xlogreader)
 		ereport(ERROR,
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index e729180f82..9f1244224f 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -65,6 +65,7 @@
 #include "storage/reinit.h"
 #include "storage/smgr.h"
 #include "storage/spin.h"
+#include "storage/md.h"
 #include "utils/backend_random.h"
 #include "utils/builtins.h"
 #include "utils/guc.h"
@@ -76,6 +77,13 @@
 #include "utils/timestamp.h"
 #include "pg_trace.h"
 
+#define DEBUG_XLOG		0
+
+#define debug_xlog(format, ...)      \
+       if (DEBUG_XLOG)               \
+               fprintf(stderr, "xlog --> " format, ##__VA_ARGS__);
+
+
 extern uint32 bootstrap_data_checksum_version;
 
 /* File path names (all relative to $PGDATA) */
@@ -87,7 +95,7 @@ extern uint32 bootstrap_data_checksum_version;
 
 /* User-settable parameters */
 int			max_wal_size_mb = 1024; /* 1 GB */
-int			min_wal_size_mb = 80;	/* 80 MB */
+int			min_wal_size_mb = 80;   /* 80 MB */
 int			wal_keep_segments = 0;
 int			XLOGbuffers = -1;
 int			XLogArchiveTimeout = 0;
@@ -110,7 +118,12 @@ int			wal_retrieve_retry_interval = 5000;
 bool		XLOG_DEBUG = false;
 #endif
 
-int			wal_segment_size = DEFAULT_XLOG_SEG_SIZE;
+/*
+ * This kept temporarily because wal_file_size is an unsigned int which cannot be
+ * reflected as a GUC parameter.
+ */
+int wal_segment_size;
+
 
 /*
  * Number of WAL insertion locks to use. A higher value allows more insertions
@@ -251,7 +264,7 @@ bool		InArchiveRecovery = false;
 /* Was the last xlog file restored from archive, or local? */
 static bool restoredFromArchive = false;
 
-/* Buffers dedicated to consistency checks of size BLCKSZ */
+/* Buffers dedicated to consistency checks of size rel_blck_size */
 static char *replay_image_masked = NULL;
 static char *master_image_masked = NULL;
 
@@ -618,7 +631,7 @@ typedef struct XLogCtlData
 	 * WALBufMappingLock.
 	 */
 	char	   *pages;			/* buffers for unwritten XLOG pages */
-	XLogRecPtr *xlblocks;		/* 1st byte ptr-s + XLOG_BLCKSZ */
+	XLogRecPtr *xlblocks;		/* 1st byte ptr-s + wal_blck_size */
 	int			XLogCacheBlck;	/* highest allocated xlog buffer index */
 
 	/*
@@ -719,7 +732,7 @@ static ControlFileData *ControlFile = NULL;
  * multiple evaluation!
  */
 #define INSERT_FREESPACE(endptr)	\
-	(((endptr) % XLOG_BLCKSZ == 0) ? 0 : (XLOG_BLCKSZ - (endptr) % XLOG_BLCKSZ))
+	(((endptr) % wal_blck_size == 0) ? 0 : (wal_blck_size - (endptr) % wal_blck_size))
 
 /* Macro to advance to next buffer index. */
 #define NextBufIdx(idx)		\
@@ -730,12 +743,12 @@ static ControlFileData *ControlFile = NULL;
  * would hold if it was in cache, the page containing 'recptr'.
  */
 #define XLogRecPtrToBufIdx(recptr)	\
-	(((recptr) / XLOG_BLCKSZ) % (XLogCtl->XLogCacheBlck + 1))
+	(((recptr) / wal_blck_size) % (XLogCtl->XLogCacheBlck + 1))
 
 /*
  * These are the number of bytes in a WAL page usable for WAL data.
  */
-#define UsableBytesInPage (XLOG_BLCKSZ - SizeOfXLogShortPHD)
+#define UsableBytesInPage (wal_blck_size - SizeOfXLogShortPHD)
 
 /* Convert min_wal_size_mb and max wal_size_mb to equivalent segment count */
 #define ConvertToXSegs(x, segsize)	\
@@ -1110,7 +1123,7 @@ XLogInsertRecord(XLogRecData *rdata,
 	/*
 	 * Update shared LogwrtRqst.Write, if we crossed page boundary.
 	 */
-	if (StartPos / XLOG_BLCKSZ != EndPos / XLOG_BLCKSZ)
+	if (StartPos / wal_blck_size != EndPos / wal_blck_size)
 	{
 		SpinLockAcquire(&XLogCtl->info_lck);
 		/* advance global request to include new block(s) */
@@ -1139,11 +1152,11 @@ XLogInsertRecord(XLogRecData *rdata,
 		if (inserted)
 		{
 			EndPos = StartPos + SizeOfXLogRecord;
-			if (StartPos / XLOG_BLCKSZ != EndPos / XLOG_BLCKSZ)
+			if (StartPos / wal_blck_size != EndPos / wal_blck_size)
 			{
-				uint64		offset = XLogSegmentOffset(EndPos, wal_segment_size);
+				uint64		offset = XLogSegmentOffset(EndPos, wal_file_size);
 
-				if (offset == EndPos % XLOG_BLCKSZ)
+				if (offset == EndPos % wal_blck_size)
 					EndPos += SizeOfXLogLongPHD;
 				else
 					EndPos += SizeOfXLogShortPHD;
@@ -1176,7 +1189,7 @@ XLogInsertRecord(XLogRecData *rdata,
 			appendBinaryStringInfo(&recordBuf, rdata->data, rdata->len);
 
 		if (!debug_reader)
-			debug_reader = XLogReaderAllocate(wal_segment_size, NULL, NULL);
+			debug_reader = XLogReaderAllocate(wal_file_size, NULL, NULL);
 
 		if (!debug_reader)
 		{
@@ -1302,7 +1315,7 @@ ReserveXLogSwitch(XLogRecPtr *StartPos, XLogRecPtr *EndPos, XLogRecPtr *PrevPtr)
 	startbytepos = Insert->CurrBytePos;
 
 	ptr = XLogBytePosToEndRecPtr(startbytepos);
-	if (XLogSegmentOffset(ptr, wal_segment_size) == 0)
+	if (XLogSegmentOffset(ptr, wal_file_size) == 0)
 	{
 		SpinLockRelease(&Insert->insertpos_lck);
 		*EndPos = *StartPos = ptr;
@@ -1315,8 +1328,8 @@ ReserveXLogSwitch(XLogRecPtr *StartPos, XLogRecPtr *EndPos, XLogRecPtr *PrevPtr)
 	*StartPos = XLogBytePosToRecPtr(startbytepos);
 	*EndPos = XLogBytePosToEndRecPtr(endbytepos);
 
-	segleft = wal_segment_size - XLogSegmentOffset(*EndPos, wal_segment_size);
-	if (segleft != wal_segment_size)
+	segleft = wal_file_size - XLogSegmentOffset(*EndPos, wal_file_size);
+	if (segleft != wal_file_size)
 	{
 		/* consume the rest of the segment */
 		*EndPos += segleft;
@@ -1329,7 +1342,7 @@ ReserveXLogSwitch(XLogRecPtr *StartPos, XLogRecPtr *EndPos, XLogRecPtr *PrevPtr)
 
 	*PrevPtr = XLogBytePosToRecPtr(prevbytepos);
 
-	Assert(XLogSegmentOffset(*EndPos, wal_segment_size) == 0);
+	Assert(XLogSegmentOffset(*EndPos, wal_file_size) == 0);
 	Assert(XLogRecPtrToBytePos(*EndPos) == endbytepos);
 	Assert(XLogRecPtrToBytePos(*StartPos) == startbytepos);
 	Assert(XLogRecPtrToBytePos(*PrevPtr) == prevbytepos);
@@ -1402,7 +1415,7 @@ checkXLogConsistency(XLogReaderState *record)
 		 * Take a copy of the local page where WAL has been applied to have a
 		 * comparison base before masking it...
 		 */
-		memcpy(replay_image_masked, page, BLCKSZ);
+		memcpy(replay_image_masked, page, rel_blck_size);
 
 		/* No need for this page anymore now that a copy is in. */
 		UnlockReleaseBuffer(buf);
@@ -1435,7 +1448,7 @@ checkXLogConsistency(XLogReaderState *record)
 		}
 
 		/* Time to compare the master and replay images. */
-		if (memcmp(replay_image_masked, master_image_masked, BLCKSZ) != 0)
+		if (memcmp(replay_image_masked, master_image_masked, rel_blck_size) != 0)
 		{
 			elog(FATAL,
 				 "inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u",
@@ -1485,7 +1498,7 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
 			/*
 			 * Write what fits on this page, and continue on the next page.
 			 */
-			Assert(CurrPos % XLOG_BLCKSZ >= SizeOfXLogShortPHD || freespace == 0);
+			Assert(CurrPos % wal_blck_size >= SizeOfXLogShortPHD || freespace == 0);
 			memcpy(currpos, rdata_data, freespace);
 			rdata_data += freespace;
 			rdata_len -= freespace;
@@ -1507,7 +1520,7 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
 			pagehdr->xlp_info |= XLP_FIRST_IS_CONTRECORD;
 
 			/* skip over the page header */
-			if (XLogSegmentOffset(CurrPos, wal_segment_size) == 0)
+			if (XLogSegmentOffset(CurrPos, wal_file_size) == 0)
 			{
 				CurrPos += SizeOfXLogLongPHD;
 				currpos += SizeOfXLogLongPHD;
@@ -1520,7 +1533,7 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
 			freespace = INSERT_FREESPACE(CurrPos);
 		}
 
-		Assert(CurrPos % XLOG_BLCKSZ >= SizeOfXLogShortPHD || rdata_len == 0);
+		Assert(CurrPos % wal_blck_size >= SizeOfXLogShortPHD || rdata_len == 0);
 		memcpy(currpos, rdata_data, rdata_len);
 		currpos += rdata_len;
 		CurrPos += rdata_len;
@@ -1538,16 +1551,16 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
 	 * allocated and zeroed in the WAL buffers so that when the caller (or
 	 * someone else) does XLogWrite(), it can really write out all the zeros.
 	 */
-	if (isLogSwitch && XLogSegmentOffset(CurrPos, wal_segment_size) != 0)
+	if (isLogSwitch && XLogSegmentOffset(CurrPos, wal_file_size) != 0)
 	{
 		/* An xlog-switch record doesn't contain any data besides the header */
 		Assert(write_len == SizeOfXLogRecord);
 
 		/*
 		 * We do this one page at a time, to make sure we don't deadlock
-		 * against ourselves if wal_buffers < wal_segment_size.
+		 * against ourselves if wal_buffers < wal_file_size.
 		 */
-		Assert(XLogSegmentOffset(EndPos, wal_segment_size) == 0);
+		Assert(XLogSegmentOffset(EndPos, wal_file_size) == 0);
 
 		/* Use up all the remaining space on the first page */
 		CurrPos += freespace;
@@ -1557,7 +1570,7 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
 			/* initialize the next page (if not initialized already) */
 			WALInsertLockUpdateInsertingAt(CurrPos);
 			AdvanceXLInsertBuffer(CurrPos, false);
-			CurrPos += XLOG_BLCKSZ;
+			CurrPos += wal_blck_size;
 		}
 	}
 	else
@@ -1818,11 +1831,11 @@ GetXLogBuffer(XLogRecPtr ptr)
 	 * Fast path for the common case that we need to access again the same
 	 * page as last time.
 	 */
-	if (ptr / XLOG_BLCKSZ == cachedPage)
+	if (ptr / wal_blck_size == cachedPage)
 	{
 		Assert(((XLogPageHeader) cachedPos)->xlp_magic == XLOG_PAGE_MAGIC);
-		Assert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % XLOG_BLCKSZ));
-		return cachedPos + ptr % XLOG_BLCKSZ;
+		Assert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % wal_blck_size));
+		return cachedPos + ptr % wal_blck_size;
 	}
 
 	/*
@@ -1850,7 +1863,7 @@ GetXLogBuffer(XLogRecPtr ptr)
 	 * holding the lock.
 	 */
 	expectedEndPtr = ptr;
-	expectedEndPtr += XLOG_BLCKSZ - ptr % XLOG_BLCKSZ;
+	expectedEndPtr += wal_blck_size - ptr % wal_blck_size;
 
 	endptr = XLogCtl->xlblocks[idx];
 	if (expectedEndPtr != endptr)
@@ -1871,11 +1884,11 @@ GetXLogBuffer(XLogRecPtr ptr)
 		 * sure that it's initialized, before we let insertingAt to move past
 		 * the page header.
 		 */
-		if (ptr % XLOG_BLCKSZ == SizeOfXLogShortPHD &&
-			XLogSegmentOffset(ptr, wal_segment_size) > XLOG_BLCKSZ)
+		if (ptr % wal_blck_size == SizeOfXLogShortPHD &&
+			XLogSegmentOffset(ptr, wal_file_size) > wal_blck_size)
 			initializedUpto = ptr - SizeOfXLogShortPHD;
-		else if (ptr % XLOG_BLCKSZ == SizeOfXLogLongPHD &&
-				 XLogSegmentOffset(ptr, wal_segment_size) < XLOG_BLCKSZ)
+		else if (ptr % wal_blck_size == SizeOfXLogLongPHD &&
+				 XLogSegmentOffset(ptr, wal_file_size) < wal_blck_size)
 			initializedUpto = ptr - SizeOfXLogLongPHD;
 		else
 			initializedUpto = ptr;
@@ -1902,13 +1915,13 @@ GetXLogBuffer(XLogRecPtr ptr)
 	 * Found the buffer holding this page. Return a pointer to the right
 	 * offset within the page.
 	 */
-	cachedPage = ptr / XLOG_BLCKSZ;
-	cachedPos = XLogCtl->pages + idx * (Size) XLOG_BLCKSZ;
+	cachedPage = ptr / wal_blck_size;
+	cachedPos = XLogCtl->pages + idx * (Size) wal_blck_size;
 
 	Assert(((XLogPageHeader) cachedPos)->xlp_magic == XLOG_PAGE_MAGIC);
-	Assert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % XLOG_BLCKSZ));
+	Assert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % wal_blck_size));
 
-	return cachedPos + ptr % XLOG_BLCKSZ;
+	return cachedPos + ptr % wal_blck_size;
 }
 
 /*
@@ -1928,7 +1941,7 @@ XLogBytePosToRecPtr(uint64 bytepos)
 	fullsegs = bytepos / UsableBytesInSegment;
 	bytesleft = bytepos % UsableBytesInSegment;
 
-	if (bytesleft < XLOG_BLCKSZ - SizeOfXLogLongPHD)
+	if (bytesleft < wal_blck_size - SizeOfXLogLongPHD)
 	{
 		/* fits on first page of segment */
 		seg_offset = bytesleft + SizeOfXLogLongPHD;
@@ -1936,16 +1949,16 @@ XLogBytePosToRecPtr(uint64 bytepos)
 	else
 	{
 		/* account for the first page on segment with long header */
-		seg_offset = XLOG_BLCKSZ;
-		bytesleft -= XLOG_BLCKSZ - SizeOfXLogLongPHD;
+		seg_offset = wal_blck_size;
+		bytesleft -= wal_blck_size - SizeOfXLogLongPHD;
 
 		fullpages = bytesleft / UsableBytesInPage;
 		bytesleft = bytesleft % UsableBytesInPage;
 
-		seg_offset += fullpages * XLOG_BLCKSZ + bytesleft + SizeOfXLogShortPHD;
+		seg_offset += fullpages * wal_blck_size + bytesleft + SizeOfXLogShortPHD;
 	}
 
-	XLogSegNoOffsetToRecPtr(fullsegs, seg_offset, result, wal_segment_size);
+	XLogSegNoOffsetToRecPtr(fullsegs, seg_offset, result, wal_file_size);
 
 	return result;
 }
@@ -1968,7 +1981,7 @@ XLogBytePosToEndRecPtr(uint64 bytepos)
 	fullsegs = bytepos / UsableBytesInSegment;
 	bytesleft = bytepos % UsableBytesInSegment;
 
-	if (bytesleft < XLOG_BLCKSZ - SizeOfXLogLongPHD)
+	if (bytesleft < wal_blck_size - SizeOfXLogLongPHD)
 	{
 		/* fits on first page of segment */
 		if (bytesleft == 0)
@@ -1979,19 +1992,19 @@ XLogBytePosToEndRecPtr(uint64 bytepos)
 	else
 	{
 		/* account for the first page on segment with long header */
-		seg_offset = XLOG_BLCKSZ;
-		bytesleft -= XLOG_BLCKSZ - SizeOfXLogLongPHD;
+		seg_offset = wal_blck_size;
+		bytesleft -= wal_blck_size - SizeOfXLogLongPHD;
 
 		fullpages = bytesleft / UsableBytesInPage;
 		bytesleft = bytesleft % UsableBytesInPage;
 
 		if (bytesleft == 0)
-			seg_offset += fullpages * XLOG_BLCKSZ + bytesleft;
+			seg_offset += fullpages * wal_blck_size + bytesleft;
 		else
-			seg_offset += fullpages * XLOG_BLCKSZ + bytesleft + SizeOfXLogShortPHD;
+			seg_offset += fullpages * wal_blck_size + bytesleft + SizeOfXLogShortPHD;
 	}
 
-	XLogSegNoOffsetToRecPtr(fullsegs, seg_offset, result, wal_segment_size);
+	XLogSegNoOffsetToRecPtr(fullsegs, seg_offset, result, wal_file_size);
 
 	return result;
 }
@@ -2007,10 +2020,10 @@ XLogRecPtrToBytePos(XLogRecPtr ptr)
 	uint32		offset;
 	uint64		result;
 
-	XLByteToSeg(ptr, fullsegs, wal_segment_size);
+	XLByteToSeg(ptr, fullsegs, wal_file_size);
 
-	fullpages = (XLogSegmentOffset(ptr, wal_segment_size)) / XLOG_BLCKSZ;
-	offset = ptr % XLOG_BLCKSZ;
+	fullpages = (XLogSegmentOffset(ptr, wal_file_size)) / wal_blck_size;
+	offset = ptr % wal_blck_size;
 
 	if (fullpages == 0)
 	{
@@ -2024,7 +2037,7 @@ XLogRecPtrToBytePos(XLogRecPtr ptr)
 	else
 	{
 		result = fullsegs * UsableBytesInSegment +
-			(XLOG_BLCKSZ - SizeOfXLogLongPHD) + /* account for first page */
+			(wal_blck_size - SizeOfXLogLongPHD) + /* account for first page */
 			(fullpages - 1) * UsableBytesInPage;	/* full pages */
 		if (offset > 0)
 		{
@@ -2132,17 +2145,17 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, bool opportunistic)
 		 * next output page.
 		 */
 		NewPageBeginPtr = XLogCtl->InitializedUpTo;
-		NewPageEndPtr = NewPageBeginPtr + XLOG_BLCKSZ;
+		NewPageEndPtr = NewPageBeginPtr + wal_blck_size;
 
 		Assert(XLogRecPtrToBufIdx(NewPageBeginPtr) == nextidx);
 
-		NewPage = (XLogPageHeader) (XLogCtl->pages + nextidx * (Size) XLOG_BLCKSZ);
+		NewPage = (XLogPageHeader) (XLogCtl->pages + nextidx * (Size) wal_blck_size);
 
 		/*
 		 * Be sure to re-zero the buffer so that bytes beyond what we've
 		 * written will look like zeroes and not valid XLOG records...
 		 */
-		MemSet((char *) NewPage, 0, XLOG_BLCKSZ);
+		MemSet((char *) NewPage, 0, wal_blck_size);
 
 		/*
 		 * Fill the new page's header
@@ -2174,13 +2187,13 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, bool opportunistic)
 		/*
 		 * If first page of an XLOG segment file, make it a long header.
 		 */
-		if ((XLogSegmentOffset(NewPage->xlp_pageaddr, wal_segment_size)) == 0)
+		if ((XLogSegmentOffset(NewPage->xlp_pageaddr, wal_file_size)) == 0)
 		{
 			XLogLongPageHeader NewLongPage = (XLogLongPageHeader) NewPage;
 
 			NewLongPage->xlp_sysid = ControlFile->system_identifier;
-			NewLongPage->xlp_seg_size = wal_segment_size;
-			NewLongPage->xlp_xlog_blcksz = XLOG_BLCKSZ;
+			NewLongPage->xlp_seg_size = wal_file_size;
+			NewLongPage->xlp_xlog_blcksz = wal_blck_size;
 			NewPage->xlp_info |= XLP_LONG_HEADER;
 		}
 
@@ -2231,7 +2244,7 @@ CalculateCheckpointSegments(void)
 	 *	  number of segments consumed between checkpoints.
 	 *-------
 	 */
-	target = (double) ConvertToXSegs(max_wal_size_mb, wal_segment_size) /
+	target = (double) ConvertToXSegs(max_wal_size_mb, wal_file_size) /
 		(1.0 + CheckPointCompletionTarget);
 
 	/* round down */
@@ -2244,6 +2257,7 @@ CalculateCheckpointSegments(void)
 void
 assign_max_wal_size(int newval, void *extra)
 {
+	//fprintf(stderr, "assign_max_wal_size\n");
 	max_wal_size_mb = newval;
 	CalculateCheckpointSegments();
 }
@@ -2272,10 +2286,10 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
 	 * correspond to. Always recycle enough segments to meet the minimum, and
 	 * remove enough segments to stay below the maximum.
 	 */
-	minSegNo = PriorRedoPtr / wal_segment_size +
-		ConvertToXSegs(min_wal_size_mb, wal_segment_size) - 1;
-	maxSegNo = PriorRedoPtr / wal_segment_size +
-		ConvertToXSegs(max_wal_size_mb, wal_segment_size) - 1;
+	minSegNo = PriorRedoPtr / wal_file_size + 
+		ConvertToXSegs(min_wal_size_mb, wal_file_size) - 1;
+	maxSegNo = PriorRedoPtr / wal_file_size +
+		ConvertToXSegs(max_wal_size_mb, wal_file_size) - 1;
 
 	/*
 	 * Between those limits, recycle enough segments to get us through to the
@@ -2290,7 +2304,7 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
 	distance *= 1.10;
 
 	recycleSegNo = (XLogSegNo) ceil(((double) PriorRedoPtr + distance) /
-									wal_segment_size);
+									wal_file_size);
 
 	if (recycleSegNo < minSegNo)
 		recycleSegNo = minSegNo;
@@ -2314,7 +2328,7 @@ XLogCheckpointNeeded(XLogSegNo new_segno)
 {
 	XLogSegNo	old_segno;
 
-	XLByteToSeg(RedoRecPtr, old_segno, wal_segment_size);
+	XLByteToSeg(RedoRecPtr, old_segno, wal_file_size);
 
 	if (new_segno >= old_segno + (uint64) (CheckPointSegments - 1))
 		return true;
@@ -2393,7 +2407,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 		ispartialpage = WriteRqst.Write < LogwrtResult.Write;
 
 		if (!XLByteInPrevSeg(LogwrtResult.Write, openLogSegNo,
-							 wal_segment_size))
+							 wal_file_size))
 		{
 			/*
 			 * Switch to new logfile segment.  We cannot have any pending
@@ -2403,7 +2417,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 			if (openLogFile >= 0)
 				XLogFileClose();
 			XLByteToPrevSeg(LogwrtResult.Write, openLogSegNo,
-							wal_segment_size);
+							wal_file_size);
 
 			/* create/use new log file */
 			use_existent = true;
@@ -2415,7 +2429,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 		if (openLogFile < 0)
 		{
 			XLByteToPrevSeg(LogwrtResult.Write, openLogSegNo,
-							wal_segment_size);
+							wal_file_size);
 			openLogFile = XLogFileOpen(openLogSegNo);
 			openLogOff = 0;
 		}
@@ -2425,8 +2439,8 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 		{
 			/* first of group */
 			startidx = curridx;
-			startoffset = XLogSegmentOffset(LogwrtResult.Write - XLOG_BLCKSZ,
-											wal_segment_size);
+			startoffset = XLogSegmentOffset(LogwrtResult.Write - wal_blck_size,
+											wal_file_size);
 		}
 		npages++;
 
@@ -2439,7 +2453,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 		last_iteration = WriteRqst.Write <= LogwrtResult.Write;
 
 		finishing_seg = !ispartialpage &&
-			(startoffset + npages * XLOG_BLCKSZ) >= wal_segment_size;
+			(startoffset + npages * wal_blck_size) >= wal_file_size;
 
 		if (last_iteration ||
 			curridx == XLogCtl->XLogCacheBlck ||
@@ -2463,8 +2477,8 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 			}
 
 			/* OK to write the page(s) */
-			from = XLogCtl->pages + startidx * (Size) XLOG_BLCKSZ;
-			nbytes = npages * (Size) XLOG_BLCKSZ;
+			from = XLogCtl->pages + startidx * (Size) wal_blck_size;
+			nbytes = npages * (Size) wal_blck_size;
 			nleft = nbytes;
 			do
 			{
@@ -2567,12 +2581,12 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)
 		{
 			if (openLogFile >= 0 &&
 				!XLByteInPrevSeg(LogwrtResult.Write, openLogSegNo,
-								 wal_segment_size))
+								 wal_file_size))
 				XLogFileClose();
 			if (openLogFile < 0)
 			{
 				XLByteToPrevSeg(LogwrtResult.Write, openLogSegNo,
-								wal_segment_size);
+								wal_file_size);
 				openLogFile = XLogFileOpen(openLogSegNo);
 				openLogOff = 0;
 			}
@@ -2630,7 +2644,7 @@ XLogSetAsyncXactLSN(XLogRecPtr asyncXactLSN)
 	if (!sleeping)
 	{
 		/* back off to last completed page boundary */
-		WriteRqstPtr -= WriteRqstPtr % XLOG_BLCKSZ;
+		WriteRqstPtr -= WriteRqstPtr % wal_blck_size;
 
 		/* if we have already flushed that far, we're done */
 		if (WriteRqstPtr <= LogwrtResult.Flush)
@@ -2968,7 +2982,7 @@ XLogBackgroundFlush(void)
 	SpinLockRelease(&XLogCtl->info_lck);
 
 	/* back off to last completed page boundary */
-	WriteRqst.Write -= WriteRqst.Write % XLOG_BLCKSZ;
+	WriteRqst.Write -= WriteRqst.Write % wal_blck_size;
 
 	/* if we have already flushed that far, consider async commit records */
 	if (WriteRqst.Write <= LogwrtResult.Flush)
@@ -2989,7 +3003,7 @@ XLogBackgroundFlush(void)
 		if (openLogFile >= 0)
 		{
 			if (!XLByteInPrevSeg(LogwrtResult.Write, openLogSegNo,
-								 wal_segment_size))
+								 wal_file_size))
 			{
 				XLogFileClose();
 			}
@@ -3003,7 +3017,7 @@ XLogBackgroundFlush(void)
 	 */
 	now = GetCurrentTimestamp();
 	flushbytes =
-		WriteRqst.Write / XLOG_BLCKSZ - LogwrtResult.Flush / XLOG_BLCKSZ;
+		WriteRqst.Write / wal_blck_size - LogwrtResult.Flush / wal_blck_size;
 
 	if (WalWriterFlushAfter == 0 || lastflush == 0)
 	{
@@ -3161,14 +3175,16 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 {
 	char		path[MAXPGPATH];
 	char		tmppath[MAXPGPATH];
-	char		zbuffer_raw[XLOG_BLCKSZ + MAXIMUM_ALIGNOF];
+	char		zbuffer_raw[wal_blck_size + MAXIMUM_ALIGNOF];
 	char	   *zbuffer;
 	XLogSegNo	installed_segno;
 	XLogSegNo	max_segno;
 	int			fd;
 	int			nbytes;
 
-	XLogFilePath(path, ThisTimeLineID, logsegno, wal_segment_size);
+	debug_xlog("XLogFileInit start\n");
+
+	XLogFilePath(path, ThisTimeLineID, logsegno, wal_file_size);
 
 	/*
 	 * Try to use existent file (checkpoint maker may have created it already)
@@ -3183,8 +3199,10 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 						(errcode_for_file_access(),
 						 errmsg("could not open file \"%s\": %m", path)));
 		}
-		else
+		else {
+			debug_xlog("XLogFileInit fd = %d\n", fd);
 			return fd;
+		}
 	}
 
 	/*
@@ -3193,6 +3211,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 	 * pre-creating an extra log segment.  That seems OK, and better than
 	 * holding the lock throughout this lengthy process.
 	 */
+	debug_xlog("creating and filling new WAL file\n");
 	elog(DEBUG2, "creating and filling new WAL file");
 
 	snprintf(tmppath, MAXPGPATH, XLOGDIR "/xlogtemp.%d", (int) getpid());
@@ -3201,10 +3220,14 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 
 	/* do not use get_sync_bit() here --- want to fsync only at end of fill */
 	fd = BasicOpenFile(tmppath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);
-	if (fd < 0)
+	if (fd < 0) {
+		debug_xlog("could not create file %s", tmppath);
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not create file \"%s\": %m", tmppath)));
+	} 
+
+	debug_xlog("file %s opened\n", tmppath);
 
 	/*
 	 * Zero-fill the file.  We have to do this the hard way to ensure that all
@@ -3219,18 +3242,19 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 	 * cycles transferring data to the kernel.
 	 */
 	zbuffer = (char *) MAXALIGN(zbuffer_raw);
-	memset(zbuffer, 0, XLOG_BLCKSZ);
-	for (nbytes = 0; nbytes < wal_segment_size; nbytes += XLOG_BLCKSZ)
+	memset(zbuffer, 0, wal_blck_size);
+	for (nbytes = 0; nbytes < wal_file_size; nbytes += wal_blck_size)
 	{
 		errno = 0;
 		pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_WRITE);
-		if ((int) write(fd, zbuffer, XLOG_BLCKSZ) != (int) XLOG_BLCKSZ)
+		if ((int) write(fd, zbuffer, wal_blck_size) != (int) wal_blck_size)
 		{
 			int			save_errno = errno;
 
 			/*
 			 * If we fail to make the file, delete it to release disk space
 			 */
+			debug_xlog("failed to write to file %s\n", tmppath);
 			unlink(tmppath);
 
 			close(fd);
@@ -3242,8 +3266,10 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 					(errcode_for_file_access(),
 					 errmsg("could not write to file \"%s\": %m", tmppath)));
 		}
+
 		pgstat_report_wait_end();
 	}
+	debug_xlog("file %s written\n", tmppath);
 
 	pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
 	if (pg_fsync(fd) != 0)
@@ -3255,6 +3281,8 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 	}
 	pgstat_report_wait_end();
 
+	debug_xlog("file %s synced\n", tmppath);
+
 	if (close(fd))
 		ereport(ERROR,
 				(errcode_for_file_access(),
@@ -3279,6 +3307,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 	 * the prior checkpoint's redo location. So somewhat arbitrarily, just use
 	 * CheckPointSegments.
 	 */
+	debug_xlog("step 1\n");
 	max_segno = logsegno + CheckPointSegments;
 	if (!InstallXLogFileSegment(&installed_segno, tmppath,
 								*use_existent, max_segno,
@@ -3296,6 +3325,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 	*use_existent = false;
 
 	/* Now open original target segment (might not be file I just made) */
+	debug_xlog("step 2\n");
 	fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method));
 	if (fd < 0)
 		ereport(ERROR,
@@ -3304,6 +3334,9 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 
 	elog(DEBUG2, "done creating and filling new WAL file");
 
+	
+	debug_xlog("step 3\n");
+
 	return fd;
 }
 
@@ -3328,7 +3361,7 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno,
 {
 	char		path[MAXPGPATH];
 	char		tmppath[MAXPGPATH];
-	char		buffer[XLOG_BLCKSZ];
+	char		buffer[wal_blck_size];
 	int			srcfd;
 	int			fd;
 	int			nbytes;
@@ -3336,7 +3369,7 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno,
 	/*
 	 * Open the source file
 	 */
-	XLogFilePath(path, srcTLI, srcsegno, wal_segment_size);
+	XLogFilePath(path, srcTLI, srcsegno, wal_file_size);
 	srcfd = OpenTransientFile(path, O_RDONLY | PG_BINARY);
 	if (srcfd < 0)
 		ereport(ERROR,
@@ -3360,7 +3393,7 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno,
 	/*
 	 * Do the data copying.
 	 */
-	for (nbytes = 0; nbytes < wal_segment_size; nbytes += sizeof(buffer))
+	for (nbytes = 0; nbytes < wal_file_size; nbytes += sizeof(buffer))
 	{
 		int			nread;
 
@@ -3470,7 +3503,13 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 	char		path[MAXPGPATH];
 	struct stat stat_buf;
 
-	XLogFilePath(path, ThisTimeLineID, *segno, wal_segment_size);
+	debug_xlog("InstallXLogFileSegment\n");
+	debug_xlog("InstallXLogFileSegment segno = %lu\n", *segno);
+
+	XLogFilePath(path, ThisTimeLineID, *segno, wal_file_size);
+
+	debug_xlog("InstallXLogFileSegment path = %s\n", path);
+	debug_xlog("InstallXLogFileSegment tmppath = %s\n", tmppath);
 
 	/*
 	 * We want to be sure that only one process does this at a time.
@@ -3481,13 +3520,16 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 	if (!find_free)
 	{
 		/* Force installation: get rid of any pre-existing segment file */
+		debug_xlog("InstallXLogFileSegment durable_unlink\n");
 		durable_unlink(path, DEBUG1);
 	}
 	else
 	{
 		/* Find a free slot to put it in */
+		debug_xlog("InstallXLogFileSegment find free slot\n");
 		while (stat(path, &stat_buf) == 0)
 		{
+			debug_xlog("InstallXLogFileSegment stat\n");
 			if ((*segno) >= max_segno)
 			{
 				/* Failed to find a free slot within specified range */
@@ -3496,7 +3538,8 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 				return false;
 			}
 			(*segno)++;
-			XLogFilePath(path, ThisTimeLineID, *segno, wal_segment_size);
+			XLogFilePath(path, ThisTimeLineID, *segno, wal_file_size);
+			debug_xlog("InstallXLogFileSegment path = %s\n", path);
 		}
 	}
 
@@ -3506,6 +3549,7 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 	 */
 	if (durable_link_or_rename(tmppath, path, LOG) != 0)
 	{
+		debug_xlog("InstallXLogFileSegment done install\n");
 		if (use_lock)
 			LWLockRelease(ControlFileLock);
 		/* durable_link_or_rename already emitted log message */
@@ -3515,6 +3559,8 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 	if (use_lock)
 		LWLockRelease(ControlFileLock);
 
+	debug_xlog("InstallXLogFileSegment end\n");
+
 	return true;
 }
 
@@ -3527,7 +3573,7 @@ XLogFileOpen(XLogSegNo segno)
 	char		path[MAXPGPATH];
 	int			fd;
 
-	XLogFilePath(path, ThisTimeLineID, segno, wal_segment_size);
+	XLogFilePath(path, ThisTimeLineID, segno, wal_file_size);
 
 	fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method));
 	if (fd < 0)
@@ -3553,7 +3599,10 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 	char		path[MAXPGPATH];
 	int			fd;
 
-	XLogFileName(xlogfname, tli, segno, wal_segment_size);
+	XLogFileName(xlogfname, tli, segno, wal_file_size);
+
+	debug_xlog("XLogFileRead: xlogfname = %s, tli = %d, segno = %lu, wal_file_size = %lu\n",
+			xlogfname, tli, segno, wal_file_size);
 
 	switch (source)
 	{
@@ -3565,7 +3614,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 
 			restoredFromArchive = RestoreArchivedFile(path, xlogfname,
 													  "RECOVERYXLOG",
-													  wal_segment_size,
+													  wal_file_size,
 													  InRedo);
 			if (!restoredFromArchive)
 				return -1;
@@ -3573,7 +3622,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 
 		case XLOG_FROM_PG_WAL:
 		case XLOG_FROM_STREAM:
-			XLogFilePath(path, tli, segno, wal_segment_size);
+			XLogFilePath(path, tli, segno, wal_file_size);
 			restoredFromArchive = false;
 			break;
 
@@ -3581,6 +3630,8 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 			elog(ERROR, "invalid XLogFileRead source %d", source);
 	}
 
+	debug_xlog("XLogFilePath = %s\n", path);
+
 	/*
 	 * If the segment was fetched from archival storage, replace the existing
 	 * xlog segment (if any) with the archival version.
@@ -3598,6 +3649,8 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 	fd = BasicOpenFile(path, O_RDONLY | PG_BINARY);
 	if (fd >= 0)
 	{
+		debug_xlog("BasicOpenFile successful\n");
+
 		/* Success! */
 		curFileTLI = tli;
 
@@ -3614,7 +3667,10 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
 			XLogReceiptTime = GetCurrentTimestamp();
 
 		return fd;
+	} else {
+		debug_xlog("failed to open %s\n", path);
 	}
+
 	if (errno != ENOENT || !notfoundOk) /* unexpected failure? */
 		ereport(PANIC,
 				(errcode_for_file_access(),
@@ -3635,6 +3691,9 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, int source)
 	int			fd;
 	List	   *tles;
 
+	debug_xlog("XLogFileReadAnyTLI: XLogSegNo = %lu, emode = %d, source = %d\n",
+		segno, emode, source);
+
 	/*
 	 * Loop looking for a suitable timeline ID: we might need to read any of
 	 * the timelines listed in expectedTLEs.
@@ -3662,11 +3721,13 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, int source)
 	{
 		TimeLineID	tli = ((TimeLineHistoryEntry *) lfirst(cell))->tli;
 
+		debug_xlog("tli = %d\n", tli);
 		if (tli < curFileTLI)
 			break;				/* don't bother looking at too-old TLIs */
 
 		if (source == XLOG_FROM_ANY || source == XLOG_FROM_ARCHIVE)
 		{
+			debug_xlog("XLogFileRead 1\n");
 			fd = XLogFileRead(segno, emode, tli,
 							  XLOG_FROM_ARCHIVE, true);
 			if (fd != -1)
@@ -3680,19 +3741,22 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, int source)
 
 		if (source == XLOG_FROM_ANY || source == XLOG_FROM_PG_WAL)
 		{
+			debug_xlog("XLogFileRead 2\n");
 			fd = XLogFileRead(segno, emode, tli,
 							  XLOG_FROM_PG_WAL, true);
 			if (fd != -1)
 			{
+				debug_xlog("XLogFileRead 2 1\n");
 				if (!expectedTLEs)
 					expectedTLEs = tles;
 				return fd;
 			}
+			debug_xlog("fd = %d\n", fd);
 		}
 	}
 
 	/* Couldn't find it.  For simplicity, complain about front timeline */
-	XLogFilePath(path, recoveryTargetTLI, segno, wal_segment_size);
+	XLogFilePath(path, recoveryTargetTLI, segno, wal_file_size);
 	errno = ENOENT;
 	ereport(emode,
 			(errcode_for_file_access(),
@@ -3745,9 +3809,9 @@ PreallocXlogFiles(XLogRecPtr endptr)
 	bool		use_existent;
 	uint64		offset;
 
-	XLByteToPrevSeg(endptr, _logSegNo, wal_segment_size);
-	offset = XLogSegmentOffset(endptr - 1, wal_segment_size);
-	if (offset >= (uint32) (0.75 * wal_segment_size))
+	XLByteToPrevSeg(endptr, _logSegNo, wal_file_size);
+	offset = XLogSegmentOffset(endptr - 1, wal_file_size);
+	if (offset >= (uint32) (0.75 * wal_file_size))
 	{
 		_logSegNo++;
 		use_existent = true;
@@ -3778,7 +3842,7 @@ CheckXLogRemoved(XLogSegNo segno, TimeLineID tli)
 	{
 		char		filename[MAXFNAMELEN];
 
-		XLogFileName(filename, tli, segno, wal_segment_size);
+		XLogFileName(filename, tli, segno, wal_file_size);
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("requested WAL segment %s has already been removed",
@@ -3815,7 +3879,7 @@ UpdateLastRemovedPtr(char *filename)
 	uint32		tli;
 	XLogSegNo	segno;
 
-	XLogFromFileName(filename, &tli, &segno, wal_segment_size);
+	XLogFromFileName(filename, &tli, &segno, wal_file_size);
 
 	SpinLockAcquire(&XLogCtl->info_lck);
 	if (segno > XLogCtl->lastRemovedSegNo)
@@ -3849,7 +3913,7 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
 	 * doesn't matter, we ignore that in the comparison. (During recovery,
 	 * ThisTimeLineID isn't set, so we can't use that.)
 	 */
-	XLogFileName(lastoff, 0, segno, wal_segment_size);
+	XLogFileName(lastoff, 0, segno, wal_file_size);
 
 	elog(DEBUG2, "attempting to remove WAL segments older than log file %s",
 		 lastoff);
@@ -3910,7 +3974,7 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
 	char		switchseg[MAXFNAMELEN];
 	XLogSegNo	endLogSegNo;
 
-	XLByteToPrevSeg(switchpoint, endLogSegNo, wal_segment_size);
+	XLByteToPrevSeg(switchpoint, endLogSegNo, wal_file_size);
 
 	xldir = AllocateDir(XLOGDIR);
 	if (xldir == NULL)
@@ -3922,7 +3986,7 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
 	/*
 	 * Construct a filename of the last segment to be kept.
 	 */
-	XLogFileName(switchseg, newTLI, endLogSegNo, wal_segment_size);
+	XLogFileName(switchseg, newTLI, endLogSegNo, wal_file_size);
 
 	elog(DEBUG2, "attempting to remove WAL segments newer than log file %s",
 		 switchseg);
@@ -3978,7 +4042,7 @@ RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
 	/*
 	 * Initialize info about where to try to recycle to.
 	 */
-	XLByteToSeg(endptr, endlogSegNo, wal_segment_size);
+	XLByteToSeg(endptr, endlogSegNo, wal_file_size);
 	if (PriorRedoPtr == InvalidXLogRecPtr)
 		recycleSegNo = endlogSegNo + 10;
 	else
@@ -4153,6 +4217,7 @@ ReadRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr, int emode,
 	XLogPageReadPrivate *private = (XLogPageReadPrivate *) xlogreader->private_data;
 
 	/* Pass through parameters to XLogPageRead */
+	debug_xlog("private building\n");
 	private->fetching_ckpt = fetching_ckpt;
 	private->emode = emode;
 	private->randAccess = (RecPtr != InvalidXLogRecPtr);
@@ -4164,11 +4229,13 @@ ReadRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr, int emode,
 	{
 		char	   *errormsg;
 
+		debug_xlog("XLogReadRecord\n");
 		record = XLogReadRecord(xlogreader, RecPtr, &errormsg);
 		ReadRecPtr = xlogreader->ReadRecPtr;
 		EndRecPtr = xlogreader->EndRecPtr;
 		if (record == NULL)
 		{
+			debug_xlog("record is null\n");
 			if (readFile >= 0)
 			{
 				close(readFile);
@@ -4196,11 +4263,12 @@ ReadRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr, int emode,
 			XLogSegNo	segno;
 			int32		offset;
 
-			XLByteToSeg(xlogreader->latestPagePtr, segno, wal_segment_size);
+			debug_xlog("Check page TLI is one of the expected values\n");
+			XLByteToSeg(xlogreader->latestPagePtr, segno, wal_file_size);
 			offset = XLogSegmentOffset(xlogreader->latestPagePtr,
-									   wal_segment_size);
+									   wal_file_size);
 			XLogFileName(fname, xlogreader->readPageTLI, segno,
-						 wal_segment_size);
+						 wal_file_size);
 			ereport(emode_for_corrupt_record(emode,
 											 RecPtr ? RecPtr : EndRecPtr),
 					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
@@ -4213,6 +4281,7 @@ ReadRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr, int emode,
 		if (record)
 		{
 			/* Great, got a record */
+			debug_xlog("return record\n");
 			return record;
 		}
 		else
@@ -4389,9 +4458,9 @@ WriteControlFile(void)
 	 * comments for these symbols in pg_control.h.
 	 */
 	StaticAssertStmt(sizeof(ControlFileData) <= PG_CONTROL_MAX_SAFE_SIZE,
-					 "pg_control is too large for atomic disk writes");
+		 "pg_control is too large for atomic disk writes");
 	StaticAssertStmt(sizeof(ControlFileData) <= PG_CONTROL_FILE_SIZE,
-					 "sizeof(ControlFileData) exceeds PG_CONTROL_FILE_SIZE");
+		 "sizeof(ControlFileData) exceeds PG_CONTROL_FILE_SIZE");
 
 	/*
 	 * Initialize version and compatibility-check fields
@@ -4402,15 +4471,16 @@ WriteControlFile(void)
 	ControlFile->maxAlign = MAXIMUM_ALIGNOF;
 	ControlFile->floatFormat = FLOATFORMAT_VALUE;
 
-	ControlFile->blcksz = BLCKSZ;
-	ControlFile->relseg_size = RELSEG_SIZE;
-	ControlFile->xlog_blcksz = XLOG_BLCKSZ;
-	ControlFile->xlog_seg_size = wal_segment_size;
+	ControlFile->blcksz =  rel_blck_size;
+	ControlFile->relseg_size = rel_file_blck;
+	ControlFile->xlog_blcksz = wal_blck_size;
+	ControlFile->xlog_seg_size = wal_file_size;
 
 	ControlFile->nameDataLen = NAMEDATALEN;
 	ControlFile->indexMaxKeys = INDEX_MAX_KEYS;
 
 	ControlFile->toast_max_chunk_size = TOAST_MAX_CHUNK_SIZE;
+	debug_xlog("ControlFile->toast_max_chunk_size = %d\n", ControlFile->toast_max_chunk_size);
 	ControlFile->loblksize = LOBLKSIZE;
 
 	ControlFile->float4ByVal = FLOAT4PASSBYVAL;
@@ -4472,7 +4542,7 @@ ReadControlFile(void)
 {
 	pg_crc32c	crc;
 	int			fd;
-	static char wal_segsz_str[20];
+	char param_str[20];
 
 	/*
 	 * Read data...
@@ -4541,6 +4611,7 @@ ReadControlFile(void)
 						   " but the server was compiled with CATALOG_VERSION_NO %d.",
 						   ControlFile->catalog_version_no, CATALOG_VERSION_NO),
 				 errhint("It looks like you need to initdb.")));
+
 	if (ControlFile->maxAlign != MAXIMUM_ALIGNOF)
 		ereport(FATAL,
 				(errmsg("database files are incompatible with server"),
@@ -4548,32 +4619,13 @@ ReadControlFile(void)
 						   " but the server was compiled with MAXALIGN %d.",
 						   ControlFile->maxAlign, MAXIMUM_ALIGNOF),
 				 errhint("It looks like you need to initdb.")));
+
 	if (ControlFile->floatFormat != FLOATFORMAT_VALUE)
 		ereport(FATAL,
 				(errmsg("database files are incompatible with server"),
 				 errdetail("The database cluster appears to use a different floating-point number format than the server executable."),
 				 errhint("It looks like you need to initdb.")));
-	if (ControlFile->blcksz != BLCKSZ)
-		ereport(FATAL,
-				(errmsg("database files are incompatible with server"),
-				 errdetail("The database cluster was initialized with BLCKSZ %d,"
-						   " but the server was compiled with BLCKSZ %d.",
-						   ControlFile->blcksz, BLCKSZ),
-				 errhint("It looks like you need to recompile or initdb.")));
-	if (ControlFile->relseg_size != RELSEG_SIZE)
-		ereport(FATAL,
-				(errmsg("database files are incompatible with server"),
-				 errdetail("The database cluster was initialized with RELSEG_SIZE %d,"
-						   " but the server was compiled with RELSEG_SIZE %d.",
-						   ControlFile->relseg_size, RELSEG_SIZE),
-				 errhint("It looks like you need to recompile or initdb.")));
-	if (ControlFile->xlog_blcksz != XLOG_BLCKSZ)
-		ereport(FATAL,
-				(errmsg("database files are incompatible with server"),
-				 errdetail("The database cluster was initialized with XLOG_BLCKSZ %d,"
-						   " but the server was compiled with XLOG_BLCKSZ %d.",
-						   ControlFile->xlog_blcksz, XLOG_BLCKSZ),
-				 errhint("It looks like you need to recompile or initdb.")));
+
 	if (ControlFile->nameDataLen != NAMEDATALEN)
 		ereport(FATAL,
 				(errmsg("database files are incompatible with server"),
@@ -4581,6 +4633,7 @@ ReadControlFile(void)
 						   " but the server was compiled with NAMEDATALEN %d.",
 						   ControlFile->nameDataLen, NAMEDATALEN),
 				 errhint("It looks like you need to recompile or initdb.")));
+
 	if (ControlFile->indexMaxKeys != INDEX_MAX_KEYS)
 		ereport(FATAL,
 				(errmsg("database files are incompatible with server"),
@@ -4588,20 +4641,6 @@ ReadControlFile(void)
 						   " but the server was compiled with INDEX_MAX_KEYS %d.",
 						   ControlFile->indexMaxKeys, INDEX_MAX_KEYS),
 				 errhint("It looks like you need to recompile or initdb.")));
-	if (ControlFile->toast_max_chunk_size != TOAST_MAX_CHUNK_SIZE)
-		ereport(FATAL,
-				(errmsg("database files are incompatible with server"),
-				 errdetail("The database cluster was initialized with TOAST_MAX_CHUNK_SIZE %d,"
-						   " but the server was compiled with TOAST_MAX_CHUNK_SIZE %d.",
-						   ControlFile->toast_max_chunk_size, (int) TOAST_MAX_CHUNK_SIZE),
-				 errhint("It looks like you need to recompile or initdb.")));
-	if (ControlFile->loblksize != LOBLKSIZE)
-		ereport(FATAL,
-				(errmsg("database files are incompatible with server"),
-				 errdetail("The database cluster was initialized with LOBLKSIZE %d,"
-						   " but the server was compiled with LOBLKSIZE %d.",
-						   ControlFile->loblksize, (int) LOBLKSIZE),
-				 errhint("It looks like you need to recompile or initdb.")));
 
 #ifdef USE_FLOAT4_BYVAL
 	if (ControlFile->float4ByVal != true)
@@ -4635,28 +4674,66 @@ ReadControlFile(void)
 				 errhint("It looks like you need to recompile or initdb.")));
 #endif
 
-	wal_segment_size = ControlFile->xlog_seg_size;
+	/*
+	 * Set block and segment sizes based on control file values
+	 */
+	rel_blck_size = ControlFile->blcksz;
+	rel_file_blck = ControlFile->relseg_size;
+	rel_file_size = rel_file_blck * rel_blck_size; 
+
+	wal_blck_size = ControlFile->xlog_blcksz;
+	wal_file_size = ControlFile->xlog_seg_size;
+	wal_file_blck = wal_file_size / wal_blck_size;
 
-	if (!IsValidWalSegSize(wal_segment_size))
+	snprintf(param_str, sizeof(param_str), "%d", rel_blck_size);
+	SetConfigOption("block_size", param_str, PGC_INTERNAL, PGC_S_OVERRIDE);
+
+	snprintf(param_str, sizeof(param_str), "%d", rel_file_blck);
+	SetConfigOption("segment_size", param_str, PGC_INTERNAL, PGC_S_OVERRIDE);
+
+	snprintf(param_str, sizeof(param_str), "%d", wal_blck_size);
+	SetConfigOption("wal_block_size", param_str, PGC_INTERNAL, PGC_S_OVERRIDE);
+
+	snprintf(param_str, sizeof(param_str), "%ld", wal_file_size);
+	SetConfigOption("wal_segment_size", param_str, PGC_INTERNAL, PGC_S_OVERRIDE);
+
+	wal_segment_size = (int) wal_file_size;
+
+	if (!IsValidWalSegSize(wal_file_size))
 		ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-						errmsg("WAL segment size must be a power of two between 1MB and 1GB, but the control file specifies %d bytes",
-							   wal_segment_size)));
+			errmsg("WAL segment size must be a power of two between 1MB and 1GB, but the control file specifies %lu bytes",
+				   wal_file_size)));
 
-	snprintf(wal_segsz_str, sizeof(wal_segsz_str), "%d", wal_segment_size);
-	SetConfigOption("wal_segment_size", wal_segsz_str, PGC_INTERNAL,
-					PGC_S_OVERRIDE);
+	/*
+	 * TOAST_MAX_CHUNK_SIZE requires above rel_blck_size to be defined based
+	 * on control file value.
+	 */
+	if (ControlFile->toast_max_chunk_size != TOAST_MAX_CHUNK_SIZE)
+		ereport(FATAL,
+				(errmsg("database files are incompatible with server"),
+				 errdetail("The database cluster was initialized with TOAST_MAX_CHUNK_SIZE %d,"
+						   " but the server was compiled with TOAST_MAX_CHUNK_SIZE %d.",
+						   ControlFile->toast_max_chunk_size, (int) TOAST_MAX_CHUNK_SIZE),
+				 errhint("It looks like you need to recompile or initdb.")));
+
+	if (ControlFile->loblksize != LOBLKSIZE)
+		ereport(FATAL,
+				(errmsg("database files are incompatible with server"),
+				 errdetail("The database cluster was initialized with LOBLKSIZE %d,"
+						   " but the server was compiled with LOBLKSIZE %d.",
+						   ControlFile->loblksize, (int) LOBLKSIZE),
+				 errhint("It looks like you need to recompile or initdb.")));
 
 	/* check and update variables dependent on wal_segment_size */
-	if (ConvertToXSegs(min_wal_size_mb, wal_segment_size) < 2)
+	if (ConvertToXSegs(min_wal_size_mb, wal_file_size) < 2)
 		ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-						errmsg("\"min_wal_size\" must be at least twice \"wal_segment_size\".")));
+			errmsg("\"min_wal_size\" must be at least twice \"wal_file_size\".")));
 
-	if (ConvertToXSegs(max_wal_size_mb, wal_segment_size) < 2)
+	if (ConvertToXSegs(max_wal_size_mb, wal_file_size) < 2)
 		ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-						errmsg("\"max_wal_size\" must be at least twice \"wal_segment_size\".")));
+			errmsg("\"max_wal_size\" must be at least twice \"wal_file_size\".")));
 
-	UsableBytesInSegment =
-		(wal_segment_size / XLOG_BLCKSZ * UsableBytesInPage) -
+	UsableBytesInSegment = (wal_file_size / wal_blck_size * UsableBytesInPage) -
 		(SizeOfXLogLongPHD - SizeOfXLogShortPHD);
 
 	CalculateCheckpointSegments();
@@ -4780,8 +4857,8 @@ XLOGChooseNumBuffers(void)
 	int			xbuffers;
 
 	xbuffers = NBuffers / 32;
-	if (xbuffers > (wal_segment_size / XLOG_BLCKSZ))
-		xbuffers = (wal_segment_size / XLOG_BLCKSZ);
+	if (xbuffers > (wal_file_size / wal_blck_size))
+		xbuffers = (wal_file_size / wal_blck_size);
 	if (xbuffers < 8)
 		xbuffers = 8;
 	return xbuffers;
@@ -4873,9 +4950,9 @@ XLOGShmemSize(void)
 	/* xlblocks array */
 	size = add_size(size, mul_size(sizeof(XLogRecPtr), XLOGbuffers));
 	/* extra alignment padding for XLOG I/O buffers */
-	size = add_size(size, XLOG_BLCKSZ);
+	size = add_size(size, wal_blck_size);
 	/* and the buffers themselves */
-	size = add_size(size, mul_size(XLOG_BLCKSZ, XLOGbuffers));
+	size = add_size(size, mul_size(wal_blck_size, XLOGbuffers));
 
 	/*
 	 * Note: we don't count ControlFileData, it comes out of the "slop factor"
@@ -4976,9 +5053,9 @@ XLOGShmemInit(void)
 	 * This simplifies some calculations in XLOG insertion. It is also
 	 * required for O_DIRECT.
 	 */
-	allocptr = (char *) TYPEALIGN(XLOG_BLCKSZ, allocptr);
+	allocptr = (char *) TYPEALIGN(wal_blck_size, allocptr);
 	XLogCtl->pages = allocptr;
-	memset(XLogCtl->pages, 0, (Size) XLOG_BLCKSZ * XLOGbuffers);
+	memset(XLogCtl->pages, 0, (Size) wal_blck_size * XLOGbuffers);
 
 	/*
 	 * Do basic initialization of XLogCtl shared data. (StartupXLOG will fill
@@ -5045,10 +5122,31 @@ BootStrapXLOG(void)
 	/* First timeline ID is always 1 */
 	ThisTimeLineID = 1;
 
+	/*
+	 * Fetch block and file sizes from GUC
+	 * Values have been set in the bootstrap.c code 
+	 */
+	rel_blck_size = atoi(GetConfigOption("block_size", false, false));
+	rel_file_blck = atoi(GetConfigOption("segment_size", false, false));
+	rel_file_size = rel_file_blck * rel_blck_size;
+
+	wal_blck_size = atoi(GetConfigOption("wal_block_size", false, false));
+	wal_file_size = atoi(GetConfigOption("wal_segment_size", false, false) );
+	wal_file_blck = wal_file_size / wal_blck_size;
+	wal_segment_size = wal_file_size;		/* Alias */
+
+	fprintf(stdout, "rel_blck_size = %u\n", rel_blck_size);
+	fprintf(stdout, "rel_file_blck = %u\n", rel_file_blck);
+	//fprintf(stdout, "rel_file_size = %lu\n", rel_file_size);
+
+	fprintf(stdout, "wal_blck_size = %u\n", wal_blck_size);
+	fprintf(stdout, "wal_file_blck = %u\n", wal_file_blck);
+	//fprintf(stdout, "wal_file_size = %lu\n", wal_file_size);
+
 	/* page buffer must be aligned suitably for O_DIRECT */
-	buffer = (char *) palloc(XLOG_BLCKSZ + XLOG_BLCKSZ);
-	page = (XLogPageHeader) TYPEALIGN(XLOG_BLCKSZ, buffer);
-	memset(page, 0, XLOG_BLCKSZ);
+	buffer = (char *) palloc(wal_blck_size + wal_blck_size);
+	page = (XLogPageHeader) TYPEALIGN(wal_blck_size, buffer);
+	memset(page, 0, wal_blck_size);
 
 	/*
 	 * Set up information for the initial checkpoint record
@@ -5057,7 +5155,8 @@ BootStrapXLOG(void)
 	 * segment with logid=0 logseg=1. The very first WAL segment, 0/0, is not
 	 * used, so that we can use 0/0 to mean "before any valid WAL segment".
 	 */
-	checkPoint.redo = wal_segment_size + SizeOfXLogLongPHD;
+	checkPoint.redo = wal_file_size + SizeOfXLogLongPHD;
+	debug_xlog("BootStrapXLOG checkPoint.redo = %lu\n", checkPoint.redo);
 	checkPoint.ThisTimeLineID = ThisTimeLineID;
 	checkPoint.PrevTimeLineID = ThisTimeLineID;
 	checkPoint.fullPageWrites = fullPageWrites;
@@ -5078,6 +5177,7 @@ BootStrapXLOG(void)
 	ShmemVariableCache->nextXid = checkPoint.nextXid;
 	ShmemVariableCache->nextOid = checkPoint.nextOid;
 	ShmemVariableCache->oidCount = 0;
+
 	MultiXactSetNextMXact(checkPoint.nextMulti, checkPoint.nextMultiOffset);
 	AdvanceOldestClogXid(checkPoint.oldestXid);
 	SetTransactionIdLimit(checkPoint.oldestXid, checkPoint.oldestXidDB);
@@ -5088,11 +5188,11 @@ BootStrapXLOG(void)
 	page->xlp_magic = XLOG_PAGE_MAGIC;
 	page->xlp_info = XLP_LONG_HEADER;
 	page->xlp_tli = ThisTimeLineID;
-	page->xlp_pageaddr = wal_segment_size;
+	page->xlp_pageaddr = wal_file_size;
 	longpage = (XLogLongPageHeader) page;
 	longpage->xlp_sysid = sysidentifier;
-	longpage->xlp_seg_size = wal_segment_size;
-	longpage->xlp_xlog_blcksz = XLOG_BLCKSZ;
+	longpage->xlp_seg_size = wal_file_size;
+	longpage->xlp_xlog_blcksz = wal_blck_size;
 
 	/* Insert the initial checkpoint record */
 	recptr = ((char *) page + SizeOfXLogLongPHD);
@@ -5103,6 +5203,7 @@ BootStrapXLOG(void)
 	record->xl_info = XLOG_CHECKPOINT_SHUTDOWN;
 	record->xl_rmid = RM_XLOG_ID;
 	recptr += SizeOfXLogRecord;
+
 	/* fill the XLogRecordDataHeaderShort struct */
 	*(recptr++) = (char) XLR_BLOCK_ID_DATA_SHORT;
 	*(recptr++) = sizeof(checkPoint);
@@ -5123,7 +5224,7 @@ BootStrapXLOG(void)
 	/* Write the first page with the initial record */
 	errno = 0;
 	pgstat_report_wait_start(WAIT_EVENT_WAL_BOOTSTRAP_WRITE);
-	if (write(openLogFile, page, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+	if (write(openLogFile, page, wal_blck_size) != wal_blck_size)
 	{
 		/* if write didn't set errno, assume problem is no disk space */
 		if (errno == 0)
@@ -5131,7 +5232,10 @@ BootStrapXLOG(void)
 		ereport(PANIC,
 				(errcode_for_file_access(),
 				 errmsg("could not write bootstrap write-ahead log file: %m")));
+	} else {
+		debug_xlog("wrote xlog file\n");
 	}
+
 	pgstat_report_wait_end();
 
 	pgstat_report_wait_start(WAIT_EVENT_WAL_BOOTSTRAP_SYNC);
@@ -5171,13 +5275,17 @@ BootStrapXLOG(void)
 	ControlFile->data_checksum_version = bootstrap_data_checksum_version;
 
 	/* some additional ControlFile fields are set in WriteControlFile() */
-
+	debug_xlog("writing control file\n");
 	WriteControlFile();
 
 	/* Bootstrap the commit log, too */
+	debug_xlog("BootStrapCLOG\n");
 	BootStrapCLOG();
+	debug_xlog("BootStrapCommitTs\n");
 	BootStrapCommitTs();
+	debug_xlog("BootStrapSUBTRANS\n");
 	BootStrapSUBTRANS();
+	debug_xlog("BootStrapMultiXact\n");
 	BootStrapMultiXact();
 
 	pfree(buffer);
@@ -5186,6 +5294,7 @@ BootStrapXLOG(void)
 	 * Force control file to be read - in contrast to normal processing we'd
 	 * otherwise never run the checks and GUC related initializations therein.
 	 */
+	debug_xlog("ReadControlFile\n");
 	ReadControlFile();
 }
 
@@ -5573,8 +5682,8 @@ exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)
 	 * they are the same, but if the switch happens exactly at a segment
 	 * boundary, startLogSegNo will be endLogSegNo + 1.
 	 */
-	XLByteToPrevSeg(endOfLog, endLogSegNo, wal_segment_size);
-	XLByteToSeg(endOfLog, startLogSegNo, wal_segment_size);
+	XLByteToPrevSeg(endOfLog, endLogSegNo, wal_file_size);
+	XLByteToSeg(endOfLog, startLogSegNo, wal_file_size);
 
 	/*
 	 * Initialize the starting WAL segment for the new timeline. If the switch
@@ -5592,7 +5701,7 @@ exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)
 		 * avoid emplacing a bogus file.
 		 */
 		XLogFileCopy(endLogSegNo, endTLI, endLogSegNo,
-					 XLogSegmentOffset(endOfLog, wal_segment_size));
+					 XLogSegmentOffset(endOfLog, wal_file_size));
 	}
 	else
 	{
@@ -5616,7 +5725,7 @@ exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)
 	 * Let's just make real sure there are not .ready or .done flags posted
 	 * for the new segment.
 	 */
-	XLogFileName(xlogfname, ThisTimeLineID, startLogSegNo, wal_segment_size);
+	XLogFileName(xlogfname, ThisTimeLineID, startLogSegNo, wal_file_size);
 	XLogArchiveCleanup(xlogfname);
 
 	/*
@@ -6336,6 +6445,7 @@ StartupXLOG(void)
 	 * someone has performed a copy for PITR, these directories may have been
 	 * excluded and need to be re-created.
 	 */
+	debug_xlog("ValidateXLOGDirectoryStructure\n");
 	ValidateXLOGDirectoryStructure();
 
 	/*
@@ -6412,8 +6522,9 @@ StartupXLOG(void)
 		OwnLatch(&XLogCtl->recoveryWakeupLatch);
 
 	/* Set up XLOG reader facility */
+	debug_xlog("creating xlogreader\n");
 	MemSet(&private, 0, sizeof(XLogPageReadPrivate));
-	xlogreader = XLogReaderAllocate(wal_segment_size, &XLogPageRead, &private);
+	xlogreader = XLogReaderAllocate(wal_file_size, &XLogPageRead, &private);
 	if (!xlogreader)
 		ereport(ERROR,
 				(errcode(ERRCODE_OUT_OF_MEMORY),
@@ -6425,9 +6536,10 @@ StartupXLOG(void)
 	 * Allocate pages dedicated to WAL consistency checks, those had better be
 	 * aligned.
 	 */
-	replay_image_masked = (char *) palloc(BLCKSZ);
-	master_image_masked = (char *) palloc(BLCKSZ);
+	replay_image_masked = (char *) palloc(rel_blck_size);
+	master_image_masked = (char *) palloc(rel_blck_size);
 
+	debug_xlog("reading backup_label\n");
 	if (read_backup_label(&checkPointLoc, &backupEndRequired,
 						  &backupFromStandby))
 	{
@@ -6438,6 +6550,7 @@ StartupXLOG(void)
 		 * file, we know how far we need to replay to reach consistency. Enter
 		 * archive recovery directly.
 		 */
+		debug_xlog("InArchiveRecovery = true\n");
 		InArchiveRecovery = true;
 		if (StandbyModeRequested)
 			StandbyMode = true;
@@ -6526,6 +6639,7 @@ StartupXLOG(void)
 		 * that occurs in rename operation as even if map file is present
 		 * without backup_label file, it is harmless.
 		 */
+		debug_xlog("read TABLESPACE_MAP\n");
 		if (stat(TABLESPACE_MAP, &st) == 0)
 		{
 			unlink(TABLESPACE_MAP_OLD);
@@ -6610,6 +6724,7 @@ StartupXLOG(void)
 	 * backup, so needs to clear old relcache files here after creating
 	 * symlinks.
 	 */
+	debug_xlog("RelationCacheInitFileRemove\n");
 	RelationCacheInitFileRemove();
 
 	/*
@@ -6698,12 +6813,14 @@ StartupXLOG(void)
 	 * Initialize replication slots, before there's a chance to remove
 	 * required resources.
 	 */
+	debug_xlog("StartupReorderBuffer\n");
 	StartupReplicationSlots();
 
 	/*
 	 * Startup logical state, needs to be setup now so we have proper data
 	 * during crash recovery.
 	 */
+	debug_xlog("StartupReorderBuffer\n");
 	StartupReorderBuffer();
 
 	/*
@@ -6723,6 +6840,7 @@ StartupXLOG(void)
 	/*
 	 * Recover knowledge about replay progress of known replication partners.
 	 */
+	debug_xlog("StartupReplicationOrigin\n");
 	StartupReplicationOrigin();
 
 	/*
@@ -6990,6 +7108,7 @@ StartupXLOG(void)
 		}
 
 		/* Initialize resource managers */
+		debug_xlog("Initialize resource managers\n");
 		for (rmid = 0; rmid <= RM_MAX_ID; rmid++)
 		{
 			if (RmgrTable[rmid].rm_startup != NULL)
@@ -7041,6 +7160,7 @@ StartupXLOG(void)
 		 * Allow read-only connections immediately if we're consistent
 		 * already.
 		 */
+		debug_xlog("CheckRecoveryConsistency\n");
 		CheckRecoveryConsistency();
 
 		/*
@@ -7524,26 +7644,26 @@ StartupXLOG(void)
 	 * record spans, not the one it starts in.  The last block is indeed the
 	 * one we want to use.
 	 */
-	if (EndOfLog % XLOG_BLCKSZ != 0)
+	if (EndOfLog % wal_blck_size != 0)
 	{
 		char	   *page;
 		int			len;
 		int			firstIdx;
 		XLogRecPtr	pageBeginPtr;
 
-		pageBeginPtr = EndOfLog - (EndOfLog % XLOG_BLCKSZ);
-		Assert(readOff == XLogSegmentOffset(pageBeginPtr, wal_segment_size));
+		pageBeginPtr = EndOfLog - (EndOfLog % wal_blck_size);
+		Assert(readOff == XLogSegmentOffset(pageBeginPtr, wal_file_size));
 
 		firstIdx = XLogRecPtrToBufIdx(EndOfLog);
 
 		/* Copy the valid part of the last block, and zero the rest */
-		page = &XLogCtl->pages[firstIdx * XLOG_BLCKSZ];
-		len = EndOfLog % XLOG_BLCKSZ;
+		page = &XLogCtl->pages[firstIdx * wal_blck_size];
+		len = EndOfLog % wal_blck_size;
 		memcpy(page, xlogreader->readBuf, len);
-		memset(page + len, 0, XLOG_BLCKSZ - len);
+		memset(page + len, 0, wal_blck_size - len);
 
-		XLogCtl->xlblocks[firstIdx] = pageBeginPtr + XLOG_BLCKSZ;
-		XLogCtl->InitializedUpTo = pageBeginPtr + XLOG_BLCKSZ;
+		XLogCtl->xlblocks[firstIdx] = pageBeginPtr + wal_blck_size;
+		XLogCtl->InitializedUpTo = pageBeginPtr + wal_blck_size;
 	}
 	else
 	{
@@ -7680,14 +7800,14 @@ StartupXLOG(void)
 		 * restored from the archive to begin with, it's expected to have a
 		 * .done file).
 		 */
-		if (XLogSegmentOffset(EndOfLog, wal_segment_size) != 0 &&
+		if (XLogSegmentOffset(EndOfLog, wal_file_size) != 0 &&
 			XLogArchivingActive())
 		{
 			char		origfname[MAXFNAMELEN];
 			XLogSegNo	endLogSegNo;
 
-			XLByteToPrevSeg(EndOfLog, endLogSegNo, wal_segment_size);
-			XLogFileName(origfname, EndOfLogTLI, endLogSegNo, wal_segment_size);
+			XLByteToPrevSeg(EndOfLog, endLogSegNo, wal_file_size);
+			XLogFileName(origfname, EndOfLogTLI, endLogSegNo, wal_file_size);
 
 			if (!XLogArchiveIsReadyOrDone(origfname))
 			{
@@ -7695,7 +7815,7 @@ StartupXLOG(void)
 				char		partialfname[MAXFNAMELEN];
 				char		partialpath[MAXPGPATH];
 
-				XLogFilePath(origpath, EndOfLogTLI, endLogSegNo, wal_segment_size);
+				XLogFilePath(origpath, EndOfLogTLI, endLogSegNo, wal_file_size);
 				snprintf(partialfname, MAXFNAMELEN, "%s.partial", origfname);
 				snprintf(partialpath, MAXPGPATH, "%s.partial", origpath);
 
@@ -8074,6 +8194,7 @@ ReadCheckpointRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr,
 	XLogRecord *record;
 	uint8		info;
 
+	debug_xlog("ReadCheckpointRecord RecPtr = %lu\n", RecPtr);
 	if (!XRecOffIsValid(RecPtr))
 	{
 		if (!report)
@@ -8181,8 +8302,8 @@ InitXLOGAccess(void)
 	ThisTimeLineID = XLogCtl->ThisTimeLineID;
 	Assert(ThisTimeLineID != 0 || IsBootstrapProcessingMode());
 
-	/* set wal_segment_size */
-	wal_segment_size = ControlFile->xlog_seg_size;
+	/* set wal_file_size */
+	wal_file_size = ControlFile->xlog_seg_size;
 
 	/* Use GetRedoRecPtr to copy the RedoRecPtr safely */
 	(void) GetRedoRecPtr();
@@ -8514,7 +8635,7 @@ UpdateCheckPointDistanceEstimate(uint64 nbytes)
 	 * more.
 	 *
 	 * When checkpoints are triggered by max_wal_size, this should converge to
-	 * CheckpointSegments * wal_segment_size,
+	 * CheckpointSegments * wal_file_size,
 	 *
 	 * Note: This doesn't pay any attention to what caused the checkpoint.
 	 * Checkpoints triggered manually with CHECKPOINT command, or by e.g.
@@ -8713,7 +8834,7 @@ CreateCheckPoint(int flags)
 	freespace = INSERT_FREESPACE(curInsert);
 	if (freespace == 0)
 	{
-		if (XLogSegmentOffset(curInsert, wal_segment_size) == 0)
+		if (XLogSegmentOffset(curInsert, wal_file_size) == 0)
 			curInsert += SizeOfXLogLongPHD;
 		else
 			curInsert += SizeOfXLogShortPHD;
@@ -8945,7 +9066,7 @@ CreateCheckPoint(int flags)
 		UpdateCheckPointDistanceEstimate(RedoRecPtr - PriorRedoPtr);
 
 		/* Trim from the last checkpoint, not the last - 1 */
-		XLByteToSeg(RedoRecPtr, _logSegNo, wal_segment_size);
+		XLByteToSeg(RedoRecPtr, _logSegNo, wal_file_size);
 		KeepLogSeg(recptr, &_logSegNo);
 		_logSegNo--;
 		RemoveOldXlogFiles(_logSegNo, PriorRedoPtr, recptr);
@@ -9271,7 +9392,7 @@ CreateRestartPoint(int flags)
 		/* Update the average distance between checkpoints/restartpoints. */
 		UpdateCheckPointDistanceEstimate(RedoRecPtr - PriorRedoPtr);
 
-		XLByteToSeg(PriorRedoPtr, _logSegNo, wal_segment_size);
+		XLByteToSeg(PriorRedoPtr, _logSegNo, wal_file_size);
 
 		/*
 		 * Get the current end of xlog replayed or received, whichever is
@@ -9366,7 +9487,7 @@ KeepLogSeg(XLogRecPtr recptr, XLogSegNo *logSegNo)
 	XLogSegNo	segno;
 	XLogRecPtr	keep;
 
-	XLByteToSeg(recptr, segno, wal_segment_size);
+	XLByteToSeg(recptr, segno, wal_file_size);
 	keep = XLogGetReplicationSlotMinimumLSN();
 
 	/* compute limit for wal_keep_segments first */
@@ -9384,7 +9505,7 @@ KeepLogSeg(XLogRecPtr recptr, XLogSegNo *logSegNo)
 	{
 		XLogSegNo	slotSegNo;
 
-		XLByteToSeg(keep, slotSegNo, wal_segment_size);
+		XLByteToSeg(keep, slotSegNo, wal_file_size);
 
 		if (slotSegNo <= 0)
 			segno = 1;
@@ -10167,7 +10288,7 @@ XLogFileNameP(TimeLineID tli, XLogSegNo segno)
 {
 	char	   *result = palloc(MAXFNAMELEN);
 
-	XLogFileName(result, tli, segno, wal_segment_size);
+	XLogFileName(result, tli, segno, wal_file_size);
 	return result;
 }
 
@@ -10421,8 +10542,8 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p,
 			WALInsertLockRelease();
 		} while (!gotUniqueStartpoint);
 
-		XLByteToSeg(startpoint, _logSegNo, wal_segment_size);
-		XLogFileName(xlogfilename, starttli, _logSegNo, wal_segment_size);
+		XLByteToSeg(startpoint, _logSegNo, wal_file_size);
+		XLogFileName(xlogfilename, starttli, _logSegNo, wal_file_size);
 
 		/*
 		 * Construct tablespace_map file
@@ -10973,8 +11094,8 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 		 */
 		RequestXLogSwitch(false);
 
-		XLByteToPrevSeg(stoppoint, _logSegNo, wal_segment_size);
-		XLogFileName(stopxlogfilename, stoptli, _logSegNo, wal_segment_size);
+		XLByteToPrevSeg(stoppoint, _logSegNo, wal_file_size);
+		XLogFileName(stopxlogfilename, stoptli, _logSegNo, wal_file_size);
 
 		/* Use the log timezone here, not the session timezone */
 		stamp_time = (pg_time_t) time(NULL);
@@ -10985,9 +11106,9 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 		/*
 		 * Write the backup history file
 		 */
-		XLByteToSeg(startpoint, _logSegNo, wal_segment_size);
+		XLByteToSeg(startpoint, _logSegNo, wal_file_size);
 		BackupHistoryFilePath(histfilepath, stoptli, _logSegNo,
-							  startpoint, wal_segment_size);
+							  startpoint, wal_file_size);
 		fp = AllocateFile(histfilepath, "w");
 		if (!fp)
 			ereport(ERROR,
@@ -11041,12 +11162,12 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 		((!backup_started_in_recovery && XLogArchivingActive()) ||
 		 (backup_started_in_recovery && XLogArchivingAlways())))
 	{
-		XLByteToPrevSeg(stoppoint, _logSegNo, wal_segment_size);
-		XLogFileName(lastxlogfilename, stoptli, _logSegNo, wal_segment_size);
+		XLByteToPrevSeg(stoppoint, _logSegNo, wal_file_size);
+		XLogFileName(lastxlogfilename, stoptli, _logSegNo, wal_file_size);
 
-		XLByteToSeg(startpoint, _logSegNo, wal_segment_size);
+		XLByteToSeg(startpoint, _logSegNo, wal_file_size);
 		BackupHistoryFileName(histfilename, stoptli, _logSegNo,
-							  startpoint, wal_segment_size);
+							  startpoint, wal_file_size);
 
 		seconds_before_warning = 60;
 		waits = 0;
@@ -11489,16 +11610,21 @@ XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr, int reqLen,
 	uint32		targetPageOff;
 	XLogSegNo	targetSegNo PG_USED_FOR_ASSERTS_ONLY;
 
-	XLByteToSeg(targetPagePtr, targetSegNo, wal_segment_size);
-	targetPageOff = XLogSegmentOffset(targetPagePtr, wal_segment_size);
+
+	XLByteToSeg(targetPagePtr, targetSegNo, wal_file_size);
+	targetPageOff = XLogSegmentOffset(targetPagePtr, wal_file_size);
+
+	debug_xlog("XLogPageRead targetPagePtr = %lu, targetSegNo = %lu, wal_file_size = %lu\n", 
+			targetPagePtr, targetSegNo, wal_file_size);
 
 	/*
 	 * See if we need to switch to a new segment because the requested record
 	 * is not in the currently open one.
 	 */
 	if (readFile >= 0 &&
-		!XLByteInSeg(targetPagePtr, readSegNo, wal_segment_size))
+		!XLByteInSeg(targetPagePtr, readSegNo, wal_file_size))
 	{
+		debug_xlog("step 0 1\n");
 		/*
 		 * Request a restartpoint if we've replayed too much xlog since the
 		 * last one.
@@ -11507,30 +11633,37 @@ XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr, int reqLen,
 		{
 			if (XLogCheckpointNeeded(readSegNo))
 			{
+				debug_xlog("step 0 1 1\n");
 				(void) GetRedoRecPtr();
 				if (XLogCheckpointNeeded(readSegNo))
 					RequestCheckpoint(CHECKPOINT_CAUSE_XLOG);
 			}
 		}
 
+		debug_xlog("step 0 2\n");
 		close(readFile);
 		readFile = -1;
 		readSource = 0;
 	}
 
-	XLByteToSeg(targetPagePtr, readSegNo, wal_segment_size);
+	XLByteToSeg(targetPagePtr, readSegNo, wal_file_size);
+	debug_xlog("targetPagePtr = %lu, readSegNo = %lu, wal_file_size = %lu\n", 
+			targetPagePtr, readSegNo, wal_file_size);
 
 retry:
 	/* See if we need to retrieve more data */
+	debug_xlog("step 1\n");
 	if (readFile < 0 ||
 		(readSource == XLOG_FROM_STREAM &&
 		 receivedUpto < targetPagePtr + reqLen))
 	{
+		debug_xlog("step 1 1\n");
 		if (!WaitForWALToBecomeAvailable(targetPagePtr + reqLen,
 										 private->randAccess,
 										 private->fetching_ckpt,
 										 targetRecPtr))
 		{
+			debug_xlog("step 1 1 1\n");
 			if (readFile >= 0)
 				close(readFile);
 			readFile = -1;
@@ -11545,6 +11678,7 @@ retry:
 	 * At this point, we have the right segment open and if we're streaming we
 	 * know the requested record is in it.
 	 */
+	debug_xlog("step 2\n");
 	Assert(readFile != -1);
 
 	/*
@@ -11555,22 +11689,23 @@ retry:
 	 */
 	if (readSource == XLOG_FROM_STREAM)
 	{
-		if (((targetPagePtr) / XLOG_BLCKSZ) != (receivedUpto / XLOG_BLCKSZ))
-			readLen = XLOG_BLCKSZ;
+		if (((targetPagePtr) / wal_blck_size) != (receivedUpto / wal_blck_size))
+			readLen = wal_blck_size;
 		else
-			readLen = XLogSegmentOffset(receivedUpto, wal_segment_size) -
+			readLen = XLogSegmentOffset(receivedUpto, wal_file_size) -
 				targetPageOff;
 	}
 	else
-		readLen = XLOG_BLCKSZ;
+		readLen = wal_blck_size;
 
 	/* Read the requested page */
+	debug_xlog("Read the requested page\n");
 	readOff = targetPageOff;
 	if (lseek(readFile, (off_t) readOff, SEEK_SET) < 0)
 	{
 		char		fname[MAXFNAMELEN];
 
-		XLogFileName(fname, curFileTLI, readSegNo, wal_segment_size);
+		XLogFileName(fname, curFileTLI, readSegNo, wal_file_size);
 		ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 				(errcode_for_file_access(),
 				 errmsg("could not seek in log segment %s to offset %u: %m",
@@ -11579,12 +11714,14 @@ retry:
 	}
 
 	pgstat_report_wait_start(WAIT_EVENT_WAL_READ);
-	if (read(readFile, readBuf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+	if (read(readFile, readBuf, wal_blck_size) != wal_blck_size)
 	{
 		char		fname[MAXFNAMELEN];
 
+		debug_xlog("could not read from log segment %s, offset %u: %m", fname, readOff);
+
 		pgstat_report_wait_end();
-		XLogFileName(fname, curFileTLI, readSegNo, wal_segment_size);
+		XLogFileName(fname, curFileTLI, readSegNo, wal_file_size);
 		ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 				(errcode_for_file_access(),
 				 errmsg("could not read from log segment %s, offset %u: %m",
@@ -11650,6 +11787,8 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 	TimestampTz now;
 	bool		streaming_reply_sent = false;
 
+	debug_xlog("WaitForWALToBecomeAvailable\n");
+
 	/*-------
 	 * Standby mode is implemented by a state machine:
 	 *
@@ -11720,6 +11859,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 					 * that when we later jump backwards to start redo at
 					 * RedoStartLSN, we will have the logs streamed already.
 					 */
+					debug_xlog("PrimaryConnInfo\n");	
 					if (PrimaryConnInfo)
 					{
 						XLogRecPtr	ptr;
@@ -11727,6 +11867,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 
 						if (fetching_ckpt)
 						{
+							debug_xlog("fetching_ckpt\n");
 							ptr = RedoStartLSN;
 							tli = ControlFile->checkPointCopy.ThisTimeLineID;
 						}
@@ -11741,6 +11882,8 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 									 tli, curFileTLI);
 						}
 						curFileTLI = tli;
+
+						debug_xlog("RequestXLogStreaming\n");
 						RequestXLogStreaming(tli, ptr, PrimaryConnInfo,
 											 PrimarySlotName);
 						receivedUpto = 0;
@@ -11846,6 +11989,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 		 * We've now handled possible failure. Try to read from the chosen
 		 * source.
 		 */
+		debug_xlog("lastSourceFailed\n");
 		lastSourceFailed = false;
 
 		switch (currentSource)
@@ -11866,11 +12010,14 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 				 * Try to restore the file from archive, or read an existing
 				 * file from pg_wal.
 				 */
+				debug_xlog("XLogFileReadAnyTLI\n");
 				readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2,
 											  currentSource == XLOG_FROM_ARCHIVE ? XLOG_FROM_ANY :
 											  currentSource);
-				if (readFile >= 0)
+				if (readFile >= 0) {
+					debug_xlog("XLogFileReadAnyTLI failed\n");
 					return true;	/* success! */
+				}
 
 				/*
 				 * Nope, not found in archive or pg_wal.
diff --git a/src/backend/access/transam/xlogarchive.c b/src/backend/access/transam/xlogarchive.c
index 488acd0f70..2fd4a5a49d 100644
--- a/src/backend/access/transam/xlogarchive.c
+++ b/src/backend/access/transam/xlogarchive.c
@@ -134,14 +134,14 @@ RestoreArchivedFile(char *path, const char *xlogfname,
 	if (cleanupEnabled)
 	{
 		GetOldestRestartPoint(&restartRedoPtr, &restartTli);
-		XLByteToSeg(restartRedoPtr, restartSegNo, wal_segment_size);
+		XLByteToSeg(restartRedoPtr, restartSegNo, wal_file_size);
 		XLogFileName(lastRestartPointFname, restartTli, restartSegNo,
-					 wal_segment_size);
+					 wal_file_size);
 		/* we shouldn't need anything earlier than last restart point */
 		Assert(strcmp(lastRestartPointFname, xlogfname) <= 0);
 	}
 	else
-		XLogFileName(lastRestartPointFname, 0, 0L, wal_segment_size);
+		XLogFileName(lastRestartPointFname, 0, 0L, wal_file_size);
 
 	/*
 	 * construct the command to be executed
@@ -348,9 +348,9 @@ ExecuteRecoveryCommand(const char *command, const char *commandName, bool failOn
 	 * archive, though there is no requirement to do so.
 	 */
 	GetOldestRestartPoint(&restartRedoPtr, &restartTli);
-	XLByteToSeg(restartRedoPtr, restartSegNo, wal_segment_size);
+	XLByteToSeg(restartRedoPtr, restartSegNo, wal_file_size);
 	XLogFileName(lastRestartPointFname, restartTli, restartSegNo,
-				 wal_segment_size);
+				 wal_file_size);
 
 	/*
 	 * construct the command to be executed
@@ -549,7 +549,7 @@ XLogArchiveNotifySeg(XLogSegNo segno)
 {
 	char		xlog[MAXFNAMELEN];
 
-	XLogFileName(xlog, ThisTimeLineID, segno, wal_segment_size);
+	XLogFileName(xlog, ThisTimeLineID, segno, wal_file_size);
 	XLogArchiveNotify(xlog);
 }
 
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index 443ccd6411..29acb60637 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -489,8 +489,8 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 	/*
 	 * xlogfilename
 	 */
-	XLByteToPrevSeg(locationpoint, xlogsegno, wal_segment_size);
-	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_segment_size);
+	XLByteToPrevSeg(locationpoint, xlogsegno, wal_file_size);
+	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_file_size);
 
 	values[0] = CStringGetTextDatum(xlogfilename);
 	isnull[0] = false;
@@ -498,7 +498,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 	/*
 	 * offset
 	 */
-	xrecoff = XLogSegmentOffset(locationpoint, wal_segment_size);
+	xrecoff = XLogSegmentOffset(locationpoint, wal_file_size);
 
 	values[1] = UInt32GetDatum(xrecoff);
 	isnull[1] = false;
@@ -530,8 +530,8 @@ pg_walfile_name(PG_FUNCTION_ARGS)
 				 errmsg("recovery is in progress"),
 				 errhint("pg_walfile_name() cannot be executed during recovery.")));
 
-	XLByteToPrevSeg(locationpoint, xlogsegno, wal_segment_size);
-	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_segment_size);
+	XLByteToPrevSeg(locationpoint, xlogsegno, wal_file_size);
+	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_file_size);
 
 	PG_RETURN_TEXT_P(cstring_to_text(xlogfilename));
 }
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c
index 2a41667c39..eb465e6491 100644
--- a/src/backend/access/transam/xloginsert.c
+++ b/src/backend/access/transam/xloginsert.c
@@ -33,7 +33,7 @@
 #include "pg_trace.h"
 
 /* Buffer size required to store a compressed version of backup block image */
-#define PGLZ_MAX_BLCKSZ PGLZ_MAX_OUTPUT(BLCKSZ)
+#define PGLZ_MAX_BLCKSZ		PGLZ_MAX_OUTPUT(rel_blck_size)
 
 /*
  * For each block reference registered with XLogRegisterBuffer, we fill in
@@ -57,13 +57,14 @@ typedef struct
 								 * backup block data in XLogRecordAssemble() */
 
 	/* buffer to store a compressed version of backup block image */
-	char		compressed_page[PGLZ_MAX_BLCKSZ];
+	char*		compressed_page;
 } registered_buffer;
 
 static registered_buffer *registered_buffers;
 static int	max_registered_buffers; /* allocated size */
-static int	max_registered_block_id = 0;	/* highest block_id + 1 currently
-											 * registered */
+static int	max_registered_block_id = 0;	/* highest block_id + 1 currently registered */
+
+#define SIZEOF_COMPRESSED_PAGE 		(sizeof(char) * PGLZ_MAX_BLCKSZ)
 
 /*
  * A chain of XLogRecDatas to hold the "main data" of a WAL record, registered
@@ -146,6 +147,7 @@ void
 XLogEnsureRecordSpace(int max_block_id, int ndatas)
 {
 	int			nbuffers;
+	int i;
 
 	/*
 	 * This must be called before entering a critical section, because
@@ -176,6 +178,14 @@ XLogEnsureRecordSpace(int max_block_id, int ndatas)
 		 */
 		MemSet(&registered_buffers[max_registered_buffers], 0,
 			   (nbuffers - max_registered_buffers) * sizeof(registered_buffer));
+
+		/*
+		 * Allocate memory for compressed_page
+		 */
+		for (i = max_registered_buffers; i < nbuffers; i++)
+			(&registered_buffers[i])->compressed_page = 
+				MemoryContextAllocZero(xloginsert_cxt, SIZEOF_COMPRESSED_PAGE);
+
 		max_registered_buffers = nbuffers;
 	}
 
@@ -352,7 +362,7 @@ XLogRegisterData(char *data, int len)
  * block_id, the data is appended.
  *
  * The maximum amount of data that can be registered per block is 65535
- * bytes. That should be plenty; if you need more than BLCKSZ bytes to
+ * bytes. That should be plenty; if you need more than rel_blck_size bytes to
  * reconstruct the changes to the page, you might as well just log a full
  * copy of it. (the "main data" that's not associated with a block is not
  * limited)
@@ -598,7 +608,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,
 
 				if (lower >= SizeOfPageHeaderData &&
 					upper > lower &&
-					upper <= BLCKSZ)
+					upper <= rel_blck_size)
 				{
 					bimg.hole_offset = lower;
 					cbimg.hole_length = upper - lower;
@@ -662,12 +672,12 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,
 			}
 			else
 			{
-				bimg.length = BLCKSZ - cbimg.hole_length;
+				bimg.length = rel_blck_size - cbimg.hole_length;
 
 				if (cbimg.hole_length == 0)
 				{
 					rdt_datas_last->data = page;
-					rdt_datas_last->len = BLCKSZ;
+					rdt_datas_last->len = rel_blck_size;
 				}
 				else
 				{
@@ -681,7 +691,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,
 					rdt_datas_last->data =
 						page + (bimg.hole_offset + cbimg.hole_length);
 					rdt_datas_last->len =
-						BLCKSZ - (bimg.hole_offset + cbimg.hole_length);
+						rel_blck_size - (bimg.hole_offset + cbimg.hole_length);
 				}
 			}
 
@@ -805,11 +815,11 @@ static bool
 XLogCompressBackupBlock(char *page, uint16 hole_offset, uint16 hole_length,
 						char *dest, uint16 *dlen)
 {
-	int32		orig_len = BLCKSZ - hole_length;
+	int32		orig_len = rel_blck_size - hole_length;
 	int32		len;
 	int32		extra_bytes = 0;
 	char	   *source;
-	char		tmp[BLCKSZ];
+	char		tmp[rel_blck_size];
 
 	if (hole_length != 0)
 	{
@@ -818,7 +828,7 @@ XLogCompressBackupBlock(char *page, uint16 hole_offset, uint16 hole_length,
 		memcpy(source, page, hole_offset);
 		memcpy(source + hole_offset,
 			   page + (hole_offset + hole_length),
-			   BLCKSZ - (hole_length + hole_offset));
+			   rel_blck_size - (hole_length + hole_offset));
 
 		/*
 		 * Extra data needs to be stored in WAL record for the compressed
@@ -917,7 +927,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)
 	if (lsn <= RedoRecPtr)
 	{
 		int			flags;
-		char		copied_buffer[BLCKSZ];
+		char		copied_buffer[rel_blck_size];
 		char	   *origdata = (char *) BufferGetBlock(buffer);
 		RelFileNode rnode;
 		ForkNumber	forkno;
@@ -936,10 +946,10 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)
 			uint16		upper = ((PageHeader) page)->pd_upper;
 
 			memcpy(copied_buffer, origdata, lower);
-			memcpy(copied_buffer + upper, origdata + upper, BLCKSZ - upper);
+			memcpy(copied_buffer + upper, origdata + upper, rel_blck_size - upper);
 		}
 		else
-			memcpy(copied_buffer, origdata, BLCKSZ);
+			memcpy(copied_buffer, origdata, rel_blck_size);
 
 		XLogBeginInsert();
 
@@ -1027,6 +1037,8 @@ log_newpage_buffer(Buffer buffer, bool page_std)
 void
 InitXLogInsert(void)
 {
+	int i;
+
 	/* Initialize the working areas */
 	if (xloginsert_cxt == NULL)
 	{
@@ -1039,8 +1051,12 @@ InitXLogInsert(void)
 	{
 		registered_buffers = (registered_buffer *)
 			MemoryContextAllocZero(xloginsert_cxt,
-								   sizeof(registered_buffer) * (XLR_NORMAL_MAX_BLOCK_ID + 1));
+					   sizeof(registered_buffer) * (XLR_NORMAL_MAX_BLOCK_ID + 1));
 		max_registered_buffers = XLR_NORMAL_MAX_BLOCK_ID + 1;
+		for (i = 0; i < max_registered_buffers; i++)
+                        (&registered_buffers[i])->compressed_page =
+                                MemoryContextAllocZero(xloginsert_cxt, SIZEOF_COMPRESSED_PAGE);
+
 	}
 	if (rdatas == NULL)
 	{
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 0a75c36026..84f4940efe 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -24,6 +24,14 @@
 #include "catalog/pg_control.h"
 #include "common/pg_lzcompress.h"
 #include "replication/origin.h"
+#include "storage/md.h"
+
+#define DEBUG_XLOGREADER                  0
+
+#define debug_xlogreader(format, ...)      \
+       if (DEBUG_XLOGREADER)               \
+               fprintf(stderr, "xlogreader --> " format, ##__VA_ARGS__);
+
 
 static bool allocate_recordbuf(XLogReaderState *state, uint32 reclength);
 
@@ -64,7 +72,7 @@ report_invalid_record(XLogReaderState *state, const char *fmt,...)
  * Returns NULL if the xlogreader couldn't be allocated.
  */
 XLogReaderState *
-XLogReaderAllocate(int wal_segment_size, XLogPageReadCB pagereadfunc,
+XLogReaderAllocate(int wal_file_size, XLogPageReadCB pagereadfunc,
 				   void *private_data)
 {
 	XLogReaderState *state;
@@ -84,7 +92,7 @@ XLogReaderAllocate(int wal_segment_size, XLogPageReadCB pagereadfunc,
 	 * isn't guaranteed to have any particular alignment, whereas
 	 * palloc_extended() will provide MAXALIGN'd storage.
 	 */
-	state->readBuf = (char *) palloc_extended(XLOG_BLCKSZ,
+	state->readBuf = (char *) palloc_extended(wal_blck_size,
 											  MCXT_ALLOC_NO_OOM);
 	if (!state->readBuf)
 	{
@@ -92,7 +100,7 @@ XLogReaderAllocate(int wal_segment_size, XLogPageReadCB pagereadfunc,
 		return NULL;
 	}
 
-	state->wal_segment_size = wal_segment_size;
+	state->wal_file_size = wal_file_size;
 	state->read_page = pagereadfunc;
 	/* system_identifier initialized to zeroes above */
 	state->private_data = private_data;
@@ -150,7 +158,7 @@ XLogReaderFree(XLogReaderState *state)
  * readRecordBufSize is set to the new buffer size.
  *
  * To avoid useless small increases, round its size to a multiple of
- * XLOG_BLCKSZ, and make sure it's at least 5*Max(BLCKSZ, XLOG_BLCKSZ) to start
+ * wal_blck_size, and make sure it's at least 5*Max(rel_blck_size, wal_blck_size) to start
  * with.  (That is enough for all "normal" records, but very large commit or
  * abort records might need more space.)
  */
@@ -159,8 +167,8 @@ allocate_recordbuf(XLogReaderState *state, uint32 reclength)
 {
 	uint32		newSize = reclength;
 
-	newSize += XLOG_BLCKSZ - (newSize % XLOG_BLCKSZ);
-	newSize = Max(newSize, 5 * Max(BLCKSZ, XLOG_BLCKSZ));
+	newSize += wal_blck_size - (newSize % wal_blck_size);
+	newSize = Max(newSize, 5 * Max(rel_blck_size, wal_blck_size));
 
 	if (state->readRecordBuf)
 		pfree(state->readRecordBuf);
@@ -204,6 +212,8 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 	bool		gotheader;
 	int			readOff;
 
+	debug_xlogreader("XLogReadRecord RecPtr = %lu\n", RecPtr);
+
 	/*
 	 * randAccess indicates whether to verify the previous-record pointer of
 	 * the record we're reading.  We only do this if we're reading
@@ -246,24 +256,29 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 
 	state->currRecPtr = RecPtr;
 
-	targetPagePtr = RecPtr - (RecPtr % XLOG_BLCKSZ);
-	targetRecOff = RecPtr % XLOG_BLCKSZ;
+	targetPagePtr = RecPtr - (RecPtr % wal_blck_size);
+	targetRecOff = RecPtr % wal_blck_size;
+	debug_xlogreader("XLogReadRecord targetPagePtr = %lu, targetRecOff = %u\n", targetPagePtr, targetRecOff);
 
 	/*
 	 * Read the page containing the record into state->readBuf. Request enough
 	 * byte to cover the whole record header, or at least the part of it that
 	 * fits on the same page.
 	 */
+	debug_xlogreader("ReadPageInternal\n");
 	readOff = ReadPageInternal(state,
 							   targetPagePtr,
-							   Min(targetRecOff + SizeOfXLogRecord, XLOG_BLCKSZ));
-	if (readOff < 0)
+							   Min(targetRecOff + SizeOfXLogRecord, wal_blck_size));
+	if (readOff < 0) {
+		debug_xlogreader("readOff < 0\n");
 		goto err;
+	}
 
 	/*
 	 * ReadPageInternal always returns at least the page header, so we can
 	 * examine it now.
 	 */
+	debug_xlogreader("XLogPageHeaderSize\n");
 	pageHeaderSize = XLogPageHeaderSize((XLogPageHeader) state->readBuf);
 	if (targetRecOff == 0)
 	{
@@ -300,7 +315,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 	 * cannot access any other fields until we've verified that we got the
 	 * whole header.
 	 */
-	record = (XLogRecord *) (state->readBuf + RecPtr % XLOG_BLCKSZ);
+	record = (XLogRecord *) (state->readBuf + RecPtr % wal_blck_size);
 	total_len = record->xl_tot_len;
 
 	/*
@@ -311,7 +326,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 	 * record" code path below; otherwise we might fail to apply
 	 * ValidXLogRecordHeader at all.
 	 */
-	if (targetRecOff <= XLOG_BLCKSZ - SizeOfXLogRecord)
+	if (targetRecOff <= wal_blck_size - SizeOfXLogRecord)
 	{
 		if (!ValidXLogRecordHeader(state, RecPtr, state->ReadRecPtr, record,
 								   randAccess))
@@ -345,7 +360,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 		goto err;
 	}
 
-	len = XLOG_BLCKSZ - RecPtr % XLOG_BLCKSZ;
+	len = wal_blck_size - RecPtr % wal_blck_size;
 	if (total_len > len)
 	{
 		/* Need to reassemble record */
@@ -354,21 +369,23 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 		char	   *buffer;
 		uint32		gotlen;
 
+		debug_xlogreader("reassemble record\n");
+
 		/* Copy the first fragment of the record from the first page. */
 		memcpy(state->readRecordBuf,
-			   state->readBuf + RecPtr % XLOG_BLCKSZ, len);
+			   state->readBuf + RecPtr % wal_blck_size, len);
 		buffer = state->readRecordBuf + len;
 		gotlen = len;
 
 		do
 		{
 			/* Calculate pointer to beginning of next page */
-			targetPagePtr += XLOG_BLCKSZ;
+			targetPagePtr += wal_blck_size;
 
 			/* Wait for the next page to become available */
 			readOff = ReadPageInternal(state, targetPagePtr,
 									   Min(total_len - gotlen + SizeOfXLogShortPHD,
-										   XLOG_BLCKSZ));
+										   wal_blck_size));
 
 			if (readOff < 0)
 				goto err;
@@ -409,7 +426,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 			Assert(pageHeaderSize <= readOff);
 
 			contdata = (char *) state->readBuf + pageHeaderSize;
-			len = XLOG_BLCKSZ - pageHeaderSize;
+			len = wal_blck_size - pageHeaderSize;
 			if (pageHeader->xlp_rem_len < len)
 				len = pageHeader->xlp_rem_len;
 
@@ -447,7 +464,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 	{
 		/* Wait for the record data to become available */
 		readOff = ReadPageInternal(state, targetPagePtr,
-								   Min(targetRecOff + total_len, XLOG_BLCKSZ));
+								   Min(targetRecOff + total_len, wal_blck_size));
 		if (readOff < 0)
 			goto err;
 
@@ -468,8 +485,8 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 		(record->xl_info & ~XLR_INFO_MASK) == XLOG_SWITCH)
 	{
 		/* Pretend it extends to end of segment */
-		state->EndRecPtr += state->wal_segment_size - 1;
-		state->EndRecPtr -= XLogSegmentOffset(state->EndRecPtr, state->wal_segment_size);
+		state->EndRecPtr += state->wal_file_size - 1;
+		state->EndRecPtr -= XLogSegmentOffset(state->EndRecPtr, state->wal_file_size);
 	}
 
 	if (DecodeXLogRecord(state, record, errormsg))
@@ -509,10 +526,16 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	XLogSegNo	targetSegNo;
 	XLogPageHeader hdr;
 
-	Assert((pageptr % XLOG_BLCKSZ) == 0);
+	debug_xlogreader("ReadPageInternal pageptr\n");
+
+	Assert((pageptr % wal_blck_size) == 0);
 
-	XLByteToSeg(pageptr, targetSegNo, state->wal_segment_size);
-	targetPageOff = XLogSegmentOffset(pageptr, state->wal_segment_size);
+	XLByteToSeg(pageptr, targetSegNo, state->wal_file_size);
+	targetPageOff = XLogSegmentOffset(pageptr, state->wal_file_size);
+
+	debug_xlogreader("ReadPageInternal pageptr = %lu, targetPageOff= %d"
+			", targetSegNo = %ld, wal_file_size = %u\n",
+			pageptr, targetPageOff, targetSegNo, state->wal_file_size);
 
 	/* check whether we have all the requested data already */
 	if (targetSegNo == state->readSegNo && targetPageOff == state->readOff &&
@@ -531,19 +554,20 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	 * record is.  This is so that we can check the additional identification
 	 * info that is present in the first page's "long" header.
 	 */
+	debug_xlogreader("Data is not in our buffer\n");
 	if (targetSegNo != state->readSegNo && targetPageOff != 0)
 	{
 		XLogPageHeader hdr;
 		XLogRecPtr	targetSegmentPtr = pageptr - targetPageOff;
 
-		readLen = state->read_page(state, targetSegmentPtr, XLOG_BLCKSZ,
+		readLen = state->read_page(state, targetSegmentPtr, wal_blck_size,
 								   state->currRecPtr,
 								   state->readBuf, &state->readPageTLI);
 		if (readLen < 0)
 			goto err;
 
 		/* we can be sure to have enough WAL available, we scrolled back */
-		Assert(readLen == XLOG_BLCKSZ);
+		Assert(readLen == wal_blck_size);
 
 		hdr = (XLogPageHeader) state->readBuf;
 
@@ -555,13 +579,14 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	 * First, read the requested data length, but at least a short page header
 	 * so that we can validate it.
 	 */
+	debug_xlogreader("read_page\n");
 	readLen = state->read_page(state, pageptr, Max(reqLen, SizeOfXLogShortPHD),
 							   state->currRecPtr,
 							   state->readBuf, &state->readPageTLI);
 	if (readLen < 0)
 		goto err;
 
-	Assert(readLen <= XLOG_BLCKSZ);
+	Assert(readLen <= wal_blck_size);
 
 	/* Do we have enough data to check the header length? */
 	if (readLen <= SizeOfXLogShortPHD)
@@ -572,6 +597,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	hdr = (XLogPageHeader) state->readBuf;
 
 	/* still not enough */
+	debug_xlogreader("check readLen < XLogPageHeaderSize\n");
 	if (readLen < XLogPageHeaderSize(hdr))
 	{
 		readLen = state->read_page(state, pageptr, XLogPageHeaderSize(hdr),
@@ -584,6 +610,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	/*
 	 * Now that we know we have the full header, validate it.
 	 */
+	debug_xlogreader("check ValidXLogPageHeader\n");
 	if (!ValidXLogPageHeader(state, pageptr, hdr))
 		goto err;
 
@@ -592,6 +619,8 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
 	state->readOff = targetPageOff;
 	state->readLen = readLen;
 
+	debug_xlogreader("end\n");
+
 	return readLen;
 
 err:
@@ -719,18 +748,18 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 	XLogSegNo	segno;
 	int32		offset;
 
-	Assert((recptr % XLOG_BLCKSZ) == 0);
+	Assert((recptr % wal_blck_size) == 0);
 
-	XLByteToSeg(recptr, segno, state->wal_segment_size);
-	offset = XLogSegmentOffset(recptr, state->wal_segment_size);
+	XLByteToSeg(recptr, segno, state->wal_file_size);
+	offset = XLogSegmentOffset(recptr, state->wal_file_size);
 
-	XLogSegNoOffsetToRecPtr(segno, offset, recaddr, state->wal_segment_size);
+	XLogSegNoOffsetToRecPtr(segno, offset, recaddr, state->wal_file_size);
 
 	if (hdr->xlp_magic != XLOG_PAGE_MAGIC)
 	{
 		char		fname[MAXFNAMELEN];
 
-		XLogFileName(fname, state->readPageTLI, segno, state->wal_segment_size);
+		XLogFileName(fname, state->readPageTLI, segno, state->wal_file_size);
 
 		report_invalid_record(state,
 							  "invalid magic number %04X in log segment %s, offset %u",
@@ -744,7 +773,7 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 	{
 		char		fname[MAXFNAMELEN];
 
-		XLogFileName(fname, state->readPageTLI, segno, state->wal_segment_size);
+		XLogFileName(fname, state->readPageTLI, segno, state->wal_file_size);
 
 		report_invalid_record(state,
 							  "invalid info bits %04X in log segment %s, offset %u",
@@ -777,16 +806,16 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 								  fhdrident_str, sysident_str);
 			return false;
 		}
-		else if (longhdr->xlp_seg_size != state->wal_segment_size)
+		else if (longhdr->xlp_seg_size != state->wal_file_size)
 		{
 			report_invalid_record(state,
 								  "WAL file is from different database system: incorrect segment size in page header");
 			return false;
 		}
-		else if (longhdr->xlp_xlog_blcksz != XLOG_BLCKSZ)
+		else if (longhdr->xlp_xlog_blcksz != wal_blck_size)
 		{
 			report_invalid_record(state,
-								  "WAL file is from different database system: incorrect XLOG_BLCKSZ in page header");
+								  "WAL file is from different database system: incorrect wal_blck_size in page header");
 			return false;
 		}
 	}
@@ -794,7 +823,7 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 	{
 		char		fname[MAXFNAMELEN];
 
-		XLogFileName(fname, state->readPageTLI, segno, state->wal_segment_size);
+		XLogFileName(fname, state->readPageTLI, segno, state->wal_file_size);
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
@@ -809,7 +838,7 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 	{
 		char		fname[MAXFNAMELEN];
 
-		XLogFileName(fname, state->readPageTLI, segno, state->wal_segment_size);
+		XLogFileName(fname, state->readPageTLI, segno, state->wal_file_size);
 
 		report_invalid_record(state,
 							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
@@ -834,7 +863,7 @@ ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		{
 			char		fname[MAXFNAMELEN];
 
-			XLogFileName(fname, state->readPageTLI, segno, state->wal_segment_size);
+			XLogFileName(fname, state->readPageTLI, segno, state->wal_file_size);
 
 			report_invalid_record(state,
 								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
@@ -897,7 +926,7 @@ XLogFindNextRecord(XLogReaderState *state, XLogRecPtr RecPtr)
 		 * ReadPageInternal() is prepared to handle that and will read at
 		 * least short page-header worth of data
 		 */
-		targetRecOff = tmpRecPtr % XLOG_BLCKSZ;
+		targetRecOff = tmpRecPtr % wal_blck_size;
 
 		/* scroll back to page boundary */
 		targetPagePtr = tmpRecPtr - targetRecOff;
@@ -928,8 +957,8 @@ XLogFindNextRecord(XLogReaderState *state, XLogRecPtr RecPtr)
 			 *
 			 * Note that record headers are MAXALIGN'ed
 			 */
-			if (MAXALIGN(header->xlp_rem_len) > (XLOG_BLCKSZ - pageHeaderSize))
-				tmpRecPtr = targetPagePtr + XLOG_BLCKSZ;
+			if (MAXALIGN(header->xlp_rem_len) > (wal_blck_size - pageHeaderSize))
+				tmpRecPtr = targetPagePtr + wal_blck_size;
 			else
 			{
 				/*
@@ -1135,17 +1164,17 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg)
 						blk->hole_length = 0;
 				}
 				else
-					blk->hole_length = BLCKSZ - blk->bimg_len;
+					blk->hole_length = rel_blck_size - blk->bimg_len;
 				datatotal += blk->bimg_len;
 
 				/*
 				 * cross-check that hole_offset > 0, hole_length > 0 and
-				 * bimg_len < BLCKSZ if the HAS_HOLE flag is set.
+				 * bimg_len < rel_blck_size if the HAS_HOLE flag is set.
 				 */
 				if ((blk->bimg_info & BKPIMAGE_HAS_HOLE) &&
 					(blk->hole_offset == 0 ||
 					 blk->hole_length == 0 ||
-					 blk->bimg_len == BLCKSZ))
+					 blk->bimg_len == rel_blck_size))
 				{
 					report_invalid_record(state,
 										  "BKPIMAGE_HAS_HOLE set, but hole offset %u length %u block image length %u at %X/%X",
@@ -1172,11 +1201,11 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg)
 				}
 
 				/*
-				 * cross-check that bimg_len < BLCKSZ if the IS_COMPRESSED
+				 * cross-check that bimg_len < rel_blck_size if the IS_COMPRESSED
 				 * flag is set.
 				 */
 				if ((blk->bimg_info & BKPIMAGE_IS_COMPRESSED) &&
-					blk->bimg_len == BLCKSZ)
+					blk->bimg_len == rel_blck_size)
 				{
 					report_invalid_record(state,
 										  "BKPIMAGE_IS_COMPRESSED set, but block image length %u at %X/%X",
@@ -1186,12 +1215,12 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg)
 				}
 
 				/*
-				 * cross-check that bimg_len = BLCKSZ if neither HAS_HOLE nor
+				 * cross-check that bimg_len = rel_blck_size if neither HAS_HOLE nor
 				 * IS_COMPRESSED flag is set.
 				 */
 				if (!(blk->bimg_info & BKPIMAGE_HAS_HOLE) &&
 					!(blk->bimg_info & BKPIMAGE_IS_COMPRESSED) &&
-					blk->bimg_len != BLCKSZ)
+					blk->bimg_len != rel_blck_size)
 				{
 					report_invalid_record(state,
 										  "neither BKPIMAGE_HAS_HOLE nor BKPIMAGE_IS_COMPRESSED set, but block image length is %u at %X/%X",
@@ -1266,11 +1295,11 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg)
 					pfree(blk->data);
 
 				/*
-				 * Force the initial request to be BLCKSZ so that we don't
+				 * Force the initial request to be rel_blck_size so that we don't
 				 * waste time with lots of trips through this stanza as a
 				 * result of WAL compression.
 				 */
-				blk->data_bufsz = MAXALIGN(Max(blk->data_len, BLCKSZ));
+				blk->data_bufsz = MAXALIGN(Max(blk->data_len, rel_blck_size));
 				blk->data = palloc(blk->data_bufsz);
 			}
 			memcpy(blk->data, ptr, blk->data_len);
@@ -1297,10 +1326,10 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg)
 			 *
 			 * In addition, force the initial request to be reasonably large
 			 * so that we don't waste time with lots of trips through this
-			 * stanza.  BLCKSZ / 2 seems like a good compromise choice.
+			 * stanza.  rel_blck_size / 2 seems like a good compromise choice.
 			 */
 			state->main_data_bufsz = MAXALIGN(Max(state->main_data_len,
-												  BLCKSZ / 2));
+												  rel_blck_size / 2));
 			state->main_data = palloc(state->main_data_bufsz);
 		}
 		memcpy(state->main_data, ptr, state->main_data_len);
@@ -1384,7 +1413,7 @@ RestoreBlockImage(XLogReaderState *record, uint8 block_id, char *page)
 {
 	DecodedBkpBlock *bkpb;
 	char	   *ptr;
-	char		tmp[BLCKSZ];
+	char		tmp[rel_blck_size];
 
 	if (!record->blocks[block_id].in_use)
 		return false;
@@ -1398,7 +1427,7 @@ RestoreBlockImage(XLogReaderState *record, uint8 block_id, char *page)
 	{
 		/* If a backup block image is compressed, decompress it */
 		if (pglz_decompress(ptr, bkpb->bimg_len, tmp,
-							BLCKSZ - bkpb->hole_length) < 0)
+							rel_blck_size - bkpb->hole_length) < 0)
 		{
 			report_invalid_record(record, "invalid compressed image at %X/%X, block %d",
 								  (uint32) (record->ReadRecPtr >> 32),
@@ -1412,7 +1441,7 @@ RestoreBlockImage(XLogReaderState *record, uint8 block_id, char *page)
 	/* generate page, taking into account hole if necessary */
 	if (bkpb->hole_length == 0)
 	{
-		memcpy(page, ptr, BLCKSZ);
+		memcpy(page, ptr, rel_blck_size);
 	}
 	else
 	{
@@ -1421,7 +1450,7 @@ RestoreBlockImage(XLogReaderState *record, uint8 block_id, char *page)
 		MemSet(page + bkpb->hole_offset, 0, bkpb->hole_length);
 		memcpy(page + (bkpb->hole_offset + bkpb->hole_length),
 			   ptr + bkpb->hole_offset,
-			   BLCKSZ - (bkpb->hole_offset + bkpb->hole_length));
+			   rel_blck_size - (bkpb->hole_offset + bkpb->hole_length));
 	}
 
 	return true;
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 3af6e19c98..21df7fe605 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -667,7 +667,7 @@ XLogRead(char *buf, int segsize, TimeLineID tli, XLogRecPtr startptr,
 	static TimeLineID sendTLI = 0;
 	static uint32 sendOff = 0;
 
-	Assert(segsize == wal_segment_size);
+	Assert(segsize == wal_file_size);
 
 	p = buf;
 	recptr = startptr;
@@ -771,7 +771,7 @@ XLogRead(char *buf, int segsize, TimeLineID tli, XLogRecPtr startptr,
  *
  * wantPage must be set to the start address of the page to read and
  * wantLength to the amount of the page that will be read, up to
- * XLOG_BLCKSZ. If the amount to be read isn't known, pass XLOG_BLCKSZ.
+ * wal_blck_size. If the amount to be read isn't known, pass wal_blck_size.
  *
  * We switch to an xlog segment from the new timeline eagerly when on a
  * historical timeline, as soon as we reach the start of the xlog segment
@@ -802,11 +802,11 @@ void
 XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wantLength)
 {
 	const XLogRecPtr lastReadPage = state->readSegNo *
-		state->wal_segment_size + state->readOff;
+		state->wal_file_size + state->readOff;
 
-	Assert(wantPage != InvalidXLogRecPtr && wantPage % XLOG_BLCKSZ == 0);
-	Assert(wantLength <= XLOG_BLCKSZ);
-	Assert(state->readLen == 0 || state->readLen <= XLOG_BLCKSZ);
+	Assert(wantPage != InvalidXLogRecPtr && wantPage % wal_blck_size == 0);
+	Assert(wantLength <= wal_blck_size);
+	Assert(state->readLen == 0 || state->readLen <= wal_blck_size);
 
 	/*
 	 * If the desired page is currently read in and valid, we have nothing to
@@ -818,7 +818,7 @@ XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wa
 	 */
 	if (lastReadPage == wantPage &&
 		state->readLen != 0 &&
-		lastReadPage + state->readLen >= wantPage + Min(wantLength, XLOG_BLCKSZ - 1))
+		lastReadPage + state->readLen >= wantPage + Min(wantLength, wal_blck_size - 1))
 		return;
 
 	/*
@@ -846,8 +846,8 @@ XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wa
 	if (state->currTLIValidUntil != InvalidXLogRecPtr &&
 		state->currTLI != ThisTimeLineID &&
 		state->currTLI != 0 &&
-		((wantPage + wantLength) / state->wal_segment_size) <
-		(state->currTLIValidUntil / state->wal_segment_size))
+		((wantPage + wantLength) / state->wal_file_size) <
+		(state->currTLIValidUntil / state->wal_file_size))
 		return;
 
 	/*
@@ -869,11 +869,11 @@ XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wa
 		 */
 		List	   *timelineHistory = readTimeLineHistory(ThisTimeLineID);
 
-		XLogRecPtr	endOfSegment = (((wantPage / state->wal_segment_size) + 1)
-									* state->wal_segment_size) - 1;
+		XLogRecPtr	endOfSegment = (((wantPage / state->wal_file_size) + 1)
+									* state->wal_file_size) - 1;
 
-		Assert(wantPage / state->wal_segment_size ==
-			   endOfSegment / state->wal_segment_size);
+		Assert(wantPage / state->wal_file_size ==
+			   endOfSegment / state->wal_file_size);
 
 		/*
 		 * Find the timeline of the last LSN on the segment containing
@@ -997,13 +997,13 @@ read_local_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
 		}
 	}
 
-	if (targetPagePtr + XLOG_BLCKSZ <= read_upto)
+	if (targetPagePtr + wal_blck_size <= read_upto)
 	{
 		/*
 		 * more than one block available; read only that block, have caller
 		 * come back if they need more.
 		 */
-		count = XLOG_BLCKSZ;
+		count = wal_blck_size;
 	}
 	else if (targetPagePtr + reqLen > read_upto)
 	{
@@ -1021,8 +1021,8 @@ read_local_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
 	 * as 'count', read the whole page anyway. It's guaranteed to be
 	 * zero-padded up to the page boundary if it's incomplete.
 	 */
-	XLogRead(cur_page, state->wal_segment_size, *pageTLI, targetPagePtr,
-			 XLOG_BLCKSZ);
+	XLogRead(cur_page, state->wal_file_size, *pageTLI, targetPagePtr,
+			 wal_blck_size);
 
 	/* number of valid bytes in the buffer */
 	return count;
diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c
index 8287de97a2..3642f2ec41 100644
--- a/src/backend/bootstrap/bootstrap.c
+++ b/src/backend/bootstrap/bootstrap.c
@@ -47,9 +47,15 @@
 #include "utils/relmapper.h"
 #include "utils/tqual.h"
 
-uint32		bootstrap_data_checksum_version = 0;	/* No checksum */
+#define DEBUG_BOOTSTRAP                   0
+
+#define debug_bootstrap(format, ...)      \
+       if (DEBUG_BOOTSTRAP)               \
+               fprintf(stderr, "bootstrap --> " format, ##__VA_ARGS__);
 
 
+uint32		bootstrap_data_checksum_version = 0;	/* No checksum */
+
 #define ALLOC(t, c) \
 	((t *) MemoryContextAllocZero(TopMemoryContext, (unsigned)(c) * sizeof(t)))
 
@@ -223,7 +229,7 @@ AuxiliaryProcessMain(int argc, char *argv[])
 	/* If no -x argument, we are a CheckerProcess */
 	MyAuxProcType = CheckerProcess;
 
-	while ((flag = getopt(argc, argv, "B:c:d:D:Fkr:x:X:-:")) != -1)
+	while ((flag = getopt(argc, argv, "B:c:d:D:Fkr:x:-:")) != -1)
 	{
 		switch (flag)
 		{
@@ -258,18 +264,6 @@ AuxiliaryProcessMain(int argc, char *argv[])
 			case 'x':
 				MyAuxProcType = atoi(optarg);
 				break;
-			case 'X':
-				{
-					int			WalSegSz = strtoul(optarg, NULL, 0);
-
-					if (!IsValidWalSegSize(WalSegSz))
-						ereport(ERROR,
-								(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-								 errmsg("-X requires a power of 2 value between 1MB and 1GB")));
-					SetConfigOption("wal_segment_size", optarg, PGC_INTERNAL,
-									PGC_S_OVERRIDE);
-				}
-				break;
 			case 'c':
 			case '-':
 				{
@@ -291,7 +285,16 @@ AuxiliaryProcessMain(int argc, char *argv[])
 											optarg)));
 					}
 
-					SetConfigOption(name, value, PGC_POSTMASTER, PGC_S_ARGV);
+					debug_bootstrap("config: name = %s, value = %s\n", name, value);
+					if (strcmp(name, "block_size") == 0
+						|| strcmp(name, "segment_size") == 0
+						|| strcmp(name, "wal_block_size") == 0
+						|| strcmp(name, "wal_segment_size") == 0) {
+						SetConfigOption(name, value, PGC_INTERNAL, PGC_S_OVERRIDE);
+					} else {
+						SetConfigOption(name, value, PGC_POSTMASTER, PGC_S_ARGV);
+					}
+
 					free(name);
 					if (value)
 						free(value);
diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f7de742a56..385aec809c 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -150,7 +150,7 @@
  * than that, so changes in that data structure won't affect user-visible
  * restrictions.
  */
-#define NOTIFY_PAYLOAD_MAX_LENGTH	(BLCKSZ - NAMEDATALEN - 128)
+#define NOTIFY_PAYLOAD_MAX_LENGTH	(rel_blck_size - NAMEDATALEN - 128)
 
 /*
  * Struct representing an entry in the global notify queue
@@ -170,13 +170,15 @@ typedef struct AsyncQueueEntry
 	Oid			dboid;			/* sender's database OID */
 	TransactionId xid;			/* sender's XID */
 	int32		srcPid;			/* sender's PID */
-	char		data[NAMEDATALEN + NOTIFY_PAYLOAD_MAX_LENGTH];
+	char*		data;
 } AsyncQueueEntry;
 
+
 /* Currently, no field of AsyncQueueEntry requires more than int alignment */
 #define QUEUEALIGN(len)		INTALIGN(len)
 
 #define AsyncQueueEntryEmptySize	(offsetof(AsyncQueueEntry, data) + 2)
+#define SizeOfAsyncQueueEntryData	(sizeof(char) * (NAMEDATALEN + NOTIFY_PAYLOAD_MAX_LENGTH))
 
 /*
  * Struct describing a queue position, and assorted macros for working with it
@@ -266,7 +268,7 @@ static AsyncQueueControl *asyncQueueControl;
 static SlruCtlData AsyncCtlData;
 
 #define AsyncCtl					(&AsyncCtlData)
-#define QUEUE_PAGESIZE				BLCKSZ
+#define QUEUE_PAGESIZE				rel_blck_size
 #define QUEUE_FULL_WARN_INTERVAL	5000	/* warn at most once every 5s */
 
 /*
@@ -280,7 +282,7 @@ static SlruCtlData AsyncCtlData;
  *
  * The most data we can have in the queue at a time is QUEUE_MAX_PAGE/2
  * pages, because more than that would confuse slru.c into thinking there
- * was a wraparound condition.  With the default BLCKSZ this means there
+ * was a wraparound condition.  With the default rel_blck_size this means there
  * can be up to 8GB of queued-and-not-read data.
  *
  * Note: it's possible to redefine QUEUE_MAX_PAGE with a smaller multiple of
@@ -1328,6 +1330,8 @@ asyncQueueAddEntries(ListCell *nextNotify)
 	int			offset;
 	int			slotno;
 
+	qe.data = palloc(SizeOfAsyncQueueEntryData);
+
 	/* We hold both AsyncQueueLock and AsyncCtlLock during this operation */
 	LWLockAcquire(AsyncCtlLock, LW_EXCLUSIVE);
 
@@ -1347,6 +1351,7 @@ asyncQueueAddEntries(ListCell *nextNotify)
 	/* Fetch the current page */
 	pageno = QUEUE_POS_PAGE(queue_head);
 	slotno = SimpleLruReadPage(AsyncCtl, pageno, true, InvalidTransactionId);
+
 	/* Note we mark the page dirty before writing in it */
 	AsyncCtl->shared->page_dirty[slotno] = true;
 
@@ -1405,6 +1410,8 @@ asyncQueueAddEntries(ListCell *nextNotify)
 
 	LWLockRelease(AsyncCtlLock);
 
+	pfree(qe.data);
+
 	return nextNotify;
 }
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index d979ce266d..166410a303 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10982,7 +10982,7 @@ copy_relation_data(SMgrRelation src, SMgrRelation dst,
 	 * can seriously hurt transfer speed to and from the kernel; not to
 	 * mention possibly making log_newpage's accesses to the page header fail.
 	 */
-	buf = (char *) palloc(BLCKSZ);
+	buf = (char *) palloc(rel_blck_size);
 	page = (Page) buf;
 
 	/*
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 6587db77ac..aaf25bcff9 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -363,9 +363,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 			write_rate = 0;
 			if ((secs > 0) || (usecs > 0))
 			{
-				read_rate = (double) BLCKSZ * VacuumPageMiss / (1024 * 1024) /
+				read_rate = (double) rel_blck_size * VacuumPageMiss / (1024 * 1024) /
 					(secs + usecs / 1000000.0);
-				write_rate = (double) BLCKSZ * VacuumPageDirty / (1024 * 1024) /
+			write_rate = (double) rel_blck_size * VacuumPageDirty / (1024 * 1024) /
 					(secs + usecs / 1000000.0);
 			}
 
diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c
index 07c8852fca..85dc52e046 100644
--- a/src/backend/executor/execGrouping.c
+++ b/src/backend/executor/execGrouping.c
@@ -43,6 +43,7 @@ static int	TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tup
 #define SH_STORE_HASH
 #define SH_GET_HASH(tb, a) a->hash
 #define SH_DEFINE
+#define SH_SIZEOF_ELEMENT_TYPE	sizeof(TupleHashEntryData)
 #include "lib/simplehash.h"
 
 
diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c
index acfe6b263c..e727eba757 100644
--- a/src/backend/nodes/tidbitmap.c
+++ b/src/backend/nodes/tidbitmap.c
@@ -13,7 +13,7 @@
  * fact that a particular page needs to be visited.
  *
  * The "lossy" storage uses one bit per disk page, so at the standard 8K
- * BLCKSZ, we can represent all pages in 64Gb of disk space in about 1Mb
+ * rel_blck_size, we can represent all pages in 64Gb of disk space in about 1Mb
  * of memory.  People pushing around tables of that size should have a
  * couple of Mb to spare, so we don't worry about providing a second level
  * of lossiness.  In theory we could fall back to page ranges at some
@@ -47,6 +47,15 @@
 #include "utils/dsa.h"
 #include "utils/hashutils.h"
 
+#define DEBUG_TIDBITMAP              0
+
+#define debug_tidbitmap(format, ...)      				\
+       if (DEBUG_TIDBITMAP) {               				\
+               fprintf(stderr, "tidbitmap --> " format, ##__VA_ARGS__); \
+	       fflush(stderr);						\
+       }
+
+
 /*
  * The maximum number of tuples per page is not large (typically 256 with
  * 8K pages, or 1024 with 32K pages).  So there's not much point in making
@@ -70,7 +79,9 @@
  * too different.  But we also want PAGES_PER_CHUNK to be a power of 2 to
  * avoid expensive integer remainder operations.  So, define it like this:
  */
-#define PAGES_PER_CHUNK  (BLCKSZ / 32)
+#define PAGES_PER_CHUNK  (rel_blck_size / 32)
+//#define BLCKSZ	8192
+//#define PAGES_PER_CHUNK  (BLCKSZ / 32)
 
 /* We use BITS_PER_BITMAPWORD and typedef bitmapword from nodes/bitmapset.h */
 
@@ -80,7 +91,7 @@
 /* number of active words for an exact page: */
 #define WORDS_PER_PAGE	((MAX_TUPLES_PER_PAGE - 1) / BITS_PER_BITMAPWORD + 1)
 /* number of active words for a lossy chunk: */
-#define WORDS_PER_CHUNK  ((PAGES_PER_CHUNK - 1) / BITS_PER_BITMAPWORD + 1)
+#define WORDS_PER_CHUNK	((PAGES_PER_CHUNK - 1) / BITS_PER_BITMAPWORD + 1)
 
 /*
  * The hashtable entries are represented by this data structure.  For
@@ -98,13 +109,19 @@
  */
 typedef struct PagetableEntry
 {
-	BlockNumber blockno;		/* page number (hashtable key) */
+	BlockNumber blockno;			/* page number (hashtable key) */
 	char		status;			/* hash entry status */
 	bool		ischunk;		/* T = lossy storage, F = exact */
 	bool		recheck;		/* should the tuples be rechecked? */
-	bitmapword	words[Max(WORDS_PER_PAGE, WORDS_PER_CHUNK)];
+	//bitmapword*	words;			/* MUST be last. See tbm_pte_zero() */
+	//bitmapword	words[Max(WORDS_PER_PAGE, WORDS_PER_CHUNK)];		/* MUST be last. See tbm_pte_zero() */
+	bitmapword	words[FLEXIBLE_ARRAY_MEMBER];		/* MUST be last. See tbm_pte_zero() */
 } PagetableEntry;
 
+#define NR_BITMAP_WORDS		Max(WORDS_PER_PAGE, WORDS_PER_CHUNK)
+#define SIZEOF_BITMAP_WORDS 	(sizeof(bitmapword) * NR_BITMAP_WORDS)
+#define SIZEOF_PAGETABLE_ENTRY	(offsetof(PagetableEntry, words) + SIZEOF_BITMAP_WORDS)
+
 /*
  * Holds array of pagetable entries.
  */
@@ -157,7 +174,6 @@ struct TIDBitmap
 	int			nchunks;		/* number of lossy entries in pagetable */
 	TBMIteratingState iterating;	/* tbm_begin_iterate called? */
 	uint32		lossify_start;	/* offset to start lossifying hashtable at */
-	PagetableEntry entry1;		/* used when status == TBM_ONE_PAGE */
 	/* these are valid when iterating is true: */
 	PagetableEntry **spages;	/* sorted exact-page list, or NULL */
 	PagetableEntry **schunks;	/* sorted lossy-chunk list, or NULL */
@@ -166,8 +182,14 @@ struct TIDBitmap
 	dsa_pointer ptpages;		/* dsa_pointer to the page array */
 	dsa_pointer ptchunks;		/* dsa_pointer to the chunk array */
 	dsa_area   *dsa;			/* reference to per-query dsa area */
+
+	/* MUST BE LAST */
+	PagetableEntry entry1;		/* used when status == TBM_ONE_PAGE */
 };
 
+#define SIZEOF_TID_BITMAP	(offsetof(struct TIDBitmap, entry1) + SIZEOF_PAGETABLE_ENTRY)
+
+
 /*
  * When iterating over a bitmap in sorted order, a TBMIterator is used to
  * track our progress.  There can be several iterators scanning the same
@@ -249,6 +271,8 @@ static int tbm_shared_comparator(const void *left, const void *right,
 #define SH_SCOPE static inline
 #define SH_DEFINE
 #define SH_DECLARE
+#define SH_SIZEOF_ELEMENT_TYPE	SIZEOF_PAGETABLE_ENTRY
+
 #include "lib/simplehash.h"
 
 
@@ -267,11 +291,11 @@ tbm_create(long maxbytes, dsa_area *dsa)
 	TIDBitmap  *tbm;
 
 	/* Create the TIDBitmap struct and zero all its fields */
-	tbm = makeNode(TIDBitmap);
+	tbm = makeNodeSize(TIDBitmap, SIZEOF_TID_BITMAP);
+	debug_tidbitmap("\ntbs_create with maxbytes = %ld, tbm = %p\n", maxbytes, tbm);
 
 	tbm->mcxt = CurrentMemoryContext;
 	tbm->status = TBM_EMPTY;
-
 	tbm->maxentries = (int) tbm_calculate_entries(maxbytes);
 	tbm->lossify_start = 0;
 	tbm->dsa = dsa;
@@ -293,6 +317,7 @@ tbm_create_pagetable(TIDBitmap *tbm)
 	Assert(tbm->status != TBM_HASH);
 	Assert(tbm->pagetable == NULL);
 
+	debug_tidbitmap("tbm_create_pagetable\n");
 	tbm->pagetable = pagetable_create(tbm->mcxt, 128, tbm);
 
 	/* If entry1 is valid, push it into the hashtable */
@@ -306,9 +331,14 @@ tbm_create_pagetable(TIDBitmap *tbm)
 								tbm->entry1.blockno,
 								&found);
 		Assert(!found);
+	
+		debug_tidbitmap("entry1 = %p, page = %p\n", &tbm->entry1, page);
 		oldstatus = page->status;
-		memcpy(page, &tbm->entry1, sizeof(PagetableEntry));
+		memcpy(page, &tbm->entry1, SIZEOF_PAGETABLE_ENTRY);
 		page->status = oldstatus;
+
+		debug_tidbitmap("tbm = %p, npages = %d, page = %p, oldstatus = %d, status = %d\n",
+					tbm, tbm->npages, page, oldstatus, page->status);
 	}
 
 	tbm->status = TBM_HASH;
@@ -380,6 +410,10 @@ tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
 	PagetableEntry *page = NULL;	/* only valid when currblk is valid */
 	int			i;
 
+	debug_tidbitmap("tbm_add_tuples for ntids = %d starting at %d/%d/%d\n",
+			ntids, tids->ip_blkid.bi_hi, tids->ip_blkid.bi_lo, tids->ip_posid);
+
+
 	Assert(tbm->iterating == TBM_NOT_ITERATING);
 	for (i = 0; i < ntids; i++)
 	{
@@ -420,6 +454,7 @@ tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
 			wordnum = WORDNUM(off - 1);
 			bitnum = BITNUM(off - 1);
 		}
+
 		page->words[wordnum] |= ((bitmapword) 1 << bitnum);
 		page->recheck |= recheck;
 
@@ -441,6 +476,7 @@ tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids,
 void
 tbm_add_page(TIDBitmap *tbm, BlockNumber pageno)
 {
+	debug_tidbitmap("tbm_add_page\n");
 	/* Enter the page in the bitmap, or mark it lossy if already present */
 	tbm_mark_page_lossy(tbm, pageno);
 	/* If we went over the memory limit, lossify some more pages */
@@ -457,6 +493,7 @@ void
 tbm_union(TIDBitmap *a, const TIDBitmap *b)
 {
 	Assert(!a->iterating);
+	debug_tidbitmap("tbm_union\n");
 	/* Nothing to do if b is empty */
 	if (b->nentries == 0)
 		return;
@@ -482,6 +519,7 @@ tbm_union_page(TIDBitmap *a, const PagetableEntry *bpage)
 	PagetableEntry *apage;
 	int			wordnum;
 
+	debug_tidbitmap("tbm_union_page\n");
 	if (bpage->ischunk)
 	{
 		/* Scan b's chunk, mark each indicated page lossy in a */
@@ -539,6 +577,9 @@ void
 tbm_intersect(TIDBitmap *a, const TIDBitmap *b)
 {
 	Assert(!a->iterating);
+
+	debug_tidbitmap("tbm_intersect\n");
+
 	/* Nothing to do if a is empty */
 	if (a->nentries == 0)
 		return;
@@ -571,7 +612,9 @@ tbm_intersect(TIDBitmap *a, const TIDBitmap *b)
 					a->nchunks--;
 				else
 					a->npages--;
+
 				a->nentries--;
+
 				if (!pagetable_delete(a->pagetable, apage->blockno))
 					elog(ERROR, "hash table corrupted");
 			}
@@ -590,6 +633,7 @@ tbm_intersect_page(TIDBitmap *a, PagetableEntry *apage, const TIDBitmap *b)
 	const PagetableEntry *bpage;
 	int			wordnum;
 
+	debug_tidbitmap("tbm_intersect_page\n");
 	if (apage->ischunk)
 	{
 		/* Scan each bit in chunk, try to clear */
@@ -691,6 +735,7 @@ tbm_begin_iterate(TIDBitmap *tbm)
 
 	Assert(tbm->iterating != TBM_ITERATING_SHARED);
 
+	debug_tidbitmap("tbm_begin_iterate\n");
 	/*
 	 * Create the TBMIterator struct, with enough trailing space to serve the
 	 * needs of the TBMIterateResult sub-struct.
@@ -729,14 +774,21 @@ tbm_begin_iterate(TIDBitmap *tbm)
 								   tbm->nchunks * sizeof(PagetableEntry *));
 
 		npages = nchunks = 0;
+		debug_tidbitmap("initial: npages = %d, nchunks = %d, tbm->npages = %d\n", npages, nchunks, tbm->npages);
+		debug_tidbitmap("initial: tbm->pagetable->size = %lu\n", tbm->pagetable->size);
+		debug_tidbitmap("initial: tbm->pagetable->sizemask = %d\n", tbm->pagetable->sizemask);
 		pagetable_start_iterate(tbm->pagetable, &i);
+		debug_tidbitmap("initial: (after start_iterate) i->cur = %d, i->end = %d\n", (&i)->cur, (&i)->end);
 		while ((page = pagetable_iterate(tbm->pagetable, &i)) != NULL)
 		{
 			if (page->ischunk)
 				tbm->schunks[nchunks++] = page;
 			else
 				tbm->spages[npages++] = page;
+			debug_tidbitmap("loop: npages = %d, nchunks = %d, tbm->npages = %d\n", npages, nchunks, tbm->npages);
 		}
+
+		debug_tidbitmap("final: npages = %d, nchunks = %d, tbm->npages = %d\n", npages, nchunks, tbm->npages);
 		Assert(npages == tbm->npages);
 		Assert(nchunks == tbm->nchunks);
 		if (npages > 1)
@@ -843,10 +895,11 @@ tbm_prepare_shared_iterate(TIDBitmap *tbm)
 			 * initialize it, and directly store its index (i.e. 0) in the
 			 * page array.
 			 */
-			tbm->dsapagetable = dsa_allocate(tbm->dsa, sizeof(PTEntryArray) +
-											 sizeof(PagetableEntry));
+			tbm->dsapagetable = dsa_allocate(tbm->dsa, sizeof(PTEntryArray) + SIZEOF_PAGETABLE_ENTRY);
+			//tbm->dsapagetable = dsa_allocate(tbm->dsa, sizeof(PTEntryArray) + sizeof(PagetableEntry));
 			ptbase = dsa_get_address(tbm->dsa, tbm->dsapagetable);
-			memcpy(ptbase->ptentry, &tbm->entry1, sizeof(PagetableEntry));
+			//memcpy(ptbase->ptentry, &tbm->entry1, sizeof(PagetableEntry));
+			memcpy(ptbase->ptentry, &tbm->entry1, SIZEOF_PAGETABLE_ENTRY);
 			ptpages->index[0] = 0;
 		}
 
@@ -1202,6 +1255,7 @@ tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno)
 {
 	PagetableEntry *page;
 	bool		found;
+	PagetableEntry* p;
 
 	if (tbm->status == TBM_EMPTY)
 	{
@@ -1218,11 +1272,14 @@ tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno)
 			if (page->blockno == pageno)
 				return page;
 			/* Time to switch from one page to a hashtable */
+			debug_tidbitmap("tbm = %p, switch to hashtable\n", tbm);
 			tbm_create_pagetable(tbm);
+			debug_tidbitmap("tbm = %p, after switch npages = %d\n", tbm, tbm->npages);
 		}
 
 		/* Look up or create an entry */
 		page = pagetable_insert(tbm->pagetable, pageno, &found);
+		debug_tidbitmap("tbm = %p, page = %p, found = %d\n", tbm, page, found);
 	}
 
 	/* Initialize it if not present before */
@@ -1230,12 +1287,19 @@ tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno)
 	{
 		char		oldstatus = page->status;
 
-		MemSet(page, 0, sizeof(PagetableEntry));
+		p = (PagetableEntry*)((char*)page + 0x30);
+		debug_tidbitmap("START tbm = %p, npages = %d, page = %p, status = %d\n", tbm, tbm->npages, p, p->status);
+		MemSet(page, 0, SIZEOF_PAGETABLE_ENTRY);
 		page->status = oldstatus;
 		page->blockno = pageno;
 		/* must count it too */
 		tbm->nentries++;
 		tbm->npages++;
+		debug_tidbitmap("tbm = %p, npages = %d, page = %p, oldstatus = %d, status = %d\n", tbm, tbm->npages, page, oldstatus, page->status);
+		p = (PagetableEntry*)((char*)page + 0x30);
+		debug_tidbitmap("END tbm = %p, npages = %d, page = %p, status = %d\n", tbm, tbm->npages, p, p->status);
+		debug_tidbitmap("NR_BITMAP_WORDS = %d, SIZEOF_PAGETABLE_ENTRY = %lu\n", NR_BITMAP_WORDS, SIZEOF_PAGETABLE_ENTRY);
+		debug_tidbitmap("offseof = %lu\n", offsetof(PagetableEntry, words));
 	}
 
 	return page;
@@ -1287,6 +1351,7 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 	int			bitno;
 	int			wordnum;
 	int			bitnum;
+	bitmapword*		bmw;
 
 	/* We force the bitmap into hashtable mode whenever it's lossy */
 	if (tbm->status != TBM_HASH)
@@ -1301,39 +1366,55 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 	 */
 	if (bitno != 0)
 	{
+		page = pagetable_lookup(tbm->pagetable, pageno);
+		if (page != NULL)
+			bmw = page->words;
+
 		if (pagetable_delete(tbm->pagetable, pageno))
 		{
 			/* It was present, so adjust counts */
 			tbm->nentries--;
 			tbm->npages--;		/* assume it must have been non-lossy */
+
+			if (bmw != NULL)
+				pfree(bmw);
 		}
 	}
 
 	/* Look up or create entry for chunk-header page */
 	page = pagetable_insert(tbm->pagetable, chunk_pageno, &found);
+	//if (!found)
+	//	page->words = MemoryContextAllocZero(tbm->mcxt, SIZEOF_BITMAP_WORD);
+
+	//tbm_pte_zero(page);
 
 	/* Initialize it if not present before */
 	if (!found)
 	{
-		char		oldstatus = page->status;
-
-		MemSet(page, 0, sizeof(PagetableEntry));
-		page->status = oldstatus;
-		page->blockno = chunk_pageno;
-		page->ischunk = true;
+		char 			oldstatus;
+               	//MemSet(page, 0, sizeof(PagetableEntry));
+		oldstatus = page->status;
+               	MemSet(page, 0, SIZEOF_PAGETABLE_ENTRY);
+        	page->status = oldstatus;
+		debug_tidbitmap("tbm = %p, npages = %d, page = %p, oldstatus = %d, status = %d\n", tbm, tbm->npages, page, oldstatus, page->status);
+        	page->blockno = chunk_pageno;
+        	page->ischunk = true;
 		/* must count it too */
 		tbm->nentries++;
 		tbm->nchunks++;
 	}
 	else if (!page->ischunk)
 	{
-		char		oldstatus = page->status;
+		char 			oldstatus;
+               	/* chunk header page was formerly non-lossy, make it lossy */
+               	//MemSet(page, 0, sizeof(PagetableEntry));
+		oldstatus = page->status;
+               	MemSet(page, 0, SIZEOF_PAGETABLE_ENTRY);
+        	page->status = oldstatus;
+		debug_tidbitmap("tbm = %p, npages = %d, page = %p, oldstatus = %d, status = %d\n", tbm, tbm->npages, page, oldstatus, page->status);
+        	page->blockno = chunk_pageno;
+        	page->ischunk = true;
 
-		/* chunk header page was formerly non-lossy, make it lossy */
-		MemSet(page, 0, sizeof(PagetableEntry));
-		page->status = oldstatus;
-		page->blockno = chunk_pageno;
-		page->ischunk = true;
 		/* we assume it had some tuple bit(s) set, so mark it lossy */
 		page->words[0] = ((bitmapword) 1 << 0);
 		/* adjust counts */
@@ -1552,8 +1633,10 @@ tbm_calculate_entries(double maxbytes)
 	 * for our purpose.  Also count an extra Pointer per entry for the arrays
 	 * created during iteration readout.
 	 */
+	//nbuckets = maxbytes /
+	//	(sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer));
 	nbuckets = maxbytes /
-		(sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer));
+		(SIZEOF_PAGETABLE_ENTRY + sizeof(Pointer) + sizeof(Pointer));
 	nbuckets = Min(nbuckets, INT_MAX - 1);	/* safety limit */
 	nbuckets = Max(nbuckets, 16);	/* sanity limit */
 
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index d11bf19e30..ef9889adbe 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -1686,7 +1686,7 @@ cost_sort(Path *path, PlannerInfo *root,
 		/*
 		 * We'll have to use a disk-based sort of all the tuples
 		 */
-		double		npages = ceil(input_bytes / BLCKSZ);
+		double		npages = ceil(input_bytes / rel_blck_size);
 		double		nruns = input_bytes / sort_mem_bytes;
 		double		mergeorder = tuplesort_merge_order(sort_mem_bytes);
 		double		log_runs;
@@ -1850,7 +1850,7 @@ cost_material(Path *path,
 	 */
 	if (nbytes > work_mem_bytes)
 	{
-		double		npages = ceil(nbytes / BLCKSZ);
+		double		npages = ceil(nbytes / rel_blck_size);
 
 		run_cost += seq_page_cost * npages;
 	}
@@ -3452,7 +3452,7 @@ cost_rescan(PlannerInfo *root, Path *path,
 				if (nbytes > work_mem_bytes)
 				{
 					/* It will spill, so account for re-read cost */
-					double		npages = ceil(nbytes / BLCKSZ);
+					double		npages = ceil(nbytes / rel_blck_size);
 
 					run_cost += seq_page_cost * npages;
 				}
@@ -3479,7 +3479,7 @@ cost_rescan(PlannerInfo *root, Path *path,
 				if (nbytes > work_mem_bytes)
 				{
 					/* It will spill, so account for re-read cost */
-					double		npages = ceil(nbytes / BLCKSZ);
+					double		npages = ceil(nbytes / rel_blck_size);
 
 					run_cost += seq_page_cost * npages;
 				}
@@ -5126,7 +5126,7 @@ relation_byte_size(double tuples, int width)
 static double
 page_size(double tuples, int width)
 {
-	return ceil(relation_byte_size(tuples, width) / BLCKSZ);
+	return ceil(relation_byte_size(tuples, width) / rel_blck_size);
 }
 
 /*
diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index 9d35a41e22..0e207fbd58 100644
--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -1024,7 +1024,7 @@ estimate_rel_size(Relation rel, int32 *attr_widths,
 				tuple_width += MAXALIGN(SizeofHeapTupleHeader);
 				tuple_width += sizeof(ItemIdData);
 				/* note: integer division is intentional here */
-				density = (BLCKSZ - SizeOfPageHeaderData) / tuple_width;
+				density = (rel_blck_size - SizeOfPageHeaderData) / tuple_width;
 			}
 			*tuples = rint(density * (double) curpages);
 
diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 7e0af10c4d..0e9a8abcf5 100644
--- a/src/backend/postmaster/checkpointer.c
+++ b/src/backend/postmaster/checkpointer.c
@@ -624,7 +624,7 @@ CheckArchiveTimeout(void)
 			 * If the returned pointer points exactly to a segment boundary,
 			 * assume nothing happened.
 			 */
-			if (XLogSegmentOffset(switchpoint, wal_segment_size) != 0)
+			if (XLogSegmentOffset(switchpoint, wal_file_size) != 0)
 				elog(DEBUG1, "write-ahead log switch forced (archive_timeout=%d)",
 					 XLogArchiveTimeout);
 		}
@@ -783,7 +783,7 @@ IsCheckpointOnSchedule(double progress)
 	else
 		recptr = GetInsertRecPtr();
 	elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) /
-					 wal_segment_size) / CheckPointSegments;
+					 wal_file_size) / CheckPointSegments;
 
 	if (progress < elapsed_xlogs)
 	{
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index cbcb3dbec3..aaeba39c17 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -361,10 +361,10 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 		 * shouldn't be such files, but if there are, there's little harm in
 		 * including them.
 		 */
-		XLByteToSeg(startptr, startsegno, wal_segment_size);
-		XLogFileName(firstoff, ThisTimeLineID, startsegno, wal_segment_size);
-		XLByteToPrevSeg(endptr, endsegno, wal_segment_size);
-		XLogFileName(lastoff, ThisTimeLineID, endsegno, wal_segment_size);
+		XLByteToSeg(startptr, startsegno, wal_file_size);
+		XLogFileName(firstoff, ThisTimeLineID, startsegno, wal_file_size);
+		XLByteToPrevSeg(endptr, endsegno, wal_file_size);
+		XLogFileName(lastoff, ThisTimeLineID, endsegno, wal_file_size);
 
 		dir = AllocateDir("pg_wal");
 		if (!dir)
@@ -419,13 +419,13 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 		 * Sanity check: the first and last segment should cover startptr and
 		 * endptr, with no gaps in between.
 		 */
-		XLogFromFileName(walFiles[0], &tli, &segno, wal_segment_size);
+		XLogFromFileName(walFiles[0], &tli, &segno, wal_file_size);
 		if (segno != startsegno)
 		{
 			char		startfname[MAXFNAMELEN];
 
 			XLogFileName(startfname, ThisTimeLineID, startsegno,
-						 wal_segment_size);
+						 wal_file_size);
 			ereport(ERROR,
 					(errmsg("could not find WAL file \"%s\"", startfname)));
 		}
@@ -434,13 +434,13 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 			XLogSegNo	currsegno = segno;
 			XLogSegNo	nextsegno = segno + 1;
 
-			XLogFromFileName(walFiles[i], &tli, &segno, wal_segment_size);
+			XLogFromFileName(walFiles[i], &tli, &segno, wal_file_size);
 			if (!(nextsegno == segno || currsegno == segno))
 			{
 				char		nextfname[MAXFNAMELEN];
 
 				XLogFileName(nextfname, ThisTimeLineID, nextsegno,
-							 wal_segment_size);
+							 wal_file_size);
 				ereport(ERROR,
 						(errmsg("could not find WAL file \"%s\"", nextfname)));
 			}
@@ -449,7 +449,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 		{
 			char		endfname[MAXFNAMELEN];
 
-			XLogFileName(endfname, ThisTimeLineID, endsegno, wal_segment_size);
+			XLogFileName(endfname, ThisTimeLineID, endsegno, wal_file_size);
 			ereport(ERROR,
 					(errmsg("could not find WAL file \"%s\"", endfname)));
 		}
@@ -463,7 +463,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 			pgoff_t		len = 0;
 
 			snprintf(pathbuf, MAXPGPATH, XLOGDIR "/%s", walFiles[i]);
-			XLogFromFileName(walFiles[i], &tli, &segno, wal_segment_size);
+			XLogFromFileName(walFiles[i], &tli, &segno, wal_file_size);
 
 			fp = AllocateFile(pathbuf, "rb");
 			if (fp == NULL)
@@ -485,7 +485,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 						(errcode_for_file_access(),
 						 errmsg("could not stat file \"%s\": %m",
 								pathbuf)));
-			if (statbuf.st_size != wal_segment_size)
+			if (statbuf.st_size != wal_file_size)
 			{
 				CheckXLogRemoved(segno, tli);
 				ereport(ERROR,
@@ -497,7 +497,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 			_tarWriteHeader(pathbuf, NULL, &statbuf, false);
 
 			while ((cnt = fread(buf, 1,
-								Min(sizeof(buf), wal_segment_size - len),
+								Min(sizeof(buf), wal_file_size - len),
 								fp)) > 0)
 			{
 				CheckXLogRemoved(segno, tli);
@@ -509,11 +509,11 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 				len += cnt;
 				throttle(cnt);
 
-				if (len == wal_segment_size)
+				if (len == wal_file_size)
 					break;
 			}
 
-			if (len != wal_segment_size)
+			if (len != wal_file_size)
 			{
 				CheckXLogRemoved(segno, tli);
 				ereport(ERROR,
@@ -521,7 +521,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 						 errmsg("unexpected WAL file size \"%s\"", walFiles[i])));
 			}
 
-			/* wal_segment_size is a multiple of 512, so no need for padding */
+			/* wal_file_size is a multiple of 512, so no need for padding */
 
 			FreeFile(fp);
 
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index bca585fc27..4ebd54118d 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -163,7 +163,7 @@ StartupDecodingContext(List *output_plugin_options,
 
 	ctx->slot = slot;
 
-	ctx->reader = XLogReaderAllocate(wal_segment_size, read_page, ctx);
+	ctx->reader = XLogReaderAllocate(wal_file_size, read_page, ctx);
 	if (!ctx->reader)
 		ereport(ERROR,
 				(errcode(ERRCODE_OUT_OF_MEMORY),
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index dc0ad5b0e7..21d3825465 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2038,15 +2038,15 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
 		 * multiple segments tho
 		 */
 		if (fd == -1 ||
-			!XLByteInSeg(change->lsn, curOpenSegNo, wal_segment_size))
+			!XLByteInSeg(change->lsn, curOpenSegNo, wal_file_size))
 		{
 			XLogRecPtr	recptr;
 
 			if (fd != -1)
 				CloseTransientFile(fd);
 
-			XLByteToSeg(change->lsn, curOpenSegNo, wal_segment_size);
-			XLogSegNoOffsetToRecPtr(curOpenSegNo, 0, recptr, wal_segment_size);
+			XLByteToSeg(change->lsn, curOpenSegNo, wal_file_size);
+			XLogSegNoOffsetToRecPtr(curOpenSegNo, 0, recptr, wal_file_size);
 
 			/*
 			 * No need to care about TLIs here, only used during a single run,
@@ -2273,7 +2273,7 @@ ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn,
 	txn->nentries_mem = 0;
 	Assert(dlist_is_empty(&txn->changes));
 
-	XLByteToSeg(txn->final_lsn, last_segno, wal_segment_size);
+	XLByteToSeg(txn->final_lsn, last_segno, wal_file_size);
 
 	while (restored < max_changes_in_memory && *segno <= last_segno)
 	{
@@ -2288,11 +2288,11 @@ ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn,
 			/* first time in */
 			if (*segno == 0)
 			{
-				XLByteToSeg(txn->first_lsn, *segno, wal_segment_size);
+				XLByteToSeg(txn->first_lsn, *segno, wal_file_size);
 			}
 
 			Assert(*segno != 0 || dlist_is_empty(&txn->changes));
-			XLogSegNoOffsetToRecPtr(*segno, 0, recptr, wal_segment_size);
+			XLogSegNoOffsetToRecPtr(*segno, 0, recptr, wal_file_size);
 
 			/*
 			 * No need to care about TLIs here, only used during a single run,
@@ -2529,8 +2529,8 @@ ReorderBufferRestoreCleanup(ReorderBuffer *rb, ReorderBufferTXN *txn)
 	Assert(txn->first_lsn != InvalidXLogRecPtr);
 	Assert(txn->final_lsn != InvalidXLogRecPtr);
 
-	XLByteToSeg(txn->first_lsn, first, wal_segment_size);
-	XLByteToSeg(txn->final_lsn, last, wal_segment_size);
+	XLByteToSeg(txn->first_lsn, first, wal_file_size);
+	XLByteToSeg(txn->final_lsn, last, wal_file_size);
 
 	/* iterate over all possible filenames, and delete them */
 	for (cur = first; cur <= last; cur++)
@@ -2538,7 +2538,7 @@ ReorderBufferRestoreCleanup(ReorderBuffer *rb, ReorderBufferTXN *txn)
 		char		path[MAXPGPATH];
 		XLogRecPtr	recptr;
 
-		XLogSegNoOffsetToRecPtr(cur, 0, recptr, wal_segment_size);
+		XLogSegNoOffsetToRecPtr(cur, 0, recptr, wal_file_size);
 
 		sprintf(path, "pg_replslot/%s/xid-%u-lsn-%X-%X.snap",
 				NameStr(MyReplicationSlot->data.name), txn->xid,
diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index 0d27b6f39e..a291d031f3 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -1039,7 +1039,7 @@ ReplicationSlotReserveWal(void)
 		 * the new restart_lsn above, so normally we should never need to loop
 		 * more than twice.
 		 */
-		XLByteToSeg(slot->data.restart_lsn, segno, wal_segment_size);
+		XLByteToSeg(slot->data.restart_lsn, segno, wal_file_size);
 		if (XLogGetLastRemovedSegno() < segno)
 			break;
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index fe4e085938..0181b4e686 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -619,7 +619,7 @@ WalReceiverMain(void)
 			 * Create .done file forcibly to prevent the streamed segment from
 			 * being archived later.
 			 */
-			XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_segment_size);
+			XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_file_size);
 			if (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)
 				XLogArchiveForceDone(xlogfname);
 			else
@@ -949,7 +949,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 	{
 		int			segbytes;
 
-		if (recvFile < 0 || !XLByteInSeg(recptr, recvSegNo, wal_segment_size))
+		if (recvFile < 0 || !XLByteInSeg(recptr, recvSegNo, wal_file_size))
 		{
 			bool		use_existent;
 
@@ -978,7 +978,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 				 * Create .done file forcibly to prevent the streamed segment
 				 * from being archived later.
 				 */
-				XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_segment_size);
+				XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_file_size);
 				if (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)
 					XLogArchiveForceDone(xlogfname);
 				else
@@ -987,7 +987,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 			recvFile = -1;
 
 			/* Create/use new log file */
-			XLByteToSeg(recptr, recvSegNo, wal_segment_size);
+			XLByteToSeg(recptr, recvSegNo, wal_file_size);
 			use_existent = true;
 			recvFile = XLogFileInit(recvSegNo, &use_existent, true);
 			recvFileTLI = ThisTimeLineID;
@@ -995,10 +995,10 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 		}
 
 		/* Calculate the start offset of the received logs */
-		startoff = XLogSegmentOffset(recptr, wal_segment_size);
+		startoff = XLogSegmentOffset(recptr, wal_file_size);
 
-		if (startoff + nbytes > wal_segment_size)
-			segbytes = wal_segment_size - startoff;
+		if (startoff + nbytes > wal_file_size)
+			segbytes = wal_file_size - startoff;
 		else
 			segbytes = nbytes;
 
diff --git a/src/backend/replication/walreceiverfuncs.c b/src/backend/replication/walreceiverfuncs.c
index b1f28d0fc4..5553991ab3 100644
--- a/src/backend/replication/walreceiverfuncs.c
+++ b/src/backend/replication/walreceiverfuncs.c
@@ -234,8 +234,8 @@ RequestXLogStreaming(TimeLineID tli, XLogRecPtr recptr, const char *conninfo,
 	 * being created by XLOG streaming, which might cause trouble later on if
 	 * the segment is e.g archived.
 	 */
-	if (XLogSegmentOffset(recptr, wal_segment_size) != 0)
-		recptr -= XLogSegmentOffset(recptr, wal_segment_size);
+	if (XLogSegmentOffset(recptr, wal_file_size) != 0)
+		recptr -= XLogSegmentOffset(recptr, wal_file_size);
 
 	SpinLockAcquire(&walrcv->mutex);
 
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index fa1db748b5..3e4c6ac426 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -94,7 +94,7 @@
 #include "utils/timestamp.h"
 
 /*
- * Maximum data payload in a WAL data message.  Must be >= XLOG_BLCKSZ.
+ * Maximum data payload in a WAL data message.  Must be >= wal_blck_size.
  *
  * We don't have a good idea of what a good value would be; there's some
  * overhead per message in both walsender and walreceiver, but on the other
@@ -102,7 +102,7 @@
  * because signals are checked only between messages.  128kB (with
  * default 8k blocks) seems like a reasonable guess for now.
  */
-#define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)
+#define MAX_SEND_SIZE (wal_blck_size * 16)
 
 /* Array of WalSnds in shared memory */
 WalSndCtlData *WalSndCtl = NULL;
@@ -494,7 +494,7 @@ SendTimeLineHistory(TimeLineHistoryCmd *cmd)
 	bytesleft = histfilelen;
 	while (bytesleft > 0)
 	{
-		char		rbuf[BLCKSZ];
+		char		rbuf[rel_blck_size];
 		int			nread;
 
 		pgstat_report_wait_start(WAIT_EVENT_WALSENDER_TIMELINE_HISTORY_READ);
@@ -764,13 +764,13 @@ logical_read_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr, int req
 	if (flushptr < targetPagePtr + reqLen)
 		return -1;
 
-	if (targetPagePtr + XLOG_BLCKSZ <= flushptr)
-		count = XLOG_BLCKSZ;	/* more than one block available */
+	if (targetPagePtr + wal_blck_size <= flushptr)
+		count = wal_blck_size;	/* more than one block available */
 	else
 		count = flushptr - targetPagePtr;	/* part of the page available */
 
 	/* now actually read the data, we know it's there */
-	XLogRead(cur_page, targetPagePtr, XLOG_BLCKSZ);
+	XLogRead(cur_page, targetPagePtr, wal_blck_size);
 
 	return count;
 }
@@ -2316,9 +2316,9 @@ retry:
 		int			segbytes;
 		int			readbytes;
 
-		startoff = XLogSegmentOffset(recptr, wal_segment_size);
+		startoff = XLogSegmentOffset(recptr, wal_file_size);
 
-		if (sendFile < 0 || !XLByteInSeg(recptr, sendSegNo, wal_segment_size))
+		if (sendFile < 0 || !XLByteInSeg(recptr, sendSegNo, wal_file_size))
 		{
 			char		path[MAXPGPATH];
 
@@ -2326,7 +2326,7 @@ retry:
 			if (sendFile >= 0)
 				close(sendFile);
 
-			XLByteToSeg(recptr, sendSegNo, wal_segment_size);
+			XLByteToSeg(recptr, sendSegNo, wal_file_size);
 
 			/*-------
 			 * When reading from a historic timeline, and there is a timeline
@@ -2359,12 +2359,12 @@ retry:
 			{
 				XLogSegNo	endSegNo;
 
-				XLByteToSeg(sendTimeLineValidUpto, endSegNo, wal_segment_size);
+				XLByteToSeg(sendTimeLineValidUpto, endSegNo, wal_file_size);
 				if (sendSegNo == endSegNo)
 					curFileTimeLine = sendTimeLineNextTLI;
 			}
 
-			XLogFilePath(path, curFileTimeLine, sendSegNo, wal_segment_size);
+			XLogFilePath(path, curFileTimeLine, sendSegNo, wal_file_size);
 
 			sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY);
 			if (sendFile < 0)
@@ -2401,8 +2401,8 @@ retry:
 		}
 
 		/* How many bytes are within this segment? */
-		if (nbytes > (wal_segment_size - startoff))
-			segbytes = wal_segment_size - startoff;
+		if (nbytes > (wal_file_size - startoff))
+			segbytes = wal_file_size - startoff;
 		else
 			segbytes = nbytes;
 
@@ -2433,7 +2433,7 @@ retry:
 	 * read() succeeds in that case, but the data we tried to read might
 	 * already have been overwritten with new WAL records.
 	 */
-	XLByteToSeg(startptr, segno, wal_segment_size);
+	XLByteToSeg(startptr, segno, wal_file_size);
 	CheckXLogRemoved(segno, ThisTimeLineID);
 
 	/*
@@ -2672,7 +2672,7 @@ XLogSendPhysical(void)
 	else
 	{
 		/* round down to page boundary. */
-		endptr -= (endptr % XLOG_BLCKSZ);
+		endptr -= (endptr % wal_blck_size);
 		WalSndCaughtUp = false;
 	}
 
diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c
index 147fced852..eaec147411 100644
--- a/src/backend/storage/buffer/buf_init.c
+++ b/src/backend/storage/buffer/buf_init.c
@@ -80,7 +80,7 @@ InitBufferPool(void)
 
 	BufferBlocks = (char *)
 		ShmemInitStruct("Buffer Blocks",
-						NBuffers * (Size) BLCKSZ, &foundBufs);
+						NBuffers * (Size) rel_blck_size, &foundBufs);
 
 	/* Align lwlocks to cacheline boundary */
 	BufferIOLWLockArray = (LWLockMinimallyPadded *)
@@ -168,7 +168,7 @@ BufferShmemSize(void)
 	size = add_size(size, PG_CACHE_LINE_SIZE);
 
 	/* size of data pages */
-	size = add_size(size, mul_size(NBuffers, BLCKSZ));
+	size = add_size(size, mul_size(NBuffers, rel_blck_size));
 
 	/* size of stuff controlled by freelist.c */
 	size = add_size(size, StrategyShmemSize());
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 26df7cb38f..59d2e413bb 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -54,7 +54,7 @@
 
 
 /* Note: these two macros only work on shared buffers, not local ones! */
-#define BufHdrGetBlock(bufHdr)	((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * BLCKSZ))
+#define BufHdrGetBlock(bufHdr)	((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * rel_blck_size))
 #define BufferGetLSN(bufHdr)	(PageGetLSN(BufHdrGetBlock(bufHdr)))
 
 /* Note: this macro only works on local buffers, not shared ones! */
@@ -860,7 +860,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	if (isExtend)
 	{
 		/* new buffers are zero-filled */
-		MemSet((char *) bufBlock, 0, BLCKSZ);
+		MemSet((char *) bufBlock, 0, rel_blck_size);
 		/* don't set checksum for all-zero page */
 		smgrextend(smgr, forkNum, blockNum, (char *) bufBlock, false);
 
@@ -878,7 +878,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 		 * just wants us to allocate a buffer.
 		 */
 		if (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK)
-			MemSet((char *) bufBlock, 0, BLCKSZ);
+			MemSet((char *) bufBlock, 0, rel_blck_size);
 		else
 		{
 			instr_time	io_start,
@@ -907,7 +907,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 							 errmsg("invalid page in block %u of relation %s; zeroing out page",
 									blockNum,
 									relpath(smgr->smgr_rnode, forkNum))));
-					MemSet((char *) bufBlock, 0, BLCKSZ);
+					MemSet((char *) bufBlock, 0, rel_blck_size);
 				}
 				else
 					ereport(ERROR,
diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c
index f033323cff..0956729549 100644
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelist.c
@@ -557,13 +557,13 @@ GetAccessStrategy(BufferAccessStrategyType btype)
 			return NULL;
 
 		case BAS_BULKREAD:
-			ring_size = 256 * 1024 / BLCKSZ;
+			ring_size = 256 * 1024 / rel_blck_size;
 			break;
 		case BAS_BULKWRITE:
-			ring_size = 16 * 1024 * 1024 / BLCKSZ;
+			ring_size = 16 * 1024 * 1024 / rel_blck_size;
 			break;
 		case BAS_VACUUM:
-			ring_size = 256 * 1024 / BLCKSZ;
+			ring_size = 256 * 1024 / rel_blck_size;
 			break;
 
 		default:
diff --git a/src/backend/storage/buffer/localbuf.c b/src/backend/storage/buffer/localbuf.c
index 1930f0ee0b..c5f6b74eb0 100644
--- a/src/backend/storage/buffer/localbuf.c
+++ b/src/backend/storage/buffer/localbuf.c
@@ -518,16 +518,16 @@ GetLocalBufferStorage(void)
 		/* But not more than what we need for all remaining local bufs */
 		num_bufs = Min(num_bufs, NLocBuffer - total_bufs_allocated);
 		/* And don't overflow MaxAllocSize, either */
-		num_bufs = Min(num_bufs, MaxAllocSize / BLCKSZ);
+		num_bufs = Min(num_bufs, MaxAllocSize / rel_blck_size);
 
 		cur_block = (char *) MemoryContextAlloc(LocalBufferContext,
-												num_bufs * BLCKSZ);
+												num_bufs * rel_blck_size);
 		next_buf_in_block = 0;
 		num_bufs_in_block = num_bufs;
 	}
 
 	/* Allocate next buffer in current memory block */
-	this_buf = cur_block + next_buf_in_block * BLCKSZ;
+	this_buf = cur_block + next_buf_in_block * rel_blck_size;
 	next_buf_in_block++;
 	total_bufs_allocated++;
 
diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c
index 06bf2fadbf..348bb590c8 100644
--- a/src/backend/storage/file/buffile.c
+++ b/src/backend/storage/file/buffile.c
@@ -44,12 +44,12 @@
 #include "utils/resowner.h"
 
 /*
- * We break BufFiles into gigabyte-sized segments, regardless of RELSEG_SIZE.
+ * We break BufFiles into gigabyte-sized segments, regardless of rel_file_blck.
  * The reason is that we'd like large BufFiles to be spread across multiple
  * tablespaces when available.
  */
 #define MAX_PHYSICAL_FILESIZE	0x40000000
-#define BUFFILE_SEG_SIZE		(MAX_PHYSICAL_FILESIZE / BLCKSZ)
+#define BUFFILE_SEG_SIZE		(MAX_PHYSICAL_FILESIZE / rel_blck_size)
 
 /*
  * This data structure represents a buffered file that consists of one or
@@ -86,7 +86,7 @@ struct BufFile
 	off_t		curOffset;		/* offset part of current pos */
 	int			pos;			/* next read/write position in buffer */
 	int			nbytes;			/* total # of valid bytes in buffer */
-	char		buffer[BLCKSZ];
+	char*		buffer;
 };
 
 static BufFile *makeBufFile(File firstfile);
@@ -117,6 +117,7 @@ makeBufFile(File firstfile)
 	file->curOffset = 0L;
 	file->pos = 0;
 	file->nbytes = 0;
+	file->buffer = (char*) palloc(rel_blck_size);
 
 	return file;
 }
@@ -193,6 +194,7 @@ BufFileClose(BufFile *file)
 	/* release the buffer space */
 	pfree(file->files);
 	pfree(file->offsets);
+	pfree(file->buffer);
 	pfree(file);
 }
 
@@ -392,7 +394,7 @@ BufFileWrite(BufFile *file, void *ptr, size_t size)
 
 	while (size > 0)
 	{
-		if (file->pos >= BLCKSZ)
+		if (file->pos >= rel_blck_size)
 		{
 			/* Buffer full, dump it out */
 			if (file->dirty)
@@ -410,7 +412,7 @@ BufFileWrite(BufFile *file, void *ptr, size_t size)
 			}
 		}
 
-		nthistime = BLCKSZ - file->pos;
+		nthistime = rel_blck_size - file->pos;
 		if (nthistime > size)
 			nthistime = size;
 		Assert(nthistime > 0);
@@ -551,9 +553,9 @@ BufFileTell(BufFile *file, int *fileno, off_t *offset)
 /*
  * BufFileSeekBlock --- block-oriented seek
  *
- * Performs absolute seek to the start of the n'th BLCKSZ-sized block of
+ * Performs absolute seek to the start of the n'th rel_blck_size-sized block of
  * the file.  Note that users of this interface will fail if their files
- * exceed BLCKSZ * LONG_MAX bytes, but that is quite a lot; we don't work
+ * exceed rel_blck_size * LONG_MAX bytes, but that is quite a lot; we don't work
  * with tables bigger than that, either...
  *
  * Result is 0 if OK, EOF if not.  Logical position is not moved if an
@@ -564,7 +566,7 @@ BufFileSeekBlock(BufFile *file, long blknum)
 {
 	return BufFileSeek(file,
 					   (int) (blknum / BUFFILE_SEG_SIZE),
-					   (off_t) (blknum % BUFFILE_SEG_SIZE) * BLCKSZ,
+					   (off_t) (blknum % BUFFILE_SEG_SIZE) * rel_blck_size,
 					   SEEK_SET);
 }
 
@@ -579,7 +581,7 @@ BufFileTellBlock(BufFile *file)
 {
 	long		blknum;
 
-	blknum = (file->curOffset + file->pos) / BLCKSZ;
+	blknum = (file->curOffset + file->pos) / rel_blck_size;
 	blknum += file->curFile * BUFFILE_SEG_SIZE;
 	return blknum;
 }
diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c
index eae9f5a1f2..b64f14768e 100644
--- a/src/backend/storage/file/copydir.c
+++ b/src/backend/storage/file/copydir.c
@@ -142,7 +142,7 @@ copy_file(char *fromfile, char *tofile)
 	off_t		flush_offset;
 
 	/* Size of copy buffer (read and write requests) */
-#define COPY_BUF_SIZE (8 * BLCKSZ)
+#define COPY_BUF_SIZE (8 * rel_blck_size)
 
 	/*
 	 * Size of data flush requests.  It seems beneficial on most platforms to
diff --git a/src/backend/storage/freespace/README b/src/backend/storage/freespace/README
index bbd1b93fac..499c648c9a 100644
--- a/src/backend/storage/freespace/README
+++ b/src/backend/storage/freespace/README
@@ -14,8 +14,8 @@ It is important to keep the map small so that it can be searched rapidly.
 Therefore, we don't attempt to record the exact free space on a page.
 We allocate one map byte to each page, allowing us to record free space
 at a granularity of 1/256th of a page.  Another way to say it is that
-the stored value is the free space divided by BLCKSZ/256 (rounding down).
-We assume that the free space must always be less than BLCKSZ, since
+the stored value is the free space divided by rel_blck_size/256 (rounding down).
+We assume that the free space must always be less than rel_blck_size, since
 all pages have some overhead; so the maximum map value is 255.
 
 To assist in fast searching, the map isn't simply an array of per-page
@@ -97,7 +97,7 @@ has the same value as the corresponding leaf node on its parent page.
 The root page is always stored at physical block 0.
 
 For example, assuming each FSM page can hold information about 4 pages (in
-reality, it holds (BLCKSZ - headers) / 2, or ~4000 with default BLCKSZ),
+reality, it holds (rel_blck_size - headers) / 2, or ~4000 with default rel_blck_size),
 we get a disk layout like this:
 
  0     <-- page 0 at level 2 (root page)
@@ -136,7 +136,7 @@ and so forth.
 
 To keep things simple, the tree is always constant height. To cover the
 maximum relation size of 2^32-1 blocks, three levels is enough with the default
-BLCKSZ (4000^3 > 2^32).
+rel_blck_size (4000^3 > 2^32).
 
 Addressing
 ----------
diff --git a/src/backend/storage/freespace/freespace.c b/src/backend/storage/freespace/freespace.c
index 4648473523..49f883f9e4 100644
--- a/src/backend/storage/freespace/freespace.c
+++ b/src/backend/storage/freespace/freespace.c
@@ -40,8 +40,8 @@
  * represents the range from 254 * FSM_CAT_STEP, inclusive, to
  * MaxFSMRequestSize, exclusive.
  *
- * MaxFSMRequestSize depends on the architecture and BLCKSZ, but assuming
- * default 8k BLCKSZ, and that MaxFSMRequestSize is 8164 bytes, the
+ * MaxFSMRequestSize depends on the architecture and rel_blck_size, but assuming
+ * default 8k rel_blck_size, and that MaxFSMRequestSize is 8164 bytes, the
  * categories look like this:
  *
  *
@@ -60,21 +60,20 @@
  * completely empty page, that would mean that we could never satisfy a
  * request of exactly MaxFSMRequestSize bytes.
  */
-#define FSM_CATEGORIES	256
-#define FSM_CAT_STEP	(BLCKSZ / FSM_CATEGORIES)
+#define FSM_CATEGORIES		256
+#define FSM_CAT_STEP		(rel_blck_size / FSM_CATEGORIES)
 #define MaxFSMRequestSize	MaxHeapTupleSize
 
 /*
  * Depth of the on-disk tree. We need to be able to address 2^32-1 blocks,
  * and 1626 is the smallest number that satisfies X^3 >= 2^32-1. Likewise,
  * 216 is the smallest number that satisfies X^4 >= 2^32-1. In practice,
- * this means that 4096 bytes is the smallest BLCKSZ that we can get away
+ * this means that 4096 bytes is the smallest rel_blck_size that we can get away
  * with a 3-level tree, and 512 is the smallest we support.
  */
-#define FSM_TREE_DEPTH	((SlotsPerFSMPage >= 1626) ? 3 : 4)
-
-#define FSM_ROOT_LEVEL	(FSM_TREE_DEPTH - 1)
-#define FSM_BOTTOM_LEVEL 0
+#define FSM_TREE_DEPTH		((SlotsPerFSMPage >= 1626) ? 3 : 4)
+#define FSM_ROOT_LEVEL		(FSM_TREE_DEPTH - 1)
+#define FSM_BOTTOM_LEVEL	0
 
 /*
  * The internal FSM routines work on a logical addressing scheme. Each
@@ -87,7 +86,7 @@ typedef struct
 } FSMAddress;
 
 /* Address of the root page. */
-static const FSMAddress FSM_ROOT_ADDRESS = {FSM_ROOT_LEVEL, 0};
+static FSMAddress FSM_ROOT_ADDRESS;
 
 /* functions to navigate the tree */
 static FSMAddress fsm_get_child(FSMAddress parent, uint16 slot);
@@ -255,7 +254,7 @@ XLogRecordPageWithFreeSpace(RelFileNode rnode, BlockNumber heapBlk,
 
 	page = BufferGetPage(buf);
 	if (PageIsNew(page))
-		PageInit(page, BLCKSZ, 0);
+		PageInit(page, rel_blck_size, 0);
 
 	if (fsm_set_avail(page, slot, new_cat))
 		MarkBufferDirtyHint(buf, false);
@@ -397,7 +396,7 @@ fsm_space_avail_to_cat(Size avail)
 {
 	int			cat;
 
-	Assert(avail < BLCKSZ);
+	Assert(avail < rel_blck_size);
 
 	if (avail >= MaxFSMRequestSize)
 		return 255;
@@ -596,7 +595,7 @@ fsm_readbuf(Relation rel, FSMAddress addr, bool extend)
 	 */
 	buf = ReadBufferExtended(rel, FSM_FORKNUM, blkno, RBM_ZERO_ON_ERROR, NULL);
 	if (PageIsNew(BufferGetPage(buf)))
-		PageInit(BufferGetPage(buf), BLCKSZ, 0);
+		PageInit(BufferGetPage(buf), rel_blck_size, 0);
 	return buf;
 }
 
@@ -611,8 +610,8 @@ fsm_extend(Relation rel, BlockNumber fsm_nblocks)
 	BlockNumber fsm_nblocks_now;
 	Page		pg;
 
-	pg = (Page) palloc(BLCKSZ);
-	PageInit(pg, BLCKSZ, 0);
+	pg = (Page) palloc(rel_blck_size);
+	PageInit(pg, rel_blck_size, 0);
 
 	/*
 	 * We use the relation extension lock to lock out other backends trying to
@@ -887,3 +886,10 @@ fsm_update_recursive(Relation rel, FSMAddress addr, uint8 new_cat)
 	fsm_set_and_search(rel, parent, parentslot, new_cat, 0);
 	fsm_update_recursive(rel, parent, new_cat);
 }
+
+void
+fsm_init(void)
+{
+	 FSM_ROOT_ADDRESS.level = FSM_ROOT_LEVEL;
+	 FSM_ROOT_ADDRESS.logpageno = 0;
+}
diff --git a/src/backend/storage/freespace/indexfsm.c b/src/backend/storage/freespace/indexfsm.c
index 5cfbd4c867..03f8c3adca 100644
--- a/src/backend/storage/freespace/indexfsm.c
+++ b/src/backend/storage/freespace/indexfsm.c
@@ -16,7 +16,7 @@
  *	This is similar to the FSM used for heap, in freespace.c, but instead
  *	of tracking the amount of free space on pages, we only track whether
  *	pages are completely free or in-use. We use the same FSM implementation
- *	as for heaps, using BLCKSZ - 1 to denote used pages, and 0 for unused.
+ *	as for heaps, using rel_blck_size - 1 to denote used pages, and 0 for unused.
  *
  *-------------------------------------------------------------------------
  */
@@ -24,6 +24,7 @@
 
 #include "storage/freespace.h"
 #include "storage/indexfsm.h"
+#include "storage/md.h"
 
 /*
  * Exported routines
@@ -37,7 +38,7 @@
 BlockNumber
 GetFreeIndexPage(Relation rel)
 {
-	BlockNumber blkno = GetPageWithFreeSpace(rel, BLCKSZ / 2);
+	BlockNumber blkno = GetPageWithFreeSpace(rel, rel_blck_size / 2);
 
 	if (blkno != InvalidBlockNumber)
 		RecordUsedIndexPage(rel, blkno);
@@ -51,7 +52,7 @@ GetFreeIndexPage(Relation rel)
 void
 RecordFreeIndexPage(Relation rel, BlockNumber freeBlock)
 {
-	RecordPageWithFreeSpace(rel, freeBlock, BLCKSZ - 1);
+	RecordPageWithFreeSpace(rel, freeBlock, rel_blck_size - 1);
 }
 
 
diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c
index 251a359bff..2ffb7974a3 100644
--- a/src/backend/storage/lmgr/predicate.c
+++ b/src/backend/storage/lmgr/predicate.c
@@ -313,7 +313,7 @@ static SlruCtlData OldSerXidSlruCtlData;
 
 #define OldSerXidSlruCtl			(&OldSerXidSlruCtlData)
 
-#define OLDSERXID_PAGESIZE			BLCKSZ
+#define OLDSERXID_PAGESIZE			rel_blck_size
 #define OLDSERXID_ENTRYSIZE			sizeof(SerCommitSeqNo)
 #define OLDSERXID_ENTRIESPERPAGE	(OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
 
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index b6aa2af818..a09041480d 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -44,7 +44,7 @@ PageInit(Page page, Size pageSize, Size specialSize)
 
 	specialSize = MAXALIGN(specialSize);
 
-	Assert(pageSize == BLCKSZ);
+	Assert(pageSize == rel_blck_size);
 	Assert(pageSize > specialSize + SizeOfPageHeaderData);
 
 	/* Make sure all fields of page are zero, as well as unused space */
@@ -110,7 +110,7 @@ PageIsVerified(Page page, BlockNumber blkno)
 		if ((p->pd_flags & ~PD_VALID_FLAG_BITS) == 0 &&
 			p->pd_lower <= p->pd_upper &&
 			p->pd_upper <= p->pd_special &&
-			p->pd_special <= BLCKSZ &&
+			p->pd_special <= rel_blck_size &&
 			p->pd_special == MAXALIGN(p->pd_special))
 			header_sane = true;
 
@@ -119,16 +119,15 @@ PageIsVerified(Page page, BlockNumber blkno)
 	}
 
 	/*
-	 * Check all-zeroes case. Luckily BLCKSZ is guaranteed to always be a
+	 * Check all-zeroes case. Luckily rel_blck_size is guaranteed to always be a
 	 * multiple of size_t - and it's much faster to compare memory using the
 	 * native word size.
 	 */
-	StaticAssertStmt(BLCKSZ == (BLCKSZ / sizeof(size_t)) * sizeof(size_t),
-					 "BLCKSZ has to be a multiple of sizeof(size_t)");
+	Assert(rel_blck_size == (rel_blck_size / sizeof(size_t)) * sizeof(size_t));
 
 	all_zeroes = true;
 	pagebytes = (size_t *) page;
-	for (i = 0; i < (BLCKSZ / sizeof(size_t)); i++)
+	for (i = 0; i < (rel_blck_size / sizeof(size_t)); i++)
 	{
 		if (pagebytes[i] != 0)
 		{
@@ -207,7 +206,7 @@ PageAddItemExtended(Page page,
 	if (phdr->pd_lower < SizeOfPageHeaderData ||
 		phdr->pd_lower > phdr->pd_upper ||
 		phdr->pd_upper > phdr->pd_special ||
-		phdr->pd_special > BLCKSZ)
+		phdr->pd_special > rel_blck_size)
 		ereport(PANIC,
 				(errcode(ERRCODE_DATA_CORRUPTED),
 				 errmsg("corrupted page pointers: lower = %u, upper = %u, special = %u",
@@ -500,7 +499,7 @@ PageRepairFragmentation(Page page)
 	if (pd_lower < SizeOfPageHeaderData ||
 		pd_lower > pd_upper ||
 		pd_upper > pd_special ||
-		pd_special > BLCKSZ ||
+		pd_special > rel_blck_size ||
 		pd_special != MAXALIGN(pd_special))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
@@ -737,7 +736,7 @@ PageIndexTupleDelete(Page page, OffsetNumber offnum)
 	if (phdr->pd_lower < SizeOfPageHeaderData ||
 		phdr->pd_lower > phdr->pd_upper ||
 		phdr->pd_upper > phdr->pd_special ||
-		phdr->pd_special > BLCKSZ ||
+		phdr->pd_special > rel_blck_size ||
 		phdr->pd_special != MAXALIGN(phdr->pd_special))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
@@ -870,7 +869,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
 	if (pd_lower < SizeOfPageHeaderData ||
 		pd_lower > pd_upper ||
 		pd_upper > pd_special ||
-		pd_special > BLCKSZ ||
+		pd_special > rel_blck_size ||
 		pd_special != MAXALIGN(pd_special))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
@@ -966,7 +965,7 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)
 	if (phdr->pd_lower < SizeOfPageHeaderData ||
 		phdr->pd_lower > phdr->pd_upper ||
 		phdr->pd_upper > phdr->pd_special ||
-		phdr->pd_special > BLCKSZ ||
+		phdr->pd_special > rel_blck_size ||
 		phdr->pd_special != MAXALIGN(phdr->pd_special))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
@@ -1076,7 +1075,7 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	if (phdr->pd_lower < SizeOfPageHeaderData ||
 		phdr->pd_lower > phdr->pd_upper ||
 		phdr->pd_upper > phdr->pd_special ||
-		phdr->pd_special > BLCKSZ ||
+		phdr->pd_special > rel_blck_size ||
 		phdr->pd_special != MAXALIGN(phdr->pd_special))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
@@ -1178,9 +1177,9 @@ PageSetChecksumCopy(Page page, BlockNumber blkno)
 	 * and second to avoid wasting space in processes that never call this.
 	 */
 	if (pageCopy == NULL)
-		pageCopy = MemoryContextAlloc(TopMemoryContext, BLCKSZ);
+		pageCopy = MemoryContextAlloc(TopMemoryContext, rel_blck_size);
 
-	memcpy(pageCopy, (char *) page, BLCKSZ);
+	memcpy(pageCopy, (char *) page, rel_blck_size);
 	((PageHeader) pageCopy)->pd_checksum = pg_checksum_page(pageCopy, blkno);
 	return pageCopy;
 }
diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
index 64a4ccf0db..5488ed87cb 100644
--- a/src/backend/storage/smgr/md.c
+++ b/src/backend/storage/smgr/md.c
@@ -74,13 +74,13 @@
  *	easier to support relations that are larger than the operating
  *	system's file size limit (often 2GBytes).  In order to do that,
  *	we break relations up into "segment" files that are each shorter than
- *	the OS file size limit.  The segment size is set by the RELSEG_SIZE
+ *	the OS file size limit.  The segment size is set by the rel_file_blck
  *	configuration constant in pg_config.h.
  *
  *	On disk, a relation must consist of consecutively numbered segment
  *	files in the pattern
- *		-- Zero or more full segments of exactly RELSEG_SIZE blocks each
- *		-- Exactly one partial segment of size 0 <= size < RELSEG_SIZE blocks
+ *		-- Zero or more full segments of exactly rel_file_blck blocks each
+ *		-- Exactly one partial segment of size 0 <= size < rel_file_blck blocks
  *		-- Optionally, any number of inactive segments of size 0 blocks.
  *	The full and partial segments are collectively the "active" segments.
  *	Inactive segments are those that once contained data but are currently
@@ -171,7 +171,7 @@ static CycleCtr mdckpt_cycle_ctr = 0;
 #define EXTENSION_CREATE_RECOVERY	(1 << 3)
 /*
  * Allow opening segments which are preceded by segments smaller than
- * RELSEG_SIZE, e.g. inactive segments (see above). Note that this is breaks
+ * rel_file_blck, e.g. inactive segments (see above). Note that this is breaks
  * mdnblocks() and related functionality henceforth - which currently is ok,
  * because this is only required in the checkpointer which never uses
  * mdnblocks().
@@ -518,9 +518,9 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 
 	v = _mdfd_getseg(reln, forknum, blocknum, skipFsync, EXTENSION_CREATE);
 
-	seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
+	seekpos = (off_t) rel_blck_size * (blocknum % ((BlockNumber) rel_file_blck));
 
-	Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
+	Assert(seekpos < (off_t) rel_blck_size * rel_file_blck);
 
 	/*
 	 * Note: because caller usually obtained blocknum by calling mdnblocks,
@@ -537,7 +537,7 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 				 errmsg("could not seek to block %u in file \"%s\": %m",
 						blocknum, FilePathName(v->mdfd_vfd))));
 
-	if ((nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ, WAIT_EVENT_DATA_FILE_EXTEND)) != BLCKSZ)
+	if ((nbytes = FileWrite(v->mdfd_vfd, buffer, rel_blck_size, WAIT_EVENT_DATA_FILE_EXTEND)) != rel_blck_size)
 	{
 		if (nbytes < 0)
 			ereport(ERROR,
@@ -550,14 +550,14 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 				(errcode(ERRCODE_DISK_FULL),
 				 errmsg("could not extend file \"%s\": wrote only %d of %d bytes at block %u",
 						FilePathName(v->mdfd_vfd),
-						nbytes, BLCKSZ, blocknum),
+						nbytes, rel_blck_size, blocknum),
 				 errhint("Check free disk space.")));
 	}
 
 	if (!skipFsync && !SmgrIsTemp(reln))
 		register_dirty_segment(reln, forknum, v);
 
-	Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) RELSEG_SIZE));
+	Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) rel_file_blck));
 }
 
 /*
@@ -616,7 +616,7 @@ mdopen(SMgrRelation reln, ForkNumber forknum, int behavior)
 	mdfd->mdfd_vfd = fd;
 	mdfd->mdfd_segno = 0;
 
-	Assert(_mdnblocks(reln, forknum, mdfd) <= ((BlockNumber) RELSEG_SIZE));
+	Assert(_mdnblocks(reln, forknum, mdfd) <= ((BlockNumber) rel_file_blck));
 
 	return mdfd;
 }
@@ -664,11 +664,11 @@ mdprefetch(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum)
 
 	v = _mdfd_getseg(reln, forknum, blocknum, false, EXTENSION_FAIL);
 
-	seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
+	seekpos = (off_t) rel_blck_size * (blocknum % ((BlockNumber) rel_file_blck));
 
-	Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
+	Assert(seekpos < (off_t) rel_blck_size * rel_file_blck);
 
-	(void) FilePrefetch(v->mdfd_vfd, seekpos, BLCKSZ, WAIT_EVENT_DATA_FILE_PREFETCH);
+	(void) FilePrefetch(v->mdfd_vfd, seekpos, rel_blck_size, WAIT_EVENT_DATA_FILE_PREFETCH);
 #endif							/* USE_PREFETCH */
 }
 
@@ -705,19 +705,19 @@ mdwriteback(SMgrRelation reln, ForkNumber forknum,
 			return;
 
 		/* compute offset inside the current segment */
-		segnum_start = blocknum / RELSEG_SIZE;
+		segnum_start = blocknum / rel_file_blck;
 
 		/* compute number of desired writes within the current segment */
-		segnum_end = (blocknum + nblocks - 1) / RELSEG_SIZE;
+		segnum_end = (blocknum + nblocks - 1) / rel_file_blck;
 		if (segnum_start != segnum_end)
-			nflush = RELSEG_SIZE - (blocknum % ((BlockNumber) RELSEG_SIZE));
+			nflush = rel_file_blck - (blocknum % ((BlockNumber) rel_file_blck));
 
 		Assert(nflush >= 1);
 		Assert(nflush <= nblocks);
 
-		seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
+		seekpos = (off_t) rel_blck_size * (blocknum % ((BlockNumber) rel_file_blck));
 
-		FileWriteback(v->mdfd_vfd, seekpos, (off_t) BLCKSZ * nflush, WAIT_EVENT_DATA_FILE_FLUSH);
+		FileWriteback(v->mdfd_vfd, seekpos, (off_t) rel_blck_size * nflush, WAIT_EVENT_DATA_FILE_FLUSH);
 
 		nblocks -= nflush;
 		blocknum += nflush;
@@ -744,9 +744,9 @@ mdread(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 	v = _mdfd_getseg(reln, forknum, blocknum, false,
 					 EXTENSION_FAIL | EXTENSION_CREATE_RECOVERY);
 
-	seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
+	seekpos = (off_t) rel_blck_size * (blocknum % ((BlockNumber) rel_file_blck));
 
-	Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
+	Assert(seekpos < (off_t) rel_blck_size * rel_file_blck);
 
 	if (FileSeek(v->mdfd_vfd, seekpos, SEEK_SET) != seekpos)
 		ereport(ERROR,
@@ -754,7 +754,7 @@ mdread(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 				 errmsg("could not seek to block %u in file \"%s\": %m",
 						blocknum, FilePathName(v->mdfd_vfd))));
 
-	nbytes = FileRead(v->mdfd_vfd, buffer, BLCKSZ, WAIT_EVENT_DATA_FILE_READ);
+	nbytes = FileRead(v->mdfd_vfd, buffer, rel_blck_size, WAIT_EVENT_DATA_FILE_READ);
 
 	TRACE_POSTGRESQL_SMGR_MD_READ_DONE(forknum, blocknum,
 									   reln->smgr_rnode.node.spcNode,
@@ -762,9 +762,9 @@ mdread(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 									   reln->smgr_rnode.node.relNode,
 									   reln->smgr_rnode.backend,
 									   nbytes,
-									   BLCKSZ);
+									   rel_blck_size);
 
-	if (nbytes != BLCKSZ)
+	if (nbytes != rel_blck_size)
 	{
 		if (nbytes < 0)
 			ereport(ERROR,
@@ -781,13 +781,13 @@ mdread(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 		 * update a block that was later truncated away.
 		 */
 		if (zero_damaged_pages || InRecovery)
-			MemSet(buffer, 0, BLCKSZ);
+			MemSet(buffer, 0, rel_blck_size);
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_DATA_CORRUPTED),
 					 errmsg("could not read block %u in file \"%s\": read only %d of %d bytes",
 							blocknum, FilePathName(v->mdfd_vfd),
-							nbytes, BLCKSZ)));
+							nbytes, rel_blck_size)));
 	}
 }
 
@@ -820,9 +820,9 @@ mdwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 	v = _mdfd_getseg(reln, forknum, blocknum, skipFsync,
 					 EXTENSION_FAIL | EXTENSION_CREATE_RECOVERY);
 
-	seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
+	seekpos = (off_t) rel_blck_size * (blocknum % ((BlockNumber) rel_file_blck));
 
-	Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
+	Assert(seekpos < (off_t) rel_blck_size * rel_file_blck);
 
 	if (FileSeek(v->mdfd_vfd, seekpos, SEEK_SET) != seekpos)
 		ereport(ERROR,
@@ -830,7 +830,7 @@ mdwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 				 errmsg("could not seek to block %u in file \"%s\": %m",
 						blocknum, FilePathName(v->mdfd_vfd))));
 
-	nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ, WAIT_EVENT_DATA_FILE_WRITE);
+	nbytes = FileWrite(v->mdfd_vfd, buffer, rel_blck_size, WAIT_EVENT_DATA_FILE_WRITE);
 
 	TRACE_POSTGRESQL_SMGR_MD_WRITE_DONE(forknum, blocknum,
 										reln->smgr_rnode.node.spcNode,
@@ -838,9 +838,9 @@ mdwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 										reln->smgr_rnode.node.relNode,
 										reln->smgr_rnode.backend,
 										nbytes,
-										BLCKSZ);
+										rel_blck_size);
 
-	if (nbytes != BLCKSZ)
+	if (nbytes != rel_blck_size)
 	{
 		if (nbytes < 0)
 			ereport(ERROR,
@@ -853,7 +853,7 @@ mdwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
 				 errmsg("could not write block %u in file \"%s\": wrote only %d of %d bytes",
 						blocknum,
 						FilePathName(v->mdfd_vfd),
-						nbytes, BLCKSZ),
+						nbytes, rel_blck_size),
 				 errhint("Check free disk space.")));
 	}
 
@@ -881,7 +881,7 @@ mdnblocks(SMgrRelation reln, ForkNumber forknum)
 
 	/*
 	 * Start from the last open segments, to avoid redundant seeks.  We have
-	 * previously verified that these segments are exactly RELSEG_SIZE long,
+	 * previously verified that these segments are exactly rel_file_blck long,
 	 * and it's useless to recheck that each time.
 	 *
 	 * NOTE: this assumption could only be wrong if another backend has
@@ -898,13 +898,13 @@ mdnblocks(SMgrRelation reln, ForkNumber forknum)
 	for (;;)
 	{
 		nblocks = _mdnblocks(reln, forknum, v);
-		if (nblocks > ((BlockNumber) RELSEG_SIZE))
+		if (nblocks > ((BlockNumber) rel_file_blck))
 			elog(FATAL, "segment too big");
-		if (nblocks < ((BlockNumber) RELSEG_SIZE))
-			return (segno * ((BlockNumber) RELSEG_SIZE)) + nblocks;
+		if (nblocks < ((BlockNumber) rel_file_blck))
+			return (segno * ((BlockNumber) rel_file_blck)) + nblocks;
 
 		/*
-		 * If segment is exactly RELSEG_SIZE, advance to next one.
+		 * If segment is exactly rel_file_blck, advance to next one.
 		 */
 		segno++;
 
@@ -917,7 +917,7 @@ mdnblocks(SMgrRelation reln, ForkNumber forknum)
 		 */
 		v = _mdfd_openseg(reln, forknum, segno, 0);
 		if (v == NULL)
-			return segno * ((BlockNumber) RELSEG_SIZE);
+			return segno * ((BlockNumber) rel_file_blck);
 	}
 }
 
@@ -958,7 +958,7 @@ mdtruncate(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks)
 	{
 		MdfdVec    *v;
 
-		priorblocks = (curopensegs - 1) * RELSEG_SIZE;
+		priorblocks = (curopensegs - 1) * rel_file_blck;
 
 		v = &reln->md_seg_fds[forknum][curopensegs - 1];
 
@@ -983,18 +983,18 @@ mdtruncate(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks)
 			FileClose(v->mdfd_vfd);
 			_fdvec_resize(reln, forknum, curopensegs - 1);
 		}
-		else if (priorblocks + ((BlockNumber) RELSEG_SIZE) > nblocks)
+		else if (priorblocks + ((BlockNumber) rel_file_blck) > nblocks)
 		{
 			/*
 			 * This is the last segment we want to keep. Truncate the file to
 			 * the right length. NOTE: if nblocks is exactly a multiple K of
-			 * RELSEG_SIZE, we will truncate the K+1st segment to 0 length but
+			 * rel_file_blck, we will truncate the K+1st segment to 0 length but
 			 * keep it. This adheres to the invariant given in the header
 			 * comments.
 			 */
 			BlockNumber lastsegblocks = nblocks - priorblocks;
 
-			if (FileTruncate(v->mdfd_vfd, (off_t) lastsegblocks * BLCKSZ, WAIT_EVENT_DATA_FILE_TRUNCATE) < 0)
+			if (FileTruncate(v->mdfd_vfd, (off_t) lastsegblocks * rel_blck_size, WAIT_EVENT_DATA_FILE_TRUNCATE) < 0)
 				ereport(ERROR,
 						(errcode_for_file_access(),
 						 errmsg("could not truncate file \"%s\" to %u blocks: %m",
@@ -1225,7 +1225,7 @@ mdsync(void)
 
 					/* Attempt to open and fsync the target segment */
 					seg = _mdfd_getseg(reln, forknum,
-									   (BlockNumber) segno * (BlockNumber) RELSEG_SIZE,
+									   (BlockNumber) segno * (BlockNumber) rel_file_blck,
 									   false,
 									   EXTENSION_RETURN_NULL
 									   | EXTENSION_DONT_CHECK_SIZE);
@@ -1795,7 +1795,7 @@ _mdfd_openseg(SMgrRelation reln, ForkNumber forknum, BlockNumber segno,
 	v->mdfd_vfd = fd;
 	v->mdfd_segno = segno;
 
-	Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) RELSEG_SIZE));
+	Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) rel_file_blck));
 
 	/* all done */
 	return v;
@@ -1821,7 +1821,7 @@ _mdfd_getseg(SMgrRelation reln, ForkNumber forknum, BlockNumber blkno,
 	Assert(behavior &
 		   (EXTENSION_FAIL | EXTENSION_CREATE | EXTENSION_RETURN_NULL));
 
-	targetseg = blkno / ((BlockNumber) RELSEG_SIZE);
+	targetseg = blkno / ((BlockNumber) rel_file_blck);
 
 	/* if an existing and opened segment, we're done */
 	if (targetseg < reln->md_num_open_segs[forknum])
@@ -1854,7 +1854,7 @@ _mdfd_getseg(SMgrRelation reln, ForkNumber forknum, BlockNumber blkno,
 
 		Assert(nextsegno == v->mdfd_segno + 1);
 
-		if (nblocks > ((BlockNumber) RELSEG_SIZE))
+		if (nblocks > ((BlockNumber) rel_file_blck))
 			elog(FATAL, "segment too big");
 
 		if ((behavior & EXTENSION_CREATE) ||
@@ -1872,29 +1872,29 @@ _mdfd_getseg(SMgrRelation reln, ForkNumber forknum, BlockNumber blkno,
 			 * even in recovery; we won't reach this point in that case.
 			 *
 			 * We have to maintain the invariant that segments before the last
-			 * active segment are of size RELSEG_SIZE; therefore, if
+			 * active segment are of size rel_file_blck; therefore, if
 			 * extending, pad them out with zeroes if needed.  (This only
 			 * matters if in recovery, or if the caller is extending the
 			 * relation discontiguously, but that can happen in hash indexes.)
 			 */
-			if (nblocks < ((BlockNumber) RELSEG_SIZE))
+			if (nblocks < ((BlockNumber) rel_file_blck))
 			{
-				char	   *zerobuf = palloc0(BLCKSZ);
+				char	   *zerobuf = palloc0(rel_blck_size);
 
 				mdextend(reln, forknum,
-						 nextsegno * ((BlockNumber) RELSEG_SIZE) - 1,
+						 nextsegno * ((BlockNumber) rel_file_blck) - 1,
 						 zerobuf, skipFsync);
 				pfree(zerobuf);
 			}
 			flags = O_CREAT;
 		}
 		else if (!(behavior & EXTENSION_DONT_CHECK_SIZE) &&
-				 nblocks < ((BlockNumber) RELSEG_SIZE))
+				 nblocks < ((BlockNumber) rel_file_blck))
 		{
 			/*
 			 * When not extending (or explicitly including truncated
 			 * segments), only open the next segment if the current one is
-			 * exactly RELSEG_SIZE.  If not (this branch), either return NULL
+			 * exactly rel_file_blck.  If not (this branch), either return NULL
 			 * or fail.
 			 */
 			if (behavior & EXTENSION_RETURN_NULL)
@@ -1949,5 +1949,5 @@ _mdnblocks(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)
 				 errmsg("could not seek to end of file \"%s\": %m",
 						FilePathName(seg->mdfd_vfd))));
 	/* note that this calculation will ignore any partial block at EOF */
-	return (BlockNumber) (len / BLCKSZ);
+	return (BlockNumber) (len / rel_blck_size);
 }
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 05c5c194ec..238a10db58 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -218,7 +218,7 @@ InteractiveBackend(StringInfo inBuf)
 	/*
 	 * display a prompt and obtain input from the user
 	 */
-	printf("backend> ");
+	printf("\nbackend> ");
 	fflush(stdout);
 
 	resetStringInfo(inBuf);
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index edff6da410..d6e63f29f3 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -7883,7 +7883,7 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
 	 * around 3 bytes per item is fairly typical.
 	 */
 	dataPagesFetchedBySel = ceil(*indexSelectivity *
-								 (numTuples / (BLCKSZ / 3)));
+								 (numTuples / (rel_blck_size / 3)));
 	if (dataPagesFetchedBySel > dataPagesFetched)
 		dataPagesFetched = dataPagesFetchedBySel;
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 9680a4b0f7..e9c4ed88de 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -22,6 +22,7 @@
 #include "libpq/pqcomm.h"
 #include "miscadmin.h"
 #include "storage/backendid.h"
+#include "pg_control_def.h"
 
 
 ProtocolVersion FrontendProtocol;
@@ -137,3 +138,22 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+/*
+ * Below values are extracted from control file.
+ * This avoid requiring a different binary for each possible value of relation and wal block size and file size.
+ */
+
+/*
+ * Relation parameters
+ */
+unsigned int rel_blck_size = REL_BLCK_SIZE_DEF;		/* in bytes, default 8KB */
+unsigned int rel_file_blck = REL_FILE_BLCK_DEF;		/* in blocks, default 131072 */
+unsigned long rel_file_size = REL_FILE_SIZE_DEF;	/* in bytes, max 4GB */
+
+/*
+ * WAL parameters
+ */
+unsigned int wal_blck_size = WAL_BLCK_SIZE_DEF;		/* in bytes, default 8KB */
+unsigned int wal_file_blck = WAL_FILE_BLCK_DEF;		/* in blocks, default 16MB */
+unsigned long wal_file_size = WAL_FILE_SIZE_DEF;	/* in bytes, max 4GB */
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 544fed8096..a29faa1669 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -1162,8 +1162,8 @@ AddToDataDirLockFile(int target_line, const char *str)
 	int			lineno;
 	char	   *srcptr;
 	char	   *destptr;
-	char		srcbuffer[BLCKSZ];
-	char		destbuffer[BLCKSZ];
+	char		srcbuffer[rel_blck_size];
+	char		destbuffer[rel_blck_size];
 
 	fd = open(DIRECTORY_LOCK_FILE, O_RDWR | PG_BINARY, 0);
 	if (fd < 0)
@@ -1288,7 +1288,7 @@ RecheckDataDirLockFile(void)
 	int			fd;
 	int			len;
 	long		file_pid;
-	char		buffer[BLCKSZ];
+	char		buffer[rel_blck_size];
 
 	fd = open(DIRECTORY_LOCK_FILE, O_RDWR | PG_BINARY, 0);
 	if (fd < 0)
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 20f1d279e9..6d6c1edaa9 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -49,6 +49,8 @@
 #include "storage/proc.h"
 #include "storage/sinvaladt.h"
 #include "storage/smgr.h"
+#include "storage/freespace.h"
+#include "storage/fsm_internals.h"
 #include "tcop/tcopprot.h"
 #include "utils/acl.h"
 #include "utils/fmgroids.h"
@@ -62,6 +64,12 @@
 #include "utils/timeout.h"
 #include "utils/tqual.h"
 
+#define DEBUG_POSTINIT                   0
+
+#define debug_postinit(format, ...)      \
+       if (DEBUG_POSTINIT)               \
+               fprintf(stderr, "postinit --> " format, ##__VA_ARGS__);
+
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
@@ -527,6 +535,7 @@ BaseInit(void)
 	/* Do local initialization of file, storage and buffer managers */
 	InitFileAccess();
 	smgrinit();
+	fsm_init();
 	InitBufferPoolAccess();
 }
 
@@ -571,6 +580,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 *
 	 * Once I have done this, I am visible to other backends!
 	 */
+	debug_postinit("InitProcessPhase2\n");
 	InitProcessPhase2();
 
 	/*
@@ -581,12 +591,14 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 */
 	MyBackendId = InvalidBackendId;
 
+	debug_postinit("SharedInvalBackendInit\n");
 	SharedInvalBackendInit(false);
 
 	if (MyBackendId > MaxBackends || MyBackendId <= 0)
 		elog(FATAL, "bad backend ID: %d", MyBackendId);
 
 	/* Now that we have a BackendId, we can participate in ProcSignal */
+	debug_postinit("ProcSignalInit\n");
 	ProcSignalInit(MyBackendId);
 
 	/*
@@ -605,6 +617,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	/*
 	 * bufmgr needs another initialization call too
 	 */
+	debug_postinit("InitBufferPoolBackend\n");
 	InitBufferPoolBackend();
 
 	/*
@@ -627,6 +640,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 		 * way, start up the XLOG machinery, and register to have it closed
 		 * down at exit.
 		 */
+		debug_postinit("StartupXLOG\n");
 		StartupXLOG();
 		on_shmem_exit(ShutdownXLOG, 0);
 	}
@@ -642,6 +656,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	InitPlanCache();
 
 	/* Initialize portal manager */
+	debug_postinit("EnablePortalManager\n");
 	EnablePortalManager();
 
 	/* Initialize stats collection --- must happen before first xact */
@@ -652,6 +667,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 * Load relcache entries for the shared system catalogs.  This must create
 	 * at least entries for pg_database and catalogs used for authentication.
 	 */
+	debug_postinit("RelationCacheInitializePhase2\n");
 	RelationCacheInitializePhase2();
 
 	/*
@@ -707,6 +723,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 */
 	if (bootstrap || IsAutoVacuumWorkerProcess())
 	{
+		debug_postinit("InitializeSessionUserIdStandalone\n");
 		InitializeSessionUserIdStandalone();
 		am_superuser = true;
 	}
@@ -832,6 +849,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 */
 	if (bootstrap)
 	{
+		debug_postinit("Set MyDatabaseId and MyDatabaseTableSpace\n");
 		MyDatabaseId = TemplateDbOid;
 		MyDatabaseTableSpace = DEFAULTTABLESPACE_OID;
 	}
@@ -980,6 +998,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 		ValidatePgVersion(fullpath);
 	}
 
+	debug_postinit("SetDatabasePath\n");
 	SetDatabasePath(fullpath);
 
 	/*
@@ -988,6 +1007,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 * Load relcache entries for the system catalogs.  This must create at
 	 * least the minimum set of "nailed-in" cache entries.
 	 */
+	debug_postinit("RelationCacheInitializePhase3\n");
 	RelationCacheInitializePhase3();
 
 	/* set up ACL framework (so CheckMyDatabase can check permissions) */
@@ -1023,12 +1043,15 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 */
 
 	/* set default namespace search path */
+	debug_postinit("InitializeSearchPath\n");
 	InitializeSearchPath();
 
 	/* initialize client encoding */
+	debug_postinit("InitializeClientEncoding\n");
 	InitializeClientEncoding();
 
 	/* Initialize this backend's session state. */
+	debug_postinit("InitializeSession\n");
 	InitializeSession();
 
 	/* report this backend in the PgBackendStatus array */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 6dcd738be6..2aac87d4af 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -22,6 +22,7 @@
 #include <limits.h>
 #include <unistd.h>
 #include <sys/stat.h>
+
 #ifdef HAVE_SYSLOG
 #include <syslog.h>
 #endif
@@ -89,6 +90,7 @@
 #include "utils/tzparser.h"
 #include "utils/varlena.h"
 #include "utils/xml.h"
+#include "pg_control_def.h"
 
 #ifndef PG_KRB_SRVTAB
 #define PG_KRB_SRVTAB ""
@@ -696,6 +698,8 @@ const char *const config_type_names[] =
  */
 #define MAX_UNIT_LEN		3	/* length of longest recognized unit string */
 
+typedef int (*multiplier_func_f)(const char* unit, int base_unit);
+
 typedef struct
 {
 	char		unit[MAX_UNIT_LEN + 1]; /* unit, as a string, like "kB" or
@@ -704,44 +708,42 @@ typedef struct
 	int			multiplier;		/* If positive, multiply the value with this
 								 * for unit -> base_unit conversion.  If
 								 * negative, divide (with the absolute value) */
+	multiplier_func_f func;
 } unit_conversion;
 
-/* Ensure that the constants in the tables don't overflow or underflow */
-#if BLCKSZ < 1024 || BLCKSZ > (1024*1024)
-#error BLCKSZ must be between 1KB and 1MB
-#endif
-#if XLOG_BLCKSZ < 1024 || XLOG_BLCKSZ > (1024*1024)
-#error XLOG_BLCKSZ must be between 1KB and 1MB
-#endif
-
 static const char *memory_units_hint = gettext_noop("Valid units for this parameter are \"kB\", \"MB\", \"GB\", and \"TB\".");
 
+/*
+ * Forward declaratrion
+ */
+static int multiplier_mem_unit(const char* unit, int base_unit);
+
 static const unit_conversion memory_unit_conversion_table[] =
 {
-	{"GB", GUC_UNIT_BYTE, 1024 * 1024 * 1024},
-	{"MB", GUC_UNIT_BYTE, 1024 * 1024},
-	{"kB", GUC_UNIT_BYTE, 1024},
-	{"B", GUC_UNIT_BYTE, 1},
+	{"GB", GUC_UNIT_BYTE, 1024 * 1024 * 1024, NULL},
+	{"MB", GUC_UNIT_BYTE, 1024 * 1024, NULL},
+	{"kB", GUC_UNIT_BYTE, 1024, NULL},
+	{"B", GUC_UNIT_BYTE, 1, NULL},
 
-	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024},
-	{"GB", GUC_UNIT_KB, 1024 * 1024},
-	{"MB", GUC_UNIT_KB, 1024},
-	{"kB", GUC_UNIT_KB, 1},
+	{"TB", GUC_UNIT_KB, 1024 * 1024 * 1024, NULL},
+	{"GB", GUC_UNIT_KB, 1024 * 1024, NULL},
+	{"MB", GUC_UNIT_KB, 1024, NULL},
+	{"kB", GUC_UNIT_KB, 1, NULL},
 
-	{"TB", GUC_UNIT_MB, 1024 * 1024},
-	{"GB", GUC_UNIT_MB, 1024},
-	{"MB", GUC_UNIT_MB, 1},
-	{"kB", GUC_UNIT_MB, -1024},
+	{"TB", GUC_UNIT_MB, 1024 * 1024, NULL},
+	{"GB", GUC_UNIT_MB, 1024, NULL},
+	{"MB", GUC_UNIT_MB, 1, NULL},
+	{"kB", GUC_UNIT_MB, -1024, NULL},
 
-	{"TB", GUC_UNIT_BLOCKS, (1024 * 1024 * 1024) / (BLCKSZ / 1024)},
-	{"GB", GUC_UNIT_BLOCKS, (1024 * 1024) / (BLCKSZ / 1024)},
-	{"MB", GUC_UNIT_BLOCKS, 1024 / (BLCKSZ / 1024)},
-	{"kB", GUC_UNIT_BLOCKS, -(BLCKSZ / 1024)},
+	{"TB", GUC_UNIT_BLOCKS, 0, multiplier_mem_unit},
+	{"GB", GUC_UNIT_BLOCKS, 0, multiplier_mem_unit},
+	{"MB", GUC_UNIT_BLOCKS, 0, multiplier_mem_unit},
+	{"kB", GUC_UNIT_BLOCKS, 0, multiplier_mem_unit},
 
-	{"TB", GUC_UNIT_XBLOCKS, (1024 * 1024 * 1024) / (XLOG_BLCKSZ / 1024)},
-	{"GB", GUC_UNIT_XBLOCKS, (1024 * 1024) / (XLOG_BLCKSZ / 1024)},
-	{"MB", GUC_UNIT_XBLOCKS, 1024 / (XLOG_BLCKSZ / 1024)},
-	{"kB", GUC_UNIT_XBLOCKS, -(XLOG_BLCKSZ / 1024)},
+	{"TB", GUC_UNIT_XBLOCKS, 0, multiplier_mem_unit},
+	{"GB", GUC_UNIT_XBLOCKS, 0, multiplier_mem_unit},
+	{"MB", GUC_UNIT_XBLOCKS, 0, multiplier_mem_unit},
+	{"kB", GUC_UNIT_XBLOCKS, 0, multiplier_mem_unit},
 
 	{""}						/* end of table marker */
 };
@@ -750,23 +752,23 @@ static const char *time_units_hint = gettext_noop("Valid units for this paramete
 
 static const unit_conversion time_unit_conversion_table[] =
 {
-	{"d", GUC_UNIT_MS, 1000 * 60 * 60 * 24},
-	{"h", GUC_UNIT_MS, 1000 * 60 * 60},
-	{"min", GUC_UNIT_MS, 1000 * 60},
-	{"s", GUC_UNIT_MS, 1000},
-	{"ms", GUC_UNIT_MS, 1},
+	{"d", GUC_UNIT_MS, 1000 * 60 * 60 * 24, NULL},
+	{"h", GUC_UNIT_MS, 1000 * 60 * 60, NULL},
+	{"min", GUC_UNIT_MS, 1000 * 60, NULL},
+	{"s", GUC_UNIT_MS, 1000, NULL},
+	{"ms", GUC_UNIT_MS, 1, NULL},
 
-	{"d", GUC_UNIT_S, 60 * 60 * 24},
-	{"h", GUC_UNIT_S, 60 * 60},
-	{"min", GUC_UNIT_S, 60},
-	{"s", GUC_UNIT_S, 1},
-	{"ms", GUC_UNIT_S, -1000},
+	{"d", GUC_UNIT_S, 60 * 60 * 24, NULL},
+	{"h", GUC_UNIT_S, 60 * 60, NULL},
+	{"min", GUC_UNIT_S, 60, NULL},
+	{"s", GUC_UNIT_S, 1, NULL},
+	{"ms", GUC_UNIT_S, -1000, NULL},
 
-	{"d", GUC_UNIT_MIN, 60 * 24},
-	{"h", GUC_UNIT_MIN, 60},
-	{"min", GUC_UNIT_MIN, 1},
-	{"s", GUC_UNIT_MIN, -60},
-	{"ms", GUC_UNIT_MIN, -1000 * 60},
+	{"d", GUC_UNIT_MIN, 60 * 24, NULL},
+	{"h", GUC_UNIT_MIN, 60, NULL},
+	{"min", GUC_UNIT_MIN, 1, NULL},
+	{"s", GUC_UNIT_MIN, -60, NULL},
+	{"ms", GUC_UNIT_MIN, -1000 * 60, NULL},
 
 	{""}						/* end of table marker */
 };
@@ -2269,7 +2271,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_MB
 		},
 		&min_wal_size_mb,
-		DEFAULT_MIN_WAL_SEGS * (DEFAULT_XLOG_SEG_SIZE / (1024 * 1024)),
+		DEFAULT_MIN_WAL_SEGS * (WAL_FILE_SIZE_DEF / (1024 * 1024)),
 		2, MAX_KILOBYTES,
 		NULL, NULL, NULL
 	},
@@ -2281,7 +2283,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_MB
 		},
 		&max_wal_size_mb,
-		DEFAULT_MAX_WAL_SEGS * (DEFAULT_XLOG_SEG_SIZE / (1024 * 1024)),
+		DEFAULT_MAX_WAL_SEGS * (WAL_FILE_SIZE_DEF / (1024 * 1024)),
 		2, MAX_KILOBYTES,
 		NULL, assign_max_wal_size, NULL
 	},
@@ -2329,7 +2331,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_XBLOCKS
 		},
 		&XLOGbuffers,
-		-1, -1, (INT_MAX / XLOG_BLCKSZ),
+		-1, -1, (INT_MAX / WAL_BLCK_SIZE_DEF),
 		check_wal_buffers, NULL, NULL
 	},
 
@@ -2351,7 +2353,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_XBLOCKS
 		},
 		&WalWriterFlushAfter,
-		(1024 * 1024) / XLOG_BLCKSZ, 0, INT_MAX,
+		(1024 * 1024) / WAL_BLCK_SIZE_DEF, 0, INT_MAX,
 		NULL, NULL, NULL
 	},
 
@@ -2604,7 +2606,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
 		},
 		&block_size,
-		BLCKSZ, BLCKSZ, BLCKSZ,
+		REL_BLCK_SIZE_DEF, REL_BLCK_SIZE_MIN, REL_BLCK_SIZE_MAX,
 		NULL, NULL, NULL
 	},
 
@@ -2615,7 +2617,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_BLOCKS | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
 		},
 		&segment_size,
-		RELSEG_SIZE, RELSEG_SIZE, RELSEG_SIZE,
+		REL_FILE_BLCK_DEF, REL_FILE_BLCK_MIN, REL_FILE_BLCK_MAX,
 		NULL, NULL, NULL
 	},
 
@@ -2626,7 +2628,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
 		},
 		&wal_block_size,
-		XLOG_BLCKSZ, XLOG_BLCKSZ, XLOG_BLCKSZ,
+		WAL_BLCK_SIZE_DEF, WAL_BLCK_SIZE_MIN, WAL_BLCK_SIZE_MAX,
 		NULL, NULL, NULL
 	},
 
@@ -2649,7 +2651,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_BYTE | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
 		},
 		&wal_segment_size,
-		DEFAULT_XLOG_SEG_SIZE,
+		WAL_FILE_SIZE_DEF,
 		WalSegMinSize,
 		WalSegMaxSize,
 		NULL, NULL, NULL
@@ -2833,7 +2835,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_BLOCKS,
 		},
 		&min_parallel_table_scan_size,
-		(8 * 1024 * 1024) / BLCKSZ, 0, INT_MAX / 3,
+		(8 * 1024 * 1024) / REL_BLCK_SIZE_DEF, 0, INT_MAX / 3,
 		NULL, NULL, NULL
 	},
 
@@ -2844,7 +2846,7 @@ static struct config_int ConfigureNamesInt[] =
 			GUC_UNIT_BLOCKS,
 		},
 		&min_parallel_index_scan_size,
-		(512 * 1024) / BLCKSZ, 0, INT_MAX / 3,
+		(512 * 1024) / REL_BLCK_SIZE_DEF, 0, INT_MAX / 3,
 		NULL, NULL, NULL
 	},
 
@@ -5420,6 +5422,41 @@ ReportGUCOption(struct config_generic *record)
 	}
 }
 
+static int
+multiplier_mem_unit(
+	const char* unit, int base_unit)
+{
+	if (base_unit == GUC_UNIT_BLOCKS) {
+		if (strncmp("TB", unit, 2) == 0)
+			return (1024 * 1024 * 1024) / (rel_blck_size / 1024);
+
+		if (strncmp("GB", unit, 2) == 0)
+			return (1024 * 1024) / (rel_blck_size / 1024);
+
+		if (strncmp("MB", unit, 2) == 0)
+			return  1024 / (rel_blck_size / 1024);
+
+		if (strncmp("kB", unit, 2) == 0)
+			return -(rel_blck_size / 1024);
+	}
+
+	if (base_unit == GUC_UNIT_XBLOCKS) {
+		if (strncmp("TB", unit, 2) == 0)
+			return (1024 * 1024 * 1024) / (wal_blck_size / 1024);
+
+		if (strncmp("GB", unit, 2) == 0)
+			return (1024 * 1024) / (wal_blck_size / 1024);
+
+		if (strncmp("MB", unit, 2) == 0)
+			return  1024 / (wal_blck_size / 1024);
+
+                if (strncmp("kB", unit, 2) == 0)
+                        return -(wal_blck_size / 1024);
+        }
+
+	return 0;
+}
+
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
  * to the given base unit.  'value' and 'unit' are the input value and unit
@@ -5433,6 +5470,7 @@ convert_to_base_unit(int64 value, const char *unit,
 {
 	const unit_conversion *table;
 	int			i;
+	int	multiplier;
 
 	if (base_unit & GUC_UNIT_MEMORY)
 		table = memory_unit_conversion_table;
@@ -5444,13 +5482,20 @@ convert_to_base_unit(int64 value, const char *unit,
 		if (base_unit == table[i].base_unit &&
 			strcmp(unit, table[i].unit) == 0)
 		{
-			if (table[i].multiplier < 0)
-				*base_value = value / (-table[i].multiplier);
+			if (table[i].func != NULL)
+				multiplier = table[i].func(unit, base_unit);
+			else 
+				multiplier = table[i].multiplier;
+
+			if (multiplier < 0)
+				*base_value = value / (-multiplier);
 			else
-				*base_value = value * table[i].multiplier;
+				*base_value = value * multiplier;
+
 			return true;
 		}
 	}
+
 	return false;
 }
 
@@ -5466,6 +5511,7 @@ convert_from_base_unit(int64 base_value, int base_unit,
 {
 	const unit_conversion *table;
 	int			i;
+	int	multiplier;
 
 	*unit = NULL;
 
@@ -5483,15 +5529,20 @@ convert_from_base_unit(int64 base_value, int base_unit,
 			 * assume that the conversions for each base unit are ordered from
 			 * greatest unit to the smallest!
 			 */
-			if (table[i].multiplier < 0)
+			if (table[i].func != NULL)
+                                multiplier = table[i].func(table[i].unit, base_unit);
+                        else
+                                multiplier = table[i].multiplier;
+
+			if (multiplier < 0)
 			{
-				*value = base_value * (-table[i].multiplier);
+				*value = base_value * (-multiplier);
 				*unit = table[i].unit;
 				break;
 			}
-			else if (base_value % table[i].multiplier == 0)
+			else if (base_value % multiplier == 0)
 			{
-				*value = base_value / table[i].multiplier;
+				*value = base_value / multiplier;
 				*unit = table[i].unit;
 				break;
 			}
@@ -8130,11 +8181,11 @@ GetConfigOptionByNum(int varnum, const char **values, bool *noshow)
 				values[2] = "MB";
 				break;
 			case GUC_UNIT_BLOCKS:
-				snprintf(buffer, sizeof(buffer), "%dkB", BLCKSZ / 1024);
+				snprintf(buffer, sizeof(buffer), "%dkB", rel_blck_size / 1024);
 				values[2] = pstrdup(buffer);
 				break;
 			case GUC_UNIT_XBLOCKS:
-				snprintf(buffer, sizeof(buffer), "%dkB", XLOG_BLCKSZ / 1024);
+				snprintf(buffer, sizeof(buffer), "%dkB", wal_blck_size / 1024);
 				values[2] = pstrdup(buffer);
 				break;
 			case GUC_UNIT_MS:
diff --git a/src/backend/utils/misc/pg_controldata.c b/src/backend/utils/misc/pg_controldata.c
index dee6dfc12f..7a3fdaf920 100644
--- a/src/backend/utils/misc/pg_controldata.c
+++ b/src/backend/utils/misc/pg_controldata.c
@@ -139,9 +139,9 @@ pg_control_checkpoint(PG_FUNCTION_ARGS)
 	 * Calculate name of the WAL file containing the latest checkpoint's REDO
 	 * start point.
 	 */
-	XLByteToSeg(ControlFile->checkPointCopy.redo, segno, wal_segment_size);
+	XLByteToSeg(ControlFile->checkPointCopy.redo, segno, wal_file_size);
 	XLogFileName(xlogfilename, ControlFile->checkPointCopy.ThisTimeLineID,
-				 segno, wal_segment_size);
+				 segno, wal_file_size);
 
 	/* Populate the values and null arrays */
 	values[0] = LSNGetDatum(ControlFile->checkPoint);
diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c
index 5ebb6fb11a..090cef3bfd 100644
--- a/src/backend/utils/sort/logtape.c
+++ b/src/backend/utils/sort/logtape.c
@@ -28,7 +28,7 @@
  * larger size than the underlying OS may support.
  *
  * For simplicity, we allocate and release space in the underlying file
- * in BLCKSZ-size blocks.  Space allocation boils down to keeping track
+ * in rel_blck_size-size blocks.  Space allocation boils down to keeping track
  * of which blocks in the underlying file belong to which logical tape,
  * plus any blocks that are free (recycled and not yet reused).
  * The blocks in each logical tape form a chain, with a prev- and next-
@@ -76,11 +76,12 @@
 #include "postgres.h"
 
 #include "storage/buffile.h"
+#include "storage/md.h"
 #include "utils/logtape.h"
 #include "utils/memutils.h"
 
 /*
- * A TapeBlockTrailer is stored at the end of each BLCKSZ block.
+ * A TapeBlockTrailer is stored at the end of each rel_blck_size block.
  *
  * The first block of a tape has prev == -1.  The last block of a tape
  * stores the number of valid bytes on the block, inverted, in 'next'
@@ -94,7 +95,7 @@ typedef struct TapeBlockTrailer
 								 * bytes on last block (if < 0) */
 } TapeBlockTrailer;
 
-#define TapeBlockPayloadSize  (BLCKSZ - sizeof(TapeBlockTrailer))
+#define TapeBlockPayloadSize  (rel_blck_size - sizeof(TapeBlockTrailer))
 #define TapeBlockGetTrailer(buf) \
 	((TapeBlockTrailer *) ((char *) buf + TapeBlockPayloadSize))
 
@@ -155,7 +156,7 @@ struct LogicalTapeSet
 
 	/*
 	 * File size tracking.  nBlocksWritten is the size of the underlying file,
-	 * in BLCKSZ blocks.  nBlocksAllocated is the number of blocks allocated
+	 * in rel_blck_size blocks.  nBlocksAllocated is the number of blocks allocated
 	 * by ltsGetFreeBlock(), and it is always greater than or equal to
 	 * nBlocksWritten.  Blocks between nBlocksAllocated and nBlocksWritten are
 	 * blocks that have been allocated for a tape, but have not been written
@@ -216,7 +217,7 @@ ltsWriteBlock(LogicalTapeSet *lts, long blocknum, void *buffer)
 	 */
 	while (blocknum > lts->nBlocksWritten)
 	{
-		char		zerobuf[BLCKSZ];
+		char           zerobuf[rel_blck_size];
 
 		MemSet(zerobuf, 0, sizeof(zerobuf));
 
@@ -225,7 +226,7 @@ ltsWriteBlock(LogicalTapeSet *lts, long blocknum, void *buffer)
 
 	/* Write the requested block */
 	if (BufFileSeekBlock(lts->pfile, blocknum) != 0 ||
-		BufFileWrite(lts->pfile, buffer, BLCKSZ) != BLCKSZ)
+		BufFileWrite(lts->pfile, buffer, rel_blck_size) != rel_blck_size)
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not write block %ld of temporary file: %m",
@@ -246,7 +247,7 @@ static void
 ltsReadBlock(LogicalTapeSet *lts, long blocknum, void *buffer)
 {
 	if (BufFileSeekBlock(lts->pfile, blocknum) != 0 ||
-		BufFileRead(lts->pfile, buffer, BLCKSZ) != BLCKSZ)
+		BufFileRead(lts->pfile, buffer, rel_blck_size) != rel_blck_size)
 		ereport(ERROR,
 				(errcode_for_file_access(),
 				 errmsg("could not read block %ld of temporary file: %m",
@@ -289,7 +290,7 @@ ltsReadFillBuffer(LogicalTapeSet *lts, LogicalTape *lt)
 			lt->nextBlockNumber = TapeBlockGetTrailer(thisbuf)->next;
 
 		/* Advance to next block, if we have buffer space left */
-	} while (lt->buffer_size - lt->nbytes > BLCKSZ);
+	} while (lt->buffer_size - lt->nbytes > rel_blck_size);
 
 	return (lt->nbytes > 0);
 }
@@ -474,8 +475,8 @@ LogicalTapeWrite(LogicalTapeSet *lts, int tapenum,
 	/* Allocate data buffer and first block on first write */
 	if (lt->buffer == NULL)
 	{
-		lt->buffer = (char *) palloc(BLCKSZ);
-		lt->buffer_size = BLCKSZ;
+		lt->buffer = (char *) palloc(rel_blck_size);
+		lt->buffer_size = rel_blck_size;
 	}
 	if (lt->curBlockNumber == -1)
 	{
@@ -488,7 +489,7 @@ LogicalTapeWrite(LogicalTapeSet *lts, int tapenum,
 		TapeBlockGetTrailer(lt->buffer)->prev = -1L;
 	}
 
-	Assert(lt->buffer_size == BLCKSZ);
+	Assert(lt->buffer_size == rel_blck_size);
 	while (size > 0)
 	{
 		if (lt->pos >= TapeBlockPayloadSize)
@@ -542,9 +543,9 @@ LogicalTapeWrite(LogicalTapeSet *lts, int tapenum,
  *
  * 'buffer_size' specifies how much memory to use for the read buffer.
  * Regardless of the argument, the actual amount of memory used is between
- * BLCKSZ and MaxAllocSize, and is a multiple of BLCKSZ.  The given value is
+ * rel_blck_size and MaxAllocSize, and is a multiple of rel_blck_size.  The given value is
  * rounded down and truncated to fit those constraints, if necessary.  If the
- * tape is frozen, the 'buffer_size' argument is ignored, and a small BLCKSZ
+ * tape is frozen, the 'buffer_size' argument is ignored, and a small rel_blck_size
  * byte buffer is used.
  */
 void
@@ -559,12 +560,12 @@ LogicalTapeRewindForRead(LogicalTapeSet *lts, int tapenum, size_t buffer_size)
 	 * Round and cap buffer_size if needed.
 	 */
 	if (lt->frozen)
-		buffer_size = BLCKSZ;
+		buffer_size = rel_blck_size;
 	else
 	{
 		/* need at least one block */
-		if (buffer_size < BLCKSZ)
-			buffer_size = BLCKSZ;
+		if (buffer_size < rel_blck_size)
+			buffer_size = rel_blck_size;
 
 		/*
 		 * palloc() larger than MaxAllocSize would fail (a multi-gigabyte
@@ -573,8 +574,8 @@ LogicalTapeRewindForRead(LogicalTapeSet *lts, int tapenum, size_t buffer_size)
 		if (buffer_size > MaxAllocSize)
 			buffer_size = MaxAllocSize;
 
-		/* round down to BLCKSZ boundary */
-		buffer_size -= buffer_size % BLCKSZ;
+		/* round down to rel_blck_size boundary */
+		buffer_size -= buffer_size % rel_blck_size;
 	}
 
 	if (lt->writing)
@@ -728,12 +729,12 @@ LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum)
 	 * we're reading from multiple tapes.  But at the end of a sort, when a
 	 * tape is frozen, we only read from a single tape anyway.
 	 */
-	if (!lt->buffer || lt->buffer_size != BLCKSZ)
+	if (!lt->buffer || lt->buffer_size != rel_blck_size)
 	{
 		if (lt->buffer)
 			pfree(lt->buffer);
-		lt->buffer = palloc(BLCKSZ);
-		lt->buffer_size = BLCKSZ;
+		lt->buffer = palloc(rel_blck_size);
+		lt->buffer_size = rel_blck_size;
 	}
 
 	/* Read the first block, or reset if tape is empty */
@@ -773,7 +774,7 @@ LogicalTapeBackspace(LogicalTapeSet *lts, int tapenum, size_t size)
 	Assert(tapenum >= 0 && tapenum < lts->nTapes);
 	lt = &lts->tapes[tapenum];
 	Assert(lt->frozen);
-	Assert(lt->buffer_size == BLCKSZ);
+	Assert(lt->buffer_size == rel_blck_size);
 
 	/*
 	 * Easy case for seek within current block.
@@ -845,7 +846,7 @@ LogicalTapeSeek(LogicalTapeSet *lts, int tapenum,
 	lt = &lts->tapes[tapenum];
 	Assert(lt->frozen);
 	Assert(offset >= 0 && offset <= TapeBlockPayloadSize);
-	Assert(lt->buffer_size == BLCKSZ);
+	Assert(lt->buffer_size == rel_blck_size);
 
 	if (blocknum != lt->curBlockNumber)
 	{
@@ -876,7 +877,7 @@ LogicalTapeTell(LogicalTapeSet *lts, int tapenum,
 	lt = &lts->tapes[tapenum];
 
 	/* With a larger buffer, 'pos' wouldn't be the same as offset within page */
-	Assert(lt->buffer_size == BLCKSZ);
+	Assert(lt->buffer_size == rel_blck_size);
 
 	*blocknum = lt->curBlockNumber;
 	*offset = lt->pos;
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 34af8d6334..befd4996e3 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -208,8 +208,8 @@ typedef enum
  */
 #define MINORDER		6		/* minimum merge order */
 #define MAXORDER		500		/* maximum merge order */
-#define TAPE_BUFFER_OVERHEAD		BLCKSZ
-#define MERGE_BUFFER_SIZE			(BLCKSZ * 32)
+#define TAPE_BUFFER_OVERHEAD		rel_blck_size
+#define MERGE_BUFFER_SIZE			(rel_blck_size * 32)
 
 typedef int (*SortTupleComparator) (const SortTuple *a, const SortTuple *b,
 									Tuplesortstate *state);
@@ -2953,7 +2953,7 @@ tuplesort_get_stats(Tuplesortstate *state,
 	if (state->tapeset)
 	{
 		stats->spaceType = SORT_SPACE_TYPE_DISK;
-		stats->spaceUsed = LogicalTapeSetBlocks(state->tapeset) * (BLCKSZ / 1024);
+		stats->spaceUsed = LogicalTapeSetBlocks(state->tapeset) * (rel_blck_size / 1024);
 	}
 	else
 	{
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
index ad0d0e2ac0..c10f3b40f8 100644
--- a/src/bin/initdb/initdb.c
+++ b/src/bin/initdb/initdb.c
@@ -72,6 +72,15 @@
 #include "getopt_long.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
+#include "pg_control_def.h"
+
+
+#define DEBUG_INITDB                   0
+
+#define debug_initdb(format, ...)      \
+       if (DEBUG_INITDB)               \
+               fprintf(stderr, "initdb --> " format, ##__VA_ARGS__);
+
 
 
 /* Ideally this would be in a .h file, but it hardly seems worth the trouble */
@@ -142,8 +151,6 @@ static bool sync_only = false;
 static bool show_setting = false;
 static bool data_checksums = false;
 static char *xlog_dir = NULL;
-static char *str_wal_segment_size_mb = NULL;
-static int	wal_segment_size_mb;
 
 
 /* internal vars */
@@ -167,9 +174,24 @@ static bool found_existing_xlogdir = false;
 static char infoversion[100];
 static bool caught_signal = false;
 static bool output_failed = false;
-static int	output_errno = 0;
+static int output_errno = 0;
 static char *pgdata_native;
 
+/*
+ * Relation files
+ *
+ * This is imported from pg_control_def.h and shared with the backend.
+ */
+unsigned int rel_blck_size = REL_BLCK_SIZE_DEF;		/* in bytes, default 8KB */
+unsigned int rel_file_blck = REL_FILE_BLCK_DEF;		/* in blocks, default 131072 */
+unsigned long int rel_file_size = REL_FILE_SIZE_DEF;	/* in bytes, default 1GB */
+
+/* Wal files */
+unsigned int wal_blck_size = WAL_BLCK_SIZE_DEF;		/* in bytes, default 8KB */
+unsigned int wal_file_blck = WAL_FILE_BLCK_DEF;		/* in blocks, default 2048 */
+unsigned long wal_file_size = WAL_FILE_SIZE_DEF;	/* in bytes, default 16MB */
+
+
 /* defaults */
 static int	n_connections = 10;
 static int	n_buffers = 50;
@@ -197,6 +219,7 @@ static char *authwarning = NULL;
  */
 static const char *boot_options = "-F";
 static const char *backend_options = "--single -F -O -j -c search_path=pg_catalog -c exit_on_error=true";
+static const char *backend_options_debug = "--single -F -E -O -j -c search_path=pg_catalog -c exit_on_error=true";
 
 static const char *const subdirs[] = {
 	"global",
@@ -234,6 +257,7 @@ static char **replace_token(char **lines,
 #ifndef HAVE_UNIX_SOCKETS
 static char **filter_lines_with_token(char **lines, const char *token);
 #endif
+
 static char **readfile(const char *path);
 static void writefile(char *path, char **lines);
 static FILE *popen_check(const char *command, const char *mode);
@@ -242,6 +266,8 @@ static char *get_id(void);
 static int get_encoding_id(const char *encoding_name);
 static void set_input(char **dest, const char *filename);
 static void check_input(char *path);
+static bool ispowerof2(unsigned int value);
+static bool check_block_file_sizes(void);
 static void write_version_file(const char *extrapath);
 static void set_null_conf(void);
 static void test_config_settings(void);
@@ -282,6 +308,8 @@ void		create_xlog_or_symlink(void);
 void		warn_on_mount_point(int error);
 void		initialize_data_directory(void);
 
+void gets_interactive(const char *prompt);
+
 /*
  * macros for running pipes to postgres
  */
@@ -941,7 +969,17 @@ test_config_settings(void)
 		test_conns = trial_conns[i];
 		test_buffs = MIN_BUFS_FOR_CONNS(test_conns);
 
-		snprintf(cmd, sizeof(cmd),
+		if (debug == true) {
+			snprintf(cmd, sizeof(cmd),
+				 "\"%s\" --boot -x0 %s "
+				 "-c max_connections=%d "
+				 "-c shared_buffers=%d "
+				 "-c dynamic_shared_memory_type=none "
+				 "-d 5",
+				 backend_exec, boot_options,
+				 test_conns, test_buffs);
+		} else {
+			snprintf(cmd, sizeof(cmd),
 				 "\"%s\" --boot -x0 %s "
 				 "-c max_connections=%d "
 				 "-c shared_buffers=%d "
@@ -950,6 +988,8 @@ test_config_settings(void)
 				 backend_exec, boot_options,
 				 test_conns, test_buffs,
 				 DEVNULL, DEVNULL);
+		}
+
 		status = system(cmd);
 		if (status == 0)
 		{
@@ -968,8 +1008,8 @@ test_config_settings(void)
 
 	for (i = 0; i < bufslen; i++)
 	{
-		/* Use same amount of memory, independent of BLCKSZ */
-		test_buffs = (trial_bufs[i] * 8192) / BLCKSZ;
+		/* Use same amount of memory, independent of rel_blck_size */
+		test_buffs = (trial_bufs[i] * 8192) /rel_blck_size; 
 		if (test_buffs <= ok_buffers)
 		{
 			test_buffs = ok_buffers;
@@ -991,10 +1031,10 @@ test_config_settings(void)
 	}
 	n_buffers = test_buffs;
 
-	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
-		printf("%dMB\n", (n_buffers * (BLCKSZ / 1024)) / 1024);
+	if ((n_buffers * (rel_blck_size / 1024)) % 1024 == 0)
+		printf("%dMB\n", (n_buffers * (rel_blck_size / 1024)) / 1024);
 	else
-		printf("%dkB\n", n_buffers * (BLCKSZ / 1024));
+		printf("%dkB\n", n_buffers * (rel_blck_size / 1024));
 
 	printf(_("selecting dynamic shared memory implementation ... "));
 	fflush(stdout);
@@ -1002,14 +1042,11 @@ test_config_settings(void)
 	printf("%s\n", dynamic_shared_memory_type);
 }
 
-/*
- * Calculate the default wal_size with a "pretty" unit.
- */
 static char *
 pretty_wal_size(int segment_count)
 {
-	int			sz = wal_segment_size_mb * segment_count;
-	char	   *result = pg_malloc(11);
+	int                     sz = (wal_file_size / MB) * segment_count;
+	char       *result = pg_malloc(11);
 
 	if ((sz % 1024) == 0)
 		snprintf(result, 11, "%dGB", sz / 1024);
@@ -1041,12 +1078,12 @@ setup_config(void)
 	snprintf(repltok, sizeof(repltok), "max_connections = %d", n_connections);
 	conflines = replace_token(conflines, "#max_connections = 100", repltok);
 
-	if ((n_buffers * (BLCKSZ / 1024)) % 1024 == 0)
+	if ((n_buffers * (rel_blck_size / 1024)) % 1024 == 0)
 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dMB",
-				 (n_buffers * (BLCKSZ / 1024)) / 1024);
+				 (n_buffers * (rel_blck_size / 1024)) / 1024);
 	else
 		snprintf(repltok, sizeof(repltok), "shared_buffers = %dkB",
-				 n_buffers * (BLCKSZ / 1024));
+				 n_buffers * (rel_blck_size / 1024));
 	conflines = replace_token(conflines, "#shared_buffers = 32MB", repltok);
 
 #ifdef HAVE_UNIX_SOCKETS
@@ -1128,21 +1165,21 @@ setup_config(void)
 
 #if DEFAULT_BACKEND_FLUSH_AFTER > 0
 	snprintf(repltok, sizeof(repltok), "#backend_flush_after = %dkB",
-			 DEFAULT_BACKEND_FLUSH_AFTER * (BLCKSZ / 1024));
+			 DEFAULT_BACKEND_FLUSH_AFTER * (rel_blck_size / 1024));
 	conflines = replace_token(conflines, "#backend_flush_after = 0",
 							  repltok);
 #endif
 
 #if DEFAULT_BGWRITER_FLUSH_AFTER > 0
 	snprintf(repltok, sizeof(repltok), "#bgwriter_flush_after = %dkB",
-			 DEFAULT_BGWRITER_FLUSH_AFTER * (BLCKSZ / 1024));
+			 DEFAULT_BGWRITER_FLUSH_AFTER * (rel_blck_size / 1024));
 	conflines = replace_token(conflines, "#bgwriter_flush_after = 0",
 							  repltok);
 #endif
 
 #if DEFAULT_CHECKPOINT_FLUSH_AFTER > 0
 	snprintf(repltok, sizeof(repltok), "#checkpoint_flush_after = %dkB",
-			 DEFAULT_CHECKPOINT_FLUSH_AFTER * (BLCKSZ / 1024));
+			 DEFAULT_CHECKPOINT_FLUSH_AFTER * (rel_blck_size / 1024));
 	conflines = replace_token(conflines, "#checkpoint_flush_after = 0",
 							  repltok);
 #endif
@@ -1381,13 +1418,20 @@ bootstrap_template1(void)
 	unsetenv("PGCLIENTENCODING");
 
 	snprintf(cmd, sizeof(cmd),
-			 "\"%s\" --boot -x1 -X %u %s %s %s",
-			 backend_exec,
-			 wal_segment_size_mb * (1024 * 1024),
-			 data_checksums ? "-k" : "",
-			 boot_options,
-			 debug ? "-d 5" : "");
-
+			"\"%s\" --boot -x1 %s %s %s"
+			" -c block_size=%u -c segment_size=%u"
+			" -c wal_block_size=%u -c wal_segment_size=%u",
+			backend_exec,
+			data_checksums ? "-k" : "",
+			boot_options,
+			debug ? "-d 5" : "",
+			rel_blck_size,
+			rel_file_blck,
+			wal_blck_size,
+			wal_file_blck * wal_blck_size);
+
+	debug_initdb("starting postgres\n");
+	debug_initdb("cmd = %s\n", cmd);
 
 	PG_CMD_OPEN;
 
@@ -1768,7 +1812,7 @@ setup_privileges(FILE *cmdfd)
 		"        relacl IS NOT NULL"
 		"        AND relkind IN (" CppAsString2(RELKIND_RELATION) ", "
 		CppAsString2(RELKIND_VIEW) ", " CppAsString2(RELKIND_MATVIEW) ", "
-		CppAsString2(RELKIND_SEQUENCE) ");",
+		CppAsString2(RELKIND_SEQUENCE) ");\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1784,7 +1828,7 @@ setup_privileges(FILE *cmdfd)
 		"        pg_attribute.attacl IS NOT NULL"
 		"        AND pg_class.relkind IN (" CppAsString2(RELKIND_RELATION) ", "
 		CppAsString2(RELKIND_VIEW) ", " CppAsString2(RELKIND_MATVIEW) ", "
-		CppAsString2(RELKIND_SEQUENCE) ");",
+		CppAsString2(RELKIND_SEQUENCE) ");\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1796,7 +1840,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_proc"
 		"    WHERE"
-		"        proacl IS NOT NULL;",
+		"        proacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1808,7 +1852,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_type"
 		"    WHERE"
-		"        typacl IS NOT NULL;",
+		"        typacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1820,7 +1864,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_language"
 		"    WHERE"
-		"        lanacl IS NOT NULL;",
+		"        lanacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1833,7 +1877,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_largeobject_metadata"
 		"    WHERE"
-		"        lomacl IS NOT NULL;",
+		"        lomacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1845,7 +1889,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_namespace"
 		"    WHERE"
-		"        nspacl IS NOT NULL;",
+		"        nspacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1858,7 +1902,7 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_foreign_data_wrapper"
 		"    WHERE"
-		"        fdwacl IS NOT NULL;",
+		"        fdwacl IS NOT NULL;\n\n",
 		"INSERT INTO pg_init_privs "
 		"  (objoid, classoid, objsubid, initprivs, privtype)"
 		"    SELECT"
@@ -1871,14 +1915,15 @@ setup_privileges(FILE *cmdfd)
 		"    FROM"
 		"        pg_foreign_server"
 		"    WHERE"
-		"        srvacl IS NOT NULL;",
+		"        srvacl IS NOT NULL;\n\n",
 		NULL
 	};
 
 	priv_lines = replace_token(privileges_setup, "$POSTGRES_SUPERUSERNAME",
 							   escape_quotes(username));
-	for (line = priv_lines; *line != NULL; line++)
+	for (line = priv_lines; *line != NULL; line++) {
 		PG_CMD_PUTS(*line);
+	}
 }
 
 /*
@@ -2323,7 +2368,14 @@ usage(const char *progname)
 	printf(_("  -U, --username=NAME       database superuser name\n"));
 	printf(_("  -W, --pwprompt            prompt for a password for the new superuser\n"));
 	printf(_("  -X, --waldir=WALDIR       location for the write-ahead log directory\n"));
-	printf(_("      --wal-segsize=SIZE    size of wal segment size\n"));
+	printf(_("      --rel_blck_size=REL_BLCK_SIZE\n"
+			"                            block size of relation files\n"));
+	printf(_("      --rel_file_blck=REL_FILE_BLCK\n"
+			"                            size of relation files in rel block size above\n"));
+	printf(_("      --wal_blck_size=WAL_BLCK_SIZE\n"
+			"                            block size of wal files\n"));
+	printf(_("      --wal_file_blck=WAL_BLCK_BLCK\n"
+			"                            size of wal files in wal block size above\n"));
 	printf(_("\nLess commonly used options:\n"));
 	printf(_("  -d, --debug               generate lots of debugging output\n"));
 	printf(_("  -k, --data-checksums      use data page checksums\n"));
@@ -2393,6 +2445,111 @@ check_need_password(const char *authmethodlocal, const char *authmethodhost)
 	}
 }
 
+static bool
+ispowerof2(unsigned int value)
+{
+	return value && !(value & (value - 1));
+}
+
+/*
+ * Below function should be shared with both server and client code (src/common code)
+ */
+static bool
+check_block_file_sizes(void)
+{
+	/*
+	 * Relation file checking
+	 */
+	if (rel_blck_size < REL_BLCK_SIZE_MIN
+		|| rel_blck_size > REL_BLCK_SIZE_MAX
+		|| !ispowerof2(rel_blck_size)) {
+		fprintf(stderr, _("rel_block_size must be:\n"
+			"1/ a power of 2\n"
+			"2/ above or equal %u bytes\n"
+			"3/ below or equal %u bytes (2^15)\n"
+			"This is determined by the 15-bit width of the lp_off/lp_len fields"
+			" in ItemIdData (see include/storage/itemid.h).\n"
+			"Default value is %u bytes"),
+			REL_BLCK_SIZE_MIN,
+			REL_BLCK_SIZE_MAX,
+			REL_BLCK_SIZE_DEF);
+		fprintf(stdout, _("rel_blck_size set to %u bytes\n"), rel_blck_size);
+		return 1;
+	} else {
+		fprintf(stderr, _("rel_block_size %u KB ok\n"), rel_blck_size >> 10);
+	}
+
+	rel_file_size = rel_file_blck * rel_blck_size;
+	if (rel_file_size < (unsigned long) REL_FILE_SIZE_MIN 
+		|| rel_file_size > (unsigned long) REL_FILE_SIZE_MAX
+		|| !ispowerof2(rel_file_blck)) {
+		fprintf(stderr, _("rel_file_size must be:\n"
+			"1/ a power of 2\n"
+			"2/ above or equal to %lu bytes\n"
+			"3/ below or equal to %lu bytes\n"
+			"Default is %lu bytes\n"),
+			(unsigned long) REL_FILE_SIZE_MIN,
+			(unsigned long) REL_FILE_SIZE_MAX,
+			(unsigned long) REL_FILE_SIZE_DEF);
+		fprintf(stdout, "rel_file_blck set to %u blocks or %lu bytes\n",
+			rel_file_blck, rel_file_size);
+		return 1;
+	} else {
+		fprintf(stderr, _("rel_file_size %lu MB or %u blocks ok\n"),
+			rel_file_size >> 20,
+			rel_file_blck);
+	}
+
+        /*
+         * Wal file checking
+         */
+	if (wal_blck_size < WAL_BLCK_SIZE_MIN
+		|| wal_blck_size > WAL_BLCK_SIZE_MAX
+		|| !ispowerof2(wal_blck_size)
+		|| (wal_blck_size != 1024
+			&& wal_blck_size != 2048
+			&& wal_blck_size != 4096
+			&& wal_blck_size != 8192
+			&& wal_blck_size != 16384
+			&& wal_blck_size != 32768
+			&& wal_blck_size != 65536)) {
+		fprintf(stderr, _("wal_block_size must be:\n"
+			"1/ a power of 2\n"
+			"2/ above or equal %u bytes size\n"
+			"3/ below or equal %u bytes size\n"
+			"4/ one of the following values: 1024, 2048, 4096, 8192, 16384, 32768, 65536\n"
+			"Default value is %lu bytes\n"),
+			WAL_BLCK_SIZE_MIN,
+			WAL_BLCK_SIZE_MAX,
+			(unsigned long)WAL_BLCK_SIZE_DEF);
+		fprintf(stdout, "wal_blck_size set to %u bytes\n", wal_blck_size);
+		return 1;
+	} else {
+		fprintf(stderr, _("wal_block_size %u KB ok\n"), wal_blck_size >> 10);
+	}
+
+	wal_file_size = wal_file_blck * wal_blck_size;
+	if (wal_file_size < (unsigned long) WAL_FILE_SIZE_MIN
+		|| wal_file_size > (unsigned long) WAL_FILE_SIZE_MAX
+		|| !ispowerof2(wal_file_blck)) {
+		fprintf(stderr, _("wal_block_size must be:"
+        		"1/ a power of 2\n"
+			"2/ above or equal %lu bytes size\n"
+			"3/ below or equal %lu bytes size (2^15)\n"
+			"Default value is %lu bytes\n"), 
+			(unsigned long) WAL_FILE_SIZE_MIN,
+			(unsigned long) WAL_FILE_SIZE_MAX,
+			(unsigned long) WAL_FILE_SIZE_DEF);
+		fprintf(stdout, "wal_file_blck set to %lu blocks\n", wal_file_size);
+		return 1;
+	} else {
+		fprintf(stderr, _("wal_file_size %lu MB or %u blocks ok\n"),
+			wal_file_size >> 20,
+			wal_file_blck);
+	}
+
+	return 0;
+}
 
 void
 setup_pgdata(void)
@@ -2939,10 +3096,18 @@ initialize_data_directory(void)
 	fputs(_("performing post-bootstrap initialization ... "), stdout);
 	fflush(stdout);
 
-	snprintf(cmd, sizeof(cmd),
-			 "\"%s\" %s template1 >%s",
-			 backend_exec, backend_options,
-			 DEVNULL);
+	if (debug) {
+		snprintf(cmd, sizeof(cmd), 
+			"\"%s\" %s template1",
+			backend_exec, backend_options_debug);
+	} else {
+		snprintf(cmd, sizeof(cmd), 
+			"\"%s\" %s template1 >%s",
+			backend_exec, backend_options,
+			DEVNULL);
+	}
+
+	debug_initdb("cmd = %s\n", cmd);
 
 	PG_CMD_OPEN;
 
@@ -2982,6 +3147,17 @@ initialize_data_directory(void)
 	check_ok();
 }
 
+void
+gets_interactive(const char *prompt)
+{
+#ifdef USE_READLINE
+	char* string;
+
+	string = readline((char*) prompt);
+	PG_CMD_PUTS(string);
+	free(string);
+#endif
+}
 
 int
 main(int argc, char *argv[])
@@ -3014,8 +3190,12 @@ main(int argc, char *argv[])
 		{"no-sync", no_argument, NULL, 'N'},
 		{"sync-only", no_argument, NULL, 'S'},
 		{"waldir", required_argument, NULL, 'X'},
-		{"wal-segsize", required_argument, NULL, 12},
 		{"data-checksums", no_argument, NULL, 'k'},
+		{"wal-segsize", required_argument, NULL, 12},
+		{"rel_blck_size", required_argument, NULL, 13},
+		{"rel_file_blck", required_argument, NULL, 14},
+		{"wal_blck_size", required_argument, NULL, 15},
+		{"wal_file_blck", required_argument, NULL, 16},
 		{NULL, 0, NULL, 0}
 	};
 
@@ -3148,8 +3328,17 @@ main(int argc, char *argv[])
 			case 'X':
 				xlog_dir = pg_strdup(optarg);
 				break;
-			case 12:
-				str_wal_segment_size_mb = pg_strdup(optarg);
+			case 13:
+				rel_blck_size = atoi(pg_strdup(optarg));
+				break;
+			case 14:
+				rel_file_blck = atoi(pg_strdup(optarg));
+				break;
+			case 15:
+				wal_blck_size = atoi(pg_strdup(optarg));
+				break;
+			case 16:
+				wal_file_blck = atoi(pg_strdup(optarg));
 				break;
 			default:
 				/* getopt_long already emitted a complaint */
@@ -3159,7 +3348,6 @@ main(int argc, char *argv[])
 		}
 	}
 
-
 	/*
 	 * Non-option argument specifies data directory as long as it wasn't
 	 * already specified with -D / --pgdata
@@ -3213,26 +3401,11 @@ main(int argc, char *argv[])
 
 	check_need_password(authmethodlocal, authmethodhost);
 
-	/* set wal segment size */
-	if (str_wal_segment_size_mb == NULL)
-		wal_segment_size_mb = (DEFAULT_XLOG_SEG_SIZE) / (1024 * 1024);
-	else
-	{
-		char	   *endptr;
-
-		/* check that the argument is a number */
-		wal_segment_size_mb = strtol(str_wal_segment_size_mb, &endptr, 10);
-
-		/* verify that wal segment size is valid */
-		if (*endptr != '\0' ||
-			!IsValidWalSegSize(wal_segment_size_mb * 1024 * 1024))
-		{
-			fprintf(stderr,
-					_("%s: --wal-segsize must be a power of two between 1 and 1024\n"),
-					progname);
-			exit(1);
-		}
-	}
+	/*
+	 * Check file and block size for relations and wal.
+	 */
+	if (check_block_file_sizes())
+		exit(1);
 
 	get_restricted_token(progname);
 
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
index 8427c97fe4..4264972cfa 100644
--- a/src/bin/pg_basebackup/pg_basebackup.c
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -130,6 +130,17 @@ static volatile LONG has_xlogendptr = 0;
 /* Contents of recovery.conf to be generated */
 static PQExpBuffer recoveryconfcontents = NULL;
 
+/*
+ * Wal and relation file and block sizes
+ */
+unsigned int wal_blck_size = 0;
+unsigned int wal_file_blck = 0;
+unsigned long rel_file_size = 0;
+unsigned int rel_blck_size = 0;
+unsigned int rel_file_blck = 0;
+unsigned long wal_file_size = 0;
+
+
 /* Function headers */
 static void usage(void);
 static void disconnect_and_exit(int code) pg_attribute_noreturn();
@@ -560,7 +571,7 @@ StartLogStreamer(char *startpos, uint32 timeline, char *sysidentifier)
 	}
 	param->startptr = ((uint64) hi) << 32 | lo;
 	/* Round off to even segment position */
-	param->startptr -= XLogSegmentOffset(param->startptr, WalSegSz);
+	param->startptr -= XLogSegmentOffset(param->startptr, wal_file_size);
 
 #ifndef WIN32
 	/* Create our background pipe */
@@ -2453,8 +2464,8 @@ main(int argc, char **argv)
 		exit(1);
 	}
 
-	/* determine remote server's xlog segment size */
-	if (!RetrieveWalSegSize(conn))
+	/* determine wal and relation file and block sizes */
+	if (!FetchWalRelBlckFileSize(conn))
 		disconnect_and_exit(1);
 
 	/* Create pg_wal symlink, if required */
@@ -2486,6 +2497,7 @@ main(int argc, char **argv)
 		free(linkloc);
 	}
 
+
 	BaseBackup();
 
 	success = true;
diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c
index d801ea07fc..e7eb30c011 100644
--- a/src/bin/pg_basebackup/pg_receivewal.c
+++ b/src/bin/pg_basebackup/pg_receivewal.c
@@ -45,6 +45,16 @@ static bool synchronous = false;
 static char *replication_slot = NULL;
 static XLogRecPtr endpos = InvalidXLogRecPtr;
 
+/*
+ * Wal and relation file and block sizes
+ */
+unsigned int wal_blck_size;
+unsigned int wal_file_blck;
+unsigned long rel_file_size;
+unsigned int rel_blck_size;
+unsigned int rel_file_blck;
+unsigned long wal_file_size;
+
 
 static void usage(void);
 static DIR *get_destination_dir(char *dest_folder);
@@ -244,7 +254,7 @@ FindStreamingStart(uint32 *tli)
 		/*
 		 * Looks like an xlog file. Parse its position.
 		 */
-		XLogFromFileName(dirent->d_name, &tli, &segno, WalSegSz);
+		XLogFromFileName(dirent->d_name, &tli, &segno, wal_file_size);
 
 		/*
 		 * Check that the segment has the right size, if it's supposed to be
@@ -269,7 +279,7 @@ FindStreamingStart(uint32 *tli)
 				disconnect_and_exit(1);
 			}
 
-			if (statbuf.st_size != WalSegSz)
+			if (statbuf.st_size != wal_file_size)
 			{
 				fprintf(stderr,
 						_("%s: segment file \"%s\" has incorrect size %d, skipping\n"),
@@ -310,7 +320,7 @@ FindStreamingStart(uint32 *tli)
 			bytes_out = (buf[3] << 24) | (buf[2] << 16) |
 				(buf[1] << 8) | buf[0];
 
-			if (bytes_out != WalSegSz)
+			if (bytes_out != wal_file_size)
 			{
 				fprintf(stderr,
 						_("%s: compressed segment file \"%s\" has incorrect uncompressed size %d, skipping\n"),
@@ -351,7 +361,7 @@ FindStreamingStart(uint32 *tli)
 		if (!high_ispartial)
 			high_segno++;
 
-		XLogSegNoOffsetToRecPtr(high_segno, 0, high_ptr, WalSegSz);
+		XLogSegNoOffsetToRecPtr(high_segno, 0, high_ptr, wal_file_size);
 
 		*tli = high_tli;
 		return high_ptr;
@@ -412,7 +422,7 @@ StreamLog(void)
 	/*
 	 * Always start streaming at the beginning of a segment
 	 */
-	stream.startpos -= XLogSegmentOffset(stream.startpos, WalSegSz);
+	stream.startpos -= XLogSegmentOffset(stream.startpos, wal_file_size);
 
 	/*
 	 * Start the replication
@@ -702,9 +712,9 @@ main(int argc, char **argv)
 	if (!RunIdentifySystem(conn, NULL, NULL, NULL, &db_name))
 		disconnect_and_exit(1);
 
-	/* determine remote server's xlog segment size */
-	if (!RetrieveWalSegSize(conn))
-		disconnect_and_exit(1);
+	/* determine wal and relation file and block sizes */
+        if (!FetchWalRelBlckFileSize(conn))
+                disconnect_and_exit(1);
 
 	/*
 	 * Check that there is a database associated with connection, none should
diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c
index c7893c10ca..ec5cdc847c 100644
--- a/src/bin/pg_basebackup/pg_recvlogical.c
+++ b/src/bin/pg_basebackup/pg_recvlogical.c
@@ -62,6 +62,17 @@ static bool output_needs_fsync = false;
 static XLogRecPtr output_written_lsn = InvalidXLogRecPtr;
 static XLogRecPtr output_fsync_lsn = InvalidXLogRecPtr;
 
+/*
+ * Wal and relation file and block sizes
+ */
+unsigned int wal_blck_size;
+unsigned int wal_file_blck;
+unsigned long rel_file_size;
+unsigned int rel_blck_size;
+unsigned int rel_file_blck;
+unsigned long wal_file_size;
+
+
 static void usage(void);
 static void StreamLogicalLog(void);
 static void disconnect_and_exit(int code) pg_attribute_noreturn();
diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c
index d29b501740..2e3b2e8083 100644
--- a/src/bin/pg_basebackup/receivelog.c
+++ b/src/bin/pg_basebackup/receivelog.c
@@ -95,15 +95,15 @@ open_walfile(StreamCtl *stream, XLogRecPtr startpoint)
 	ssize_t		size;
 	XLogSegNo	segno;
 
-	XLByteToSeg(startpoint, segno, WalSegSz);
-	XLogFileName(current_walfile_name, stream->timeline, segno, WalSegSz);
+	XLByteToSeg(startpoint, segno, wal_file_size);
+	XLogFileName(current_walfile_name, stream->timeline, segno, wal_file_size);
 
 	snprintf(fn, sizeof(fn), "%s%s", current_walfile_name,
 			 stream->partial_suffix ? stream->partial_suffix : "");
 
 	/*
 	 * When streaming to files, if an existing file exists we verify that it's
-	 * either empty (just created), or a complete WalSegSz segment (in which
+	 * either empty (just created), or a complete wal_file_size segment (in which
 	 * case it has been created and padded). Anything else indicates a corrupt
 	 * file.
 	 *
@@ -120,7 +120,7 @@ open_walfile(StreamCtl *stream, XLogRecPtr startpoint)
 					progname, fn, stream->walmethod->getlasterror());
 			return false;
 		}
-		if (size == WalSegSz)
+		if (size == wal_file_size)
 		{
 			/* Already padded file. Open it for use */
 			f = stream->walmethod->open_for_write(current_walfile_name, stream->partial_suffix, 0);
@@ -152,9 +152,9 @@ open_walfile(StreamCtl *stream, XLogRecPtr startpoint)
 				errno = ENOSPC;
 			fprintf(stderr,
 					ngettext("%s: write-ahead log file \"%s\" has %d byte, should be 0 or %d\n",
-							 "%s: write-ahead log file \"%s\" has %d bytes, should be 0 or %d\n",
+							 "%s: write-ahead log file \"%s\" has %d bytes, should be 0 or %lu\n",
 							 size),
-					progname, fn, (int) size, WalSegSz);
+					progname, fn, (int) size, wal_file_size);
 			return false;
 		}
 		/* File existed and was empty, so fall through and open */
@@ -163,7 +163,7 @@ open_walfile(StreamCtl *stream, XLogRecPtr startpoint)
 	/* No file existed, so create one */
 
 	f = stream->walmethod->open_for_write(current_walfile_name,
-										  stream->partial_suffix, WalSegSz);
+										  stream->partial_suffix, wal_file_size);
 	if (f == NULL)
 	{
 		fprintf(stderr,
@@ -204,7 +204,7 @@ close_walfile(StreamCtl *stream, XLogRecPtr pos)
 
 	if (stream->partial_suffix)
 	{
-		if (currpos == WalSegSz)
+		if (currpos == wal_file_size)
 			r = stream->walmethod->close(walfile, CLOSE_NORMAL);
 		else
 		{
@@ -232,7 +232,7 @@ close_walfile(StreamCtl *stream, XLogRecPtr pos)
 	 * new node. This is in line with walreceiver.c always doing a
 	 * XLogArchiveForceDone() after a complete segment.
 	 */
-	if (currpos == WalSegSz && stream->mark_done)
+	if (currpos == wal_file_size && stream->mark_done)
 	{
 		/* writes error message if failed */
 		if (!mark_file_as_archived(stream, current_walfile_name))
@@ -660,7 +660,7 @@ ReceiveXlogStream(PGconn *conn, StreamCtl *stream)
 			 */
 			stream->timeline = newtimeline;
 			stream->startpos = stream->startpos -
-				XLogSegmentOffset(stream->startpos, WalSegSz);
+				XLogSegmentOffset(stream->startpos, wal_file_size);
 			continue;
 		}
 		else if (PQresultStatus(res) == PGRES_COMMAND_OK)
@@ -1095,7 +1095,7 @@ ProcessXLogDataMsg(PGconn *conn, StreamCtl *stream, char *copybuf, int len,
 	*blockpos = fe_recvint64(&copybuf[1]);
 
 	/* Extract WAL location for this block */
-	xlogoff = XLogSegmentOffset(*blockpos, WalSegSz);
+	xlogoff = XLogSegmentOffset(*blockpos, wal_file_size);
 
 	/*
 	 * Verify that the initial location in the stream matches where we think
@@ -1135,8 +1135,8 @@ ProcessXLogDataMsg(PGconn *conn, StreamCtl *stream, char *copybuf, int len,
 		 * If crossing a WAL boundary, only write up until we reach wal
 		 * segment size.
 		 */
-		if (xlogoff + bytes_left > WalSegSz)
-			bytes_to_write = WalSegSz - xlogoff;
+		if (xlogoff + bytes_left > wal_file_size)
+			bytes_to_write = wal_file_size - xlogoff;
 		else
 			bytes_to_write = bytes_left;
 
@@ -1166,7 +1166,7 @@ ProcessXLogDataMsg(PGconn *conn, StreamCtl *stream, char *copybuf, int len,
 		xlogoff += bytes_to_write;
 
 		/* Did we reach the end of a WAL segment? */
-		if (XLogSegmentOffset(*blockpos, WalSegSz) == 0)
+		if (XLogSegmentOffset(*blockpos, wal_file_size) == 0)
 		{
 			if (!close_walfile(stream, *blockpos))
 				/* Error message written in close_walfile() */
diff --git a/src/bin/pg_basebackup/streamutil.c b/src/bin/pg_basebackup/streamutil.c
index a57ff8f2c4..248acd92af 100644
--- a/src/bin/pg_basebackup/streamutil.c
+++ b/src/bin/pg_basebackup/streamutil.c
@@ -22,14 +22,15 @@
 #include "streamutil.h"
 
 #include "access/xlog_internal.h"
+#include "storage/md.h"
 #include "common/fe_memutils.h"
 #include "datatype/timestamp.h"
 #include "port/pg_bswap.h"
 #include "pqexpbuffer.h"
+#include "pg_control_def.h"
 
-#define ERRCODE_DUPLICATE_OBJECT  "42710"
 
-uint32		WalSegSz;
+#define ERRCODE_DUPLICATE_OBJECT  "42710"
 
 /* SHOW command for replication connection was introduced in version 10 */
 #define MINIMUM_VERSION_FOR_SHOW_CMD 100000
@@ -235,68 +236,63 @@ GetConnection(void)
 }
 
 /*
- * From version 10, explicitly set wal segment size using SHOW wal_segment_size
+ * From version 10, explicitly set wal segment size using SHOW wal_file_size
  * since ControlFile is not accessible here.
  */
 bool
-RetrieveWalSegSize(PGconn *conn)
+RetrieveServerParameterUnsignedInt(PGconn *conn, const char* name, unsigned int* value)
 {
 	PGresult   *res;
-	char		xlog_unit[3];
-	int			xlog_val,
-				multiplier = 1;
+	PQExpBufferData buf;
 
 	/* check connection existence */
 	Assert(conn != NULL);
 
+	initPQExpBuffer(&buf);
+
 	/* for previous versions set the default xlog seg size */
 	if (PQserverVersion(conn) < MINIMUM_VERSION_FOR_SHOW_CMD)
 	{
-		WalSegSz = DEFAULT_XLOG_SEG_SIZE;
+		if (strcmp(name, "rel_blck_size") == 0)
+			rel_blck_size = REL_BLCK_SIZE_DEF;
+		else if (strcmp(name, "rel_file_blck") == 0)
+			rel_file_blck = REL_FILE_BLCK_DEF;
+		else if (strcmp(name, "wal_blck_size") == 0)
+			wal_blck_size = WAL_BLCK_SIZE_DEF;
+		else if (strcmp(name, "wal_file_blck") == 0)
+			wal_file_blck = WAL_FILE_BLCK_DEF;
+		else 
+			return false;
+
 		return true;
 	}
 
-	res = PQexec(conn, "SHOW wal_segment_size");
+	printfPQExpBuffer(&buf, "SHOW %s", name);
+	res = PQexec(conn, buf.data);
 	if (PQresultStatus(res) != PGRES_TUPLES_OK)
 	{
 		fprintf(stderr, _("%s: could not send replication command \"%s\": %s\n"),
-				progname, "SHOW wal_segment_size", PQerrorMessage(conn));
+				progname, buf.data, PQerrorMessage(conn));
 
 		PQclear(res);
 		return false;
 	}
+
 	if (PQntuples(res) != 1 || PQnfields(res) < 1)
 	{
 		fprintf(stderr,
-				_("%s: could not fetch WAL segment size: got %d rows and %d fields, expected %d rows and %d or more fields\n"),
-				progname, PQntuples(res), PQnfields(res), 1, 1);
+				_("%s: could not fetch %s: got %d rows and %d fields, expected %d rows and %d or more fields\n"),
+				progname, name, PQntuples(res), PQnfields(res), 1, 1);
 
 		PQclear(res);
 		return false;
 	}
 
 	/* fetch xlog value and unit from the result */
-	if (sscanf(PQgetvalue(res, 0, 0), "%d%s", &xlog_val, xlog_unit) != 2)
+	if (sscanf(PQgetvalue(res, 0, 0), "%d", value) != 1)
 	{
-		fprintf(stderr, _("%s: WAL segment size could not be parsed\n"),
-				progname);
-		return false;
-	}
-
-	/* set the multiplier based on unit to convert xlog_val to bytes */
-	if (strcmp(xlog_unit, "MB") == 0)
-		multiplier = 1024 * 1024;
-	else if (strcmp(xlog_unit, "GB") == 0)
-		multiplier = 1024 * 1024 * 1024;
-
-	/* convert and set WalSegSz */
-	WalSegSz = xlog_val * multiplier;
-
-	if (!IsValidWalSegSize(WalSegSz))
-	{
-		fprintf(stderr,
-				_("%s: WAL segment size must be a power of two between 1MB and 1GB, but the remote server reported a value of %d bytes\n"),
-				progname, WalSegSz);
+		fprintf(stderr, _("%s: %s could not be parsed\n"),
+				progname, name);
 		return false;
 	}
 
@@ -304,6 +300,22 @@ RetrieveWalSegSize(PGconn *conn)
 	return true;
 }
 
+bool
+FetchWalRelBlckFileSize(PGconn* conn)
+{
+        if (!RetrieveServerParameterUnsignedInt(conn, "rel_blck_size", &rel_blck_size)
+                || !RetrieveServerParameterUnsignedInt(conn, "rel_file_blck", &rel_file_blck)
+                || !RetrieveServerParameterUnsignedInt(conn, "wal_blck_size", &wal_blck_size)
+                || !RetrieveServerParameterUnsignedInt(conn, "wal_blck_size", &wal_file_blck)) {
+                return false;
+	} else {
+		rel_file_size = rel_blck_size * rel_file_blck;
+		wal_file_size = wal_blck_size * wal_file_blck;
+
+                return true;
+	}
+}
+
 /*
  * Run IDENTIFY_SYSTEM through a given connection and give back to caller
  * some result information if requested:
diff --git a/src/bin/pg_basebackup/streamutil.h b/src/bin/pg_basebackup/streamutil.h
index 908fd68c2b..83e07a56c0 100644
--- a/src/bin/pg_basebackup/streamutil.h
+++ b/src/bin/pg_basebackup/streamutil.h
@@ -41,7 +41,11 @@ extern bool RunIdentifySystem(PGconn *conn, char **sysid,
 				  TimeLineID *starttli,
 				  XLogRecPtr *startpos,
 				  char **db_name);
-extern bool RetrieveWalSegSize(PGconn *conn);
+
+extern bool RetrieveServerParameterUnsignedInt(PGconn *conn,
+	const char* name, unsigned int* value);
+extern bool FetchWalRelBlckFileSize(PGconn* conn);
+
 extern TimestampTz feGetCurrentTimestamp(void);
 extern void feTimestampDifference(TimestampTz start_time, TimestampTz stop_time,
 					  long *secs, int *microsecs);
diff --git a/src/bin/pg_basebackup/walmethods.c b/src/bin/pg_basebackup/walmethods.c
index 02d368b242..7467264466 100644
--- a/src/bin/pg_basebackup/walmethods.c
+++ b/src/bin/pg_basebackup/walmethods.c
@@ -26,10 +26,14 @@
 
 #include "receivelog.h"
 #include "streamutil.h"
+#include "storage/md.h"
 
 /* Size of zlib buffer for .tar.gz */
 #define ZLIB_OUT_SIZE 4096
 
+extern unsigned int wal_blck_size;
+
+
 /*-------------------------------------------------------------------------
  * WalDirectoryMethod - write wal to a directory looking like pg_wal
  *-------------------------------------------------------------------------
@@ -118,10 +122,10 @@ dir_open_for_write(const char *pathname, const char *temp_suffix, size_t pad_to_
 		char	   *zerobuf;
 		int			bytes;
 
-		zerobuf = pg_malloc0(XLOG_BLCKSZ);
-		for (bytes = 0; bytes < pad_to_size; bytes += XLOG_BLCKSZ)
+		zerobuf = pg_malloc0(wal_blck_size);
+		for (bytes = 0; bytes < pad_to_size; bytes += wal_blck_size)
 		{
-			if (write(fd, zerobuf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+			if (write(fd, zerobuf, wal_blck_size) != wal_blck_size)
 			{
 				int			save_errno = errno;
 
@@ -499,12 +503,12 @@ tar_write(Walfile f, const void *buf, size_t count)
 static bool
 tar_write_padding_data(TarMethodFile *f, size_t bytes)
 {
-	char	   *zerobuf = pg_malloc0(XLOG_BLCKSZ);
+	char	   *zerobuf = pg_malloc0(wal_blck_size);
 	size_t		bytesleft = bytes;
 
 	while (bytesleft)
 	{
-		size_t		bytestowrite = bytesleft > XLOG_BLCKSZ ? XLOG_BLCKSZ : bytesleft;
+		size_t		bytestowrite = bytesleft > wal_blck_size ? wal_blck_size : bytesleft;
 
 		ssize_t		r = tar_write(f, zerobuf, bytestowrite);
 
diff --git a/src/bin/pg_controldata/pg_controldata.c b/src/bin/pg_controldata/pg_controldata.c
index cc73b7d6c2..5182698584 100644
--- a/src/bin/pg_controldata/pg_controldata.c
+++ b/src/bin/pg_controldata/pg_controldata.c
@@ -16,10 +16,9 @@
  */
 #define FRONTEND 1
 
-#include "postgres.h"
-
 #include <time.h>
 
+#include "postgres.h"
 #include "access/xlog.h"
 #include "access/xlog_internal.h"
 #include "catalog/pg_control.h"
@@ -290,15 +289,22 @@ main(int argc, char *argv[])
 		   ControlFile->track_commit_timestamp ? _("on") : _("off"));
 	printf(_("Maximum data alignment:               %u\n"),
 		   ControlFile->maxAlign);
+
 	/* we don't print floatFormat since can't say much useful about it */
-	printf(_("Database block size:                  %u\n"),
+	printf(_("Relation block size:                  %u\n"),
 		   ControlFile->blcksz);
-	printf(_("Blocks per segment of large relation: %u\n"),
+	printf(_("Relation blocks per file:          %u\n"),
 		   ControlFile->relseg_size);
+	printf(_("Relation file size:                   %u\n"),
+		   ControlFile->blcksz * ControlFile->relseg_size);
+
 	printf(_("WAL block size:                       %u\n"),
 		   ControlFile->xlog_blcksz);
-	printf(_("Bytes per WAL segment:                %u\n"),
+	printf(_("WAL blocks per file:                  %u\n"),
+		   ControlFile->xlog_seg_size / ControlFile->xlog_blcksz);
+	printf(_("WAL file size:                        %u\n"),
 		   ControlFile->xlog_seg_size);
+
 	printf(_("Maximum length of identifiers:        %u\n"),
 		   ControlFile->nameDataLen);
 	printf(_("Maximum columns in an index:          %u\n"),
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index 9f93385f44..b416181aaf 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -55,6 +55,7 @@
 #include "common/restricted_token.h"
 #include "storage/large_object.h"
 #include "pg_getopt.h"
+#include "pg_control_def.h"
 
 
 static ControlFileData ControlFile; /* pg_control values */
@@ -70,7 +71,17 @@ static MultiXactId set_mxid = 0;
 static MultiXactOffset set_mxoff = (MultiXactOffset) -1;
 static uint32 minXlogTli = 0;
 static XLogSegNo minXlogSegNo = 0;
-static int	WalSegSz;
+
+/*
+ * Wal and relation file and block sizes
+ */
+unsigned int wal_blck_size = 0;
+unsigned int wal_file_blck = 0;
+unsigned long rel_file_size = 0;
+unsigned int rel_blck_size = 0;
+unsigned int rel_file_blck = 0;
+unsigned long wal_file_size = 0;
+
 
 static void CheckDataVersion(void);
 static bool ReadControlFile(void);
@@ -97,10 +108,30 @@ main(int argc, char *argv[])
 	char	   *DataDir = NULL;
 	char	   *log_fname = NULL;
 	int			fd;
-
-	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_resetwal"));
+	int option_index;
+
+	static struct option long_options[] = {
+		{"commit", required_argument, NULL, 'c'},
+		{"datadir", required_argument, NULL, 'D'},
+		{"xidepoch", required_argument, NULL, 'e'},
+		{"force", no_argument, NULL, 'f'},
+		{"walfile", required_argument, NULL, 'l'},
+		{"mxid", required_argument, NULL, 'm'},
+		{"noupdate", no_argument, NULL, 'n'},
+		{"nextoid", required_argument, NULL, 'o'},
+		{"mxactoffset", required_argument, NULL, 'O'},
+		{"version", no_argument, NULL, 'V'},
+		{"nextxid", required_argument, NULL, 'x'},
+		{"rel_blck_size", required_argument, NULL, 10},
+		{"rel_file_blck", required_argument, NULL, 11},
+		{"wal_blck_size", required_argument, NULL, 12},
+		{"wal_file_blck", required_argument, NULL, 13},
+		{"help", required_argument, NULL, '?'},
+		{NULL, 0, NULL, 0}
+	};
 
 	progname = get_progname(argv[0]);
+	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_resetwal"));
 
 	if (argc > 1)
 	{
@@ -117,7 +148,7 @@ main(int argc, char *argv[])
 	}
 
 
-	while ((c = getopt(argc, argv, "c:D:e:fl:m:no:O:x:")) != -1)
+	while ((c = getopt_long(argc, argv, "c:D:e:fl:m:no:O:x:", long_options, &option_index)) != -1)
 	{
 		switch (c)
 		{
@@ -275,6 +306,22 @@ main(int argc, char *argv[])
 				log_fname = pg_strdup(optarg);
 				break;
 
+			case 10:
+				rel_blck_size = atoi(pg_strdup(optarg));
+				break;
+
+			case 11:
+				rel_file_blck = atoi(pg_strdup(optarg));
+				break;
+
+			case 12:
+				wal_blck_size = atoi(pg_strdup(optarg));
+				break;
+
+			case 13:
+				wal_file_blck = atoi(pg_strdup(optarg));
+				break;
+
 			default:
 				fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
 				exit(1);
@@ -358,7 +405,7 @@ main(int argc, char *argv[])
 		GuessControlValues();
 
 	if (log_fname != NULL)
-		XLogFromFileName(log_fname, &minXlogTli, &minXlogSegNo, WalSegSz);
+		XLogFromFileName(log_fname, &minXlogTli, &minXlogSegNo, wal_file_size);
 
 	/*
 	 * Also look at existing segment files to set up newXlogSegNo
@@ -593,14 +640,21 @@ ReadControlFile(void)
 		}
 
 		memcpy(&ControlFile, buffer, sizeof(ControlFile));
-		WalSegSz = ControlFile.xlog_seg_size;
 
-		/* return false if WalSegSz is not valid */
-		if (!IsValidWalSegSize(WalSegSz))
+		rel_blck_size = ControlFile.blcksz;
+		rel_file_blck = ControlFile.relseg_size;
+		rel_file_size = rel_blck_size * rel_file_blck;
+
+		wal_blck_size = ControlFile.xlog_blcksz;
+		wal_file_size = ControlFile.xlog_seg_size;
+		wal_file_blck = wal_file_size / wal_blck_size;
+
+		/* return false if wal_file_size is not valid */
+		if (!IsValidWalSegSize(wal_file_size))
 		{
 			fprintf(stderr,
-					_("%s: pg_control specifies invalid WAL segment size (%d bytes); proceed with caution \n"),
-					progname, WalSegSz);
+					_("%s: pg_control specifies invalid WAL segment size (%lu bytes); proceed with caution \n"),
+					progname, wal_file_size);
 			guessed = true;
 		}
 
@@ -676,10 +730,27 @@ GuessControlValues(void)
 
 	ControlFile.maxAlign = MAXIMUM_ALIGNOF;
 	ControlFile.floatFormat = FLOATFORMAT_VALUE;
-	ControlFile.blcksz = BLCKSZ;
-	ControlFile.relseg_size = RELSEG_SIZE;
-	ControlFile.xlog_blcksz = XLOG_BLCKSZ;
-	ControlFile.xlog_seg_size = DEFAULT_XLOG_SEG_SIZE;
+
+	if (rel_blck_size != 0)
+		ControlFile.blcksz = rel_blck_size;
+	else
+		ControlFile.blcksz = REL_BLCK_SIZE_DEF;
+
+	if (rel_file_blck!= 0)
+		ControlFile.relseg_size = rel_file_blck;
+	else
+		ControlFile.relseg_size = REL_FILE_BLCK_DEF;
+
+	if (wal_blck_size != 0)
+		ControlFile.xlog_blcksz = wal_blck_size;
+	else
+		ControlFile.xlog_blcksz = WAL_BLCK_SIZE_DEF;
+
+	if (wal_file_blck != 0)
+		ControlFile.xlog_seg_size = wal_file_blck * ControlFile.xlog_blcksz;
+	else
+		ControlFile.xlog_seg_size = WAL_FILE_BLCK_DEF * ControlFile.xlog_blcksz;
+
 	ControlFile.nameDataLen = NAMEDATALEN;
 	ControlFile.indexMaxKeys = INDEX_MAX_KEYS;
 	ControlFile.toast_max_chunk_size = TOAST_MAX_CHUNK_SIZE;
@@ -793,7 +864,7 @@ PrintNewControlValues(void)
 	printf(_("\n\nValues to be changed:\n\n"));
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
-				 newXlogSegNo, WalSegSz);
+				 newXlogSegNo, wal_file_size);
 	printf(_("First log segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
@@ -870,7 +941,7 @@ RewriteControlFile(void)
 	 * newXlogSegNo.
 	 */
 	XLogSegNoOffsetToRecPtr(newXlogSegNo, SizeOfXLogLongPHD,
-							ControlFile.checkPointCopy.redo, WalSegSz);
+							ControlFile.checkPointCopy.redo, wal_file_size);
 	ControlFile.checkPointCopy.time = (pg_time_t) time(NULL);
 
 	ControlFile.state = DB_SHUTDOWNED;
@@ -896,7 +967,7 @@ RewriteControlFile(void)
 	ControlFile.max_locks_per_xact = 64;
 
 	/* Now we can force the recorded xlog seg size to the right thing. */
-	ControlFile.xlog_seg_size = WalSegSz;
+	ControlFile.xlog_seg_size = wal_file_size;
 
 	/* Contents are protected with a CRC */
 	INIT_CRC32C(ControlFile.crc);
@@ -1033,7 +1104,7 @@ FindEndOfXLOG(void)
 	 * are in virgin territory.
 	 */
 	xlogbytepos = newXlogSegNo * ControlFile.xlog_seg_size;
-	newXlogSegNo = (xlogbytepos + WalSegSz - 1) / WalSegSz;
+	newXlogSegNo = (xlogbytepos + wal_file_size - 1) / wal_file_size;
 	newXlogSegNo++;
 }
 
@@ -1159,9 +1230,9 @@ WriteEmptyXLOG(void)
 	char	   *recptr;
 
 	/* Use malloc() to ensure buffer is MAXALIGNED */
-	buffer = (char *) pg_malloc(XLOG_BLCKSZ);
+	buffer = (char *) pg_malloc(wal_blck_size);
 	page = (XLogPageHeader) buffer;
-	memset(buffer, 0, XLOG_BLCKSZ);
+	memset(buffer, 0, wal_blck_size);
 
 	/* Set up the XLOG page header */
 	page->xlp_magic = XLOG_PAGE_MAGIC;
@@ -1170,8 +1241,8 @@ WriteEmptyXLOG(void)
 	page->xlp_pageaddr = ControlFile.checkPointCopy.redo - SizeOfXLogLongPHD;
 	longpage = (XLogLongPageHeader) page;
 	longpage->xlp_sysid = ControlFile.system_identifier;
-	longpage->xlp_seg_size = WalSegSz;
-	longpage->xlp_xlog_blcksz = XLOG_BLCKSZ;
+	longpage->xlp_seg_size = wal_file_size;
+	longpage->xlp_xlog_blcksz = wal_blck_size;
 
 	/* Insert the initial checkpoint record */
 	recptr = (char *) page + SizeOfXLogLongPHD;
@@ -1196,7 +1267,7 @@ WriteEmptyXLOG(void)
 
 	/* Write the first page */
 	XLogFilePath(path, ControlFile.checkPointCopy.ThisTimeLineID,
-				 newXlogSegNo, WalSegSz);
+				 newXlogSegNo, wal_file_size);
 
 	unlink(path);
 
@@ -1210,7 +1281,7 @@ WriteEmptyXLOG(void)
 	}
 
 	errno = 0;
-	if (write(fd, buffer, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+	if (write(fd, buffer, wal_blck_size) != wal_blck_size)
 	{
 		/* if write didn't set errno, assume problem is no disk space */
 		if (errno == 0)
@@ -1221,11 +1292,11 @@ WriteEmptyXLOG(void)
 	}
 
 	/* Fill the rest of the file with zeroes */
-	memset(buffer, 0, XLOG_BLCKSZ);
-	for (nbytes = XLOG_BLCKSZ; nbytes < WalSegSz; nbytes += XLOG_BLCKSZ)
+	memset(buffer, 0, wal_blck_size);
+	for (nbytes = wal_blck_size; nbytes < wal_file_size; nbytes += wal_blck_size)
 	{
 		errno = 0;
-		if (write(fd, buffer, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+		if (write(fd, buffer, wal_blck_size) != wal_blck_size)
 		{
 			if (errno == 0)
 				errno = ENOSPC;
diff --git a/src/bin/pg_rewind/copy_fetch.c b/src/bin/pg_rewind/copy_fetch.c
index f7ac5b30b5..d5d0d9f0d7 100644
--- a/src/bin/pg_rewind/copy_fetch.c
+++ b/src/bin/pg_rewind/copy_fetch.c
@@ -22,6 +22,7 @@
 #include "pg_rewind.h"
 
 #include "catalog/catalog.h"
+#include "storage/md.h"
 
 static void recurse_dir(const char *datadir, const char *path,
 			process_file_callback_t callback);
@@ -158,12 +159,11 @@ recurse_dir(const char *datadir, const char *parentpath,
 static void
 copy_file_range(const char *path, off_t begin, off_t end, bool trunc)
 {
-	char		buf[BLCKSZ];
+	char	buf[rel_blck_size];
 	char		srcpath[MAXPGPATH];
 	int			srcfd;
 
 	snprintf(srcpath, sizeof(srcpath), "%s/%s", datadir_source, path);
-
 	srcfd = open(srcpath, O_RDONLY | PG_BINARY, 0);
 	if (srcfd < 0)
 		pg_fatal("could not open source file \"%s\": %s\n",
@@ -179,6 +179,7 @@ copy_file_range(const char *path, off_t begin, off_t end, bool trunc)
 		int			readlen;
 		int			len;
 
+
 		if (end - begin > sizeof(buf))
 			len = sizeof(buf);
 		else
@@ -256,8 +257,8 @@ execute_pagemap(datapagemap_t *pagemap, const char *path)
 	iter = datapagemap_iterate(pagemap);
 	while (datapagemap_next(iter, &blkno))
 	{
-		offset = blkno * BLCKSZ;
-		copy_file_range(path, offset, offset + BLCKSZ, false);
+		offset = blkno * rel_blck_size;
+		copy_file_range(path, offset, offset + rel_blck_size, false);
 		/* Ok, this block has now been copied from new data dir to old */
 	}
 	pg_free(iter);
diff --git a/src/bin/pg_rewind/filemap.c b/src/bin/pg_rewind/filemap.c
index dd6919025d..8bbe667436 100644
--- a/src/bin/pg_rewind/filemap.c
+++ b/src/bin/pg_rewind/filemap.c
@@ -21,6 +21,7 @@
 #include "common/string.h"
 #include "catalog/pg_tablespace.h"
 #include "storage/fd.h"
+#include "storage/md.h"
 
 filemap_t  *filemap = NULL;
 
@@ -353,8 +354,8 @@ process_block_change(ForkNumber forknum, RelFileNode rnode, BlockNumber blkno)
 
 	Assert(map->array);
 
-	segno = blkno / RELSEG_SIZE;
-	blkno_inseg = blkno % RELSEG_SIZE;
+	segno = blkno / rel_file_blck;
+	blkno_inseg = blkno % rel_file_blck;
 
 	path = datasegpath(rnode, forknum, segno);
 
@@ -378,7 +379,7 @@ process_block_change(ForkNumber forknum, RelFileNode rnode, BlockNumber blkno)
 			case FILE_ACTION_NONE:
 			case FILE_ACTION_TRUNCATE:
 				/* skip if we're truncating away the modified block anyway */
-				if ((blkno_inseg + 1) * BLCKSZ <= entry->newsize)
+				if ((blkno_inseg + 1) * rel_blck_size <= entry->newsize)
 					datapagemap_add(&entry->pagemap, blkno_inseg);
 				break;
 
@@ -388,7 +389,7 @@ process_block_change(ForkNumber forknum, RelFileNode rnode, BlockNumber blkno)
 				 * skip the modified block if it is part of the "tail" that
 				 * we're copying anyway.
 				 */
-				if ((blkno_inseg + 1) * BLCKSZ <= entry->oldsize)
+				if ((blkno_inseg + 1) * rel_blck_size <= entry->oldsize)
 					datapagemap_add(&entry->pagemap, blkno_inseg);
 				break;
 
@@ -510,7 +511,7 @@ calculate_totals(void)
 
 			iter = datapagemap_iterate(&entry->pagemap);
 			while (datapagemap_next(iter, &blk))
-				map->fetch_size += BLCKSZ;
+				map->fetch_size += rel_blck_size;
 
 			pg_free(iter);
 		}
diff --git a/src/bin/pg_rewind/libpq_fetch.c b/src/bin/pg_rewind/libpq_fetch.c
index 79bec40b02..73e15d0b2a 100644
--- a/src/bin/pg_rewind/libpq_fetch.c
+++ b/src/bin/pg_rewind/libpq_fetch.c
@@ -24,6 +24,7 @@
 #include "libpq-fe.h"
 #include "catalog/catalog.h"
 #include "catalog/pg_type.h"
+#include "storage/md.h"
 #include "port/pg_bswap.h"
 
 static PGconn *conn = NULL;
@@ -33,7 +34,7 @@ static PGconn *conn = NULL;
  *
  * (This only applies to files that are copied in whole, or for truncated
  * files where we copy the tail. Relation files, where we know the individual
- * blocks that need to be fetched, are fetched in BLCKSZ chunks.)
+ * blocks that need to be fetched, are fetched in rel_blck_size chunks.)
  */
 #define CHUNKSIZE 1000000
 
@@ -510,9 +511,9 @@ execute_pagemap(datapagemap_t *pagemap, const char *path)
 	iter = datapagemap_iterate(pagemap);
 	while (datapagemap_next(iter, &blkno))
 	{
-		offset = blkno * BLCKSZ;
+		offset = blkno * rel_blck_size;
 
-		fetch_file_range(path, offset, offset + BLCKSZ);
+		fetch_file_range(path, offset, offset + rel_blck_size);
 	}
 	pg_free(iter);
 }
diff --git a/src/bin/pg_rewind/parsexlog.c b/src/bin/pg_rewind/parsexlog.c
index 0fc71d2a13..c466e2a315 100644
--- a/src/bin/pg_rewind/parsexlog.c
+++ b/src/bin/pg_rewind/parsexlog.c
@@ -69,7 +69,7 @@ extractPageMap(const char *datadir, XLogRecPtr startpoint, int tliIndex,
 
 	private.datadir = datadir;
 	private.tliIndex = tliIndex;
-	xlogreader = XLogReaderAllocate(WalSegSz, &SimpleXLogPageRead,
+	xlogreader = XLogReaderAllocate(wal_file_size, &SimpleXLogPageRead,
 									&private);
 	if (xlogreader == NULL)
 		pg_fatal("out of memory\n");
@@ -123,7 +123,7 @@ readOneRecord(const char *datadir, XLogRecPtr ptr, int tliIndex)
 
 	private.datadir = datadir;
 	private.tliIndex = tliIndex;
-	xlogreader = XLogReaderAllocate(WalSegSz, &SimpleXLogPageRead,
+	xlogreader = XLogReaderAllocate(wal_file_size, &SimpleXLogPageRead,
 									&private);
 	if (xlogreader == NULL)
 		pg_fatal("out of memory\n");
@@ -171,9 +171,9 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
 	 * previous record happens to end at a page boundary. Skip over the page
 	 * header in that case to find the next record.
 	 */
-	if (forkptr % XLOG_BLCKSZ == 0)
+	if (forkptr % wal_blck_size == 0)
 	{
-		if (XLogSegmentOffset(forkptr, WalSegSz) == 0)
+		if (XLogSegmentOffset(forkptr, wal_file_size) == 0)
 			forkptr += SizeOfXLogLongPHD;
 		else
 			forkptr += SizeOfXLogShortPHD;
@@ -181,7 +181,7 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
 
 	private.datadir = datadir;
 	private.tliIndex = tliIndex;
-	xlogreader = XLogReaderAllocate(WalSegSz, &SimpleXLogPageRead,
+	xlogreader = XLogReaderAllocate(wal_file_size, &SimpleXLogPageRead,
 									&private);
 	if (xlogreader == NULL)
 		pg_fatal("out of memory\n");
@@ -247,22 +247,22 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
 	XLogRecPtr	targetSegEnd;
 	XLogSegNo	targetSegNo;
 
-	XLByteToSeg(targetPagePtr, targetSegNo, WalSegSz);
-	XLogSegNoOffsetToRecPtr(targetSegNo + 1, 0, targetSegEnd, WalSegSz);
-	targetPageOff = XLogSegmentOffset(targetPagePtr, WalSegSz);
+	XLByteToSeg(targetPagePtr, targetSegNo, wal_file_size);
+	XLogSegNoOffsetToRecPtr(targetSegNo + 1, 0, targetSegEnd, wal_file_size);
+	targetPageOff = XLogSegmentOffset(targetPagePtr, wal_file_size);
 
 	/*
 	 * See if we need to switch to a new segment because the requested record
 	 * is not in the currently open one.
 	 */
 	if (xlogreadfd >= 0 &&
-		!XLByteInSeg(targetPagePtr, xlogreadsegno, WalSegSz))
+		!XLByteInSeg(targetPagePtr, xlogreadsegno, wal_file_size))
 	{
 		close(xlogreadfd);
 		xlogreadfd = -1;
 	}
 
-	XLByteToSeg(targetPagePtr, xlogreadsegno, WalSegSz);
+	XLByteToSeg(targetPagePtr, xlogreadsegno, wal_file_size);
 
 	if (xlogreadfd < 0)
 	{
@@ -282,7 +282,7 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
 			private->tliIndex--;
 
 		XLogFileName(xlogfname, targetHistory[private->tliIndex].tli,
-					 xlogreadsegno, WalSegSz);
+					 xlogreadsegno, wal_file_size);
 
 		snprintf(xlogfpath, MAXPGPATH, "%s/" XLOGDIR "/%s", private->datadir, xlogfname);
 
@@ -309,7 +309,7 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
 		return -1;
 	}
 
-	if (read(xlogreadfd, readBuf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+	if (read(xlogreadfd, readBuf, wal_blck_size) != wal_blck_size)
 	{
 		printf(_("could not read from file \"%s\": %s\n"), xlogfpath,
 			   strerror(errno));
@@ -319,7 +319,7 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
 	Assert(targetSegNo == xlogreadsegno);
 
 	*pageTLI = targetHistory[private->tliIndex].tli;
-	return XLOG_BLCKSZ;
+	return wal_blck_size;
 }
 
 /*
diff --git a/src/bin/pg_rewind/pg_rewind.c b/src/bin/pg_rewind/pg_rewind.c
index 6079156e80..8a525e75ac 100644
--- a/src/bin/pg_rewind/pg_rewind.c
+++ b/src/bin/pg_rewind/pg_rewind.c
@@ -27,6 +27,7 @@
 #include "common/restricted_token.h"
 #include "getopt_long.h"
 #include "storage/bufpage.h"
+#include "storage/md.h"
 
 static void usage(const char *progname);
 
@@ -44,7 +45,6 @@ static ControlFileData ControlFile_target;
 static ControlFileData ControlFile_source;
 
 const char *progname;
-int			WalSegSz;
 
 /* Configuration options */
 char	   *datadir_target = NULL;
@@ -59,6 +59,17 @@ bool		dry_run = false;
 TimeLineHistoryEntry *targetHistory;
 int			targetNentries;
 
+/*
+ * Wal and relation file and block sizes
+ */
+unsigned int wal_blck_size = 0;
+unsigned int wal_file_blck = 0;
+unsigned long rel_file_size = 0;
+unsigned int rel_blck_size = 0;
+unsigned int rel_file_blck = 0;
+unsigned long wal_file_size =0;
+
+
 static void
 usage(const char *progname)
 {
@@ -573,8 +584,8 @@ createBackupLabel(XLogRecPtr startpoint, TimeLineID starttli, XLogRecPtr checkpo
 	char		buf[1000];
 	int			len;
 
-	XLByteToSeg(startpoint, startsegno, WalSegSz);
-	XLogFileName(xlogfilename, starttli, startsegno, WalSegSz);
+	XLByteToSeg(startpoint, startsegno, wal_file_size);
+	XLogFileName(xlogfilename, starttli, startsegno, wal_file_size);
 
 	/*
 	 * Construct backup label file
@@ -632,12 +643,18 @@ digestControlFile(ControlFileData *ControlFile, char *src, size_t size)
 
 	memcpy(ControlFile, src, sizeof(ControlFileData));
 
-	/* set and validate WalSegSz */
-	WalSegSz = ControlFile->xlog_seg_size;
+	/* set and validate wal_file_size */
+	rel_blck_size = ControlFile->blcksz;
+        rel_file_blck = ControlFile->relseg_size;
+        rel_file_size = rel_blck_size * rel_file_blck;
+
+        wal_blck_size = ControlFile->xlog_blcksz;
+        wal_file_size = ControlFile->xlog_seg_size;
+        wal_file_blck = wal_file_size / wal_blck_size;
 
-	if (!IsValidWalSegSize(WalSegSz))
-		pg_fatal("WAL segment size must be a power of two between 1MB and 1GB, but the control file specifies %d bytes\n",
-				 WalSegSz);
+	if (!IsValidWalSegSize(wal_file_size))
+		pg_fatal("WAL segment size must be a power of two between 1MB and 1GB, but the control file specifies %lu bytes\n",
+				 wal_file_size);
 
 	/* Additional checks on control file */
 	checkControlFile(ControlFile);
diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c
index e6f7ef8557..beebcbe1b4 100644
--- a/src/bin/pg_test_fsync/pg_test_fsync.c
+++ b/src/bin/pg_test_fsync/pg_test_fsync.c
@@ -14,6 +14,7 @@
 
 #include "getopt_long.h"
 #include "access/xlogdefs.h"
+#include "pg_control_def.h"
 
 
 /*
@@ -22,7 +23,7 @@
  */
 #define FSYNC_FILENAME	"./pg_test_fsync.out"
 
-#define XLOG_BLCKSZ_K	(XLOG_BLCKSZ / 1024)
+#define block_size_K	(block_size / 1024)
 
 #define LABEL_FORMAT		"        %-30s"
 #define NA_FORMAT			"%21s\n"
@@ -64,7 +65,7 @@ static const char *progname;
 
 static int	secs_per_test = 5;
 static int	needs_unlink = 0;
-static char full_buf[DEFAULT_XLOG_SEG_SIZE],
+static char full_buf[WAL_FILE_SIZE_DEF],
 		   *buf,
 		   *filename = FSYNC_FILENAME;
 static struct timeval start_t,
@@ -73,13 +74,13 @@ static bool alarm_triggered = false;
 
 
 static void handle_args(int argc, char *argv[]);
-static void prepare_buf(void);
+static void prepare_buf(unsigned int block_size);
 static void test_open(void);
-static void test_non_sync(void);
-static void test_sync(int writes_per_op);
+static void test_non_sync(unsigned int block_size);
+static void test_sync(int writes_per_op, unsigned int block_size);
 static void test_open_syncs(void);
 static void test_open_sync(const char *msg, int writes_size);
-static void test_file_descriptor_sync(void);
+static void test_file_descriptor_sync(unsigned int block_size);
 
 #ifndef WIN32
 static void process_alarm(int sig);
@@ -98,6 +99,8 @@ static void die(const char *str);
 int
 main(int argc, char *argv[])
 {
+	unsigned int block_size;
+
 	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_test_fsync"));
 	progname = get_progname(argv[0]);
 
@@ -114,21 +117,21 @@ main(int argc, char *argv[])
 	pqsignal(SIGHUP, signal_cleanup);
 #endif
 
-	prepare_buf();
+	block_size = WAL_BLCK_SIZE_DEF;
+
+	prepare_buf(block_size);
 
 	test_open();
 
-	/* Test using 1 XLOG_BLCKSZ write */
-	test_sync(1);
+	/* Test using 1 block_size write */
+	test_sync(1, block_size);
 
-	/* Test using 2 XLOG_BLCKSZ writes */
-	test_sync(2);
+	/* Test using 2 block_size writes */
+	test_sync(2, block_size);
 
 	test_open_syncs();
-
-	test_file_descriptor_sync();
-
-	test_non_sync();
+	test_file_descriptor_sync(block_size);
+	test_non_sync(block_size);
 
 	unlink(filename);
 
@@ -204,15 +207,15 @@ handle_args(int argc, char *argv[])
 }
 
 static void
-prepare_buf(void)
+prepare_buf(unsigned int block_size)
 {
 	int			ops;
 
 	/* write random data into buffer */
-	for (ops = 0; ops < DEFAULT_XLOG_SEG_SIZE; ops++)
+	for (ops = 0; ops < WAL_FILE_SIZE_DEF; ops++)
 		full_buf[ops] = random();
 
-	buf = (char *) TYPEALIGN(XLOG_BLCKSZ, full_buf);
+	buf = (char *) TYPEALIGN(block_size, full_buf);
 }
 
 static void
@@ -226,8 +229,8 @@ test_open(void)
 	if ((tmpfile = open(filename, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR)) == -1)
 		die("could not open output file");
 	needs_unlink = 1;
-	if (write(tmpfile, full_buf, DEFAULT_XLOG_SEG_SIZE) !=
-		DEFAULT_XLOG_SEG_SIZE)
+	if (write(tmpfile, full_buf, WAL_FILE_SIZE_DEF) !=
+		WAL_FILE_SIZE_DEF)
 		die("write failed");
 
 	/* fsync now so that dirty buffers don't skew later tests */
@@ -238,7 +241,7 @@ test_open(void)
 }
 
 static void
-test_sync(int writes_per_op)
+test_sync(int writes_per_op, unsigned int block_size)
 {
 	int			tmpfile,
 				ops,
@@ -246,9 +249,9 @@ test_sync(int writes_per_op)
 	bool		fs_warning = false;
 
 	if (writes_per_op == 1)
-		printf(_("\nCompare file sync methods using one %dkB write:\n"), XLOG_BLCKSZ_K);
+		printf(_("\nCompare file sync methods using one %dkB write:\n"), block_size_K);
 	else
-		printf(_("\nCompare file sync methods using two %dkB writes:\n"), XLOG_BLCKSZ_K);
+		printf(_("\nCompare file sync methods using two %dkB writes:\n"), block_size_K);
 	printf(_("(in wal_sync_method preference order, except fdatasync is Linux's default)\n"));
 
 	/*
@@ -269,7 +272,7 @@ test_sync(int writes_per_op)
 		for (ops = 0; alarm_triggered == false; ops++)
 		{
 			for (writes = 0; writes < writes_per_op; writes++)
-				if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+				if (write(tmpfile, buf, block_size) != block_size)
 					die("write failed");
 			if (lseek(tmpfile, 0, SEEK_SET) == -1)
 				die("seek failed");
@@ -294,7 +297,7 @@ test_sync(int writes_per_op)
 	for (ops = 0; alarm_triggered == false; ops++)
 	{
 		for (writes = 0; writes < writes_per_op; writes++)
-			if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+			if (write(tmpfile, buf, block_size) != block_size)
 				die("write failed");
 		fdatasync(tmpfile);
 		if (lseek(tmpfile, 0, SEEK_SET) == -1)
@@ -318,7 +321,7 @@ test_sync(int writes_per_op)
 	for (ops = 0; alarm_triggered == false; ops++)
 	{
 		for (writes = 0; writes < writes_per_op; writes++)
-			if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+			if (write(tmpfile, buf, block_size) != block_size)
 				die("write failed");
 		if (fsync(tmpfile) != 0)
 			die("fsync failed");
@@ -341,7 +344,7 @@ test_sync(int writes_per_op)
 	for (ops = 0; alarm_triggered == false; ops++)
 	{
 		for (writes = 0; writes < writes_per_op; writes++)
-			if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+			if (write(tmpfile, buf, block_size) != block_size)
 				die("write failed");
 		if (pg_fsync_writethrough(tmpfile) != 0)
 			die("fsync failed");
@@ -372,7 +375,7 @@ test_sync(int writes_per_op)
 		for (ops = 0; alarm_triggered == false; ops++)
 		{
 			for (writes = 0; writes < writes_per_op; writes++)
-				if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+				if (write(tmpfile, buf, block_size) != block_size)
 
 					/*
 					 * This can generate write failures if the filesystem has
@@ -451,7 +454,7 @@ test_open_sync(const char *msg, int writes_size)
 }
 
 static void
-test_file_descriptor_sync(void)
+test_file_descriptor_sync(unsigned int block_size)
 {
 	int			tmpfile,
 				ops;
@@ -478,7 +481,7 @@ test_file_descriptor_sync(void)
 	{
 		if ((tmpfile = open(filename, O_RDWR, 0)) == -1)
 			die("could not open output file");
-		if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+		if (write(tmpfile, buf, block_size) != block_size)
 			die("write failed");
 		if (fsync(tmpfile) != 0)
 			die("fsync failed");
@@ -506,7 +509,7 @@ test_file_descriptor_sync(void)
 	{
 		if ((tmpfile = open(filename, O_RDWR, 0)) == -1)
 			die("could not open output file");
-		if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+		if (write(tmpfile, buf, block_size) != block_size)
 			die("write failed");
 		close(tmpfile);
 		/* reopen file */
@@ -520,7 +523,7 @@ test_file_descriptor_sync(void)
 }
 
 static void
-test_non_sync(void)
+test_non_sync(unsigned int block_size)
 {
 	int			tmpfile,
 				ops;
@@ -528,7 +531,7 @@ test_non_sync(void)
 	/*
 	 * Test a simple write without fsync
 	 */
-	printf(_("\nNon-sync'ed %dkB writes:\n"), XLOG_BLCKSZ_K);
+	printf(_("\nNon-sync'ed %dkB writes:\n"), block_size_K);
 	printf(LABEL_FORMAT, "write");
 	fflush(stdout);
 
@@ -537,7 +540,7 @@ test_non_sync(void)
 	{
 		if ((tmpfile = open(filename, O_RDWR, 0)) == -1)
 			die("could not open output file");
-		if (write(tmpfile, buf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
+		if (write(tmpfile, buf, block_size) != block_size)
 			die("write failed");
 		close(tmpfile);
 	}
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index ca3db1a2f6..881c31d55f 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -8,9 +8,10 @@
  */
 
 #include "postgres_fe.h"
-
 #include "pg_upgrade.h"
 
+#include "storage/md.h"
+
 #include <ctype.h>
 
 /*
@@ -556,9 +557,11 @@ check_control_data(ControlData *oldctrl,
 		pg_fatal("old and new pg_controldata alignments are invalid or do not match\n"
 				 "Likely one cluster is a 32-bit install, the other 64-bit\n");
 
-	if (oldctrl->blocksz == 0 || oldctrl->blocksz != newctrl->blocksz)
+	if (oldctrl->blocksz == 0 || oldctrl->blocksz != newctrl->blocksz) 
 		pg_fatal("old and new pg_controldata block sizes are invalid or do not match\n");
 
+	rel_blck_size = oldctrl->blocksz;
+
 	if (oldctrl->largesz == 0 || oldctrl->largesz != newctrl->largesz)
 		pg_fatal("old and new pg_controldata maximum relation segment sizes are invalid or do not match\n");
 
diff --git a/src/bin/pg_upgrade/file.c b/src/bin/pg_upgrade/file.c
index ae8d89fb66..23a1257789 100644
--- a/src/bin/pg_upgrade/file.c
+++ b/src/bin/pg_upgrade/file.c
@@ -12,6 +12,7 @@
 #include "access/visibilitymap.h"
 #include "pg_upgrade.h"
 #include "storage/bufpage.h"
+#include "storage/md.h"
 #include "storage/checksum.h"
 #include "storage/checksum_impl.h"
 
@@ -49,7 +50,7 @@ copyFile(const char *src, const char *dst,
 				 schemaName, relName, dst, strerror(errno));
 
 	/* copy in fairly large chunks for best efficiency */
-#define COPY_BUF_SIZE (50 * BLCKSZ)
+#define COPY_BUF_SIZE (50 * rel_blck_size)
 
 	buffer = (char *) pg_malloc(COPY_BUF_SIZE);
 
@@ -140,7 +141,7 @@ rewriteVisibilityMap(const char *fromfile, const char *tofile,
 	struct stat statbuf;
 
 	/* Compute number of old-format bytes per new page */
-	rewriteVmBytesPerPage = (BLCKSZ - SizeOfPageHeaderData) / 2;
+	rewriteVmBytesPerPage = (rel_blck_size - SizeOfPageHeaderData) / 2;
 
 	if ((src_fd = open(fromfile, O_RDONLY | PG_BINARY, 0)) < 0)
 		pg_fatal("error while copying relation \"%s.%s\": could not open file \"%s\": %s\n",
@@ -162,8 +163,8 @@ rewriteVisibilityMap(const char *fromfile, const char *tofile,
 	 * Malloc the work buffers, rather than making them local arrays, to
 	 * ensure adequate alignment.
 	 */
-	buffer = (char *) pg_malloc(BLCKSZ);
-	new_vmbuf = (char *) pg_malloc(BLCKSZ);
+	buffer = (char *) pg_malloc(rel_blck_size);
+	new_vmbuf = (char *) pg_malloc(rel_blck_size);
 
 	/*
 	 * Turn each visibility map page into 2 pages one by one. Each new page
@@ -180,7 +181,7 @@ rewriteVisibilityMap(const char *fromfile, const char *tofile,
 		PageHeaderData pageheader;
 		bool		old_lastblk;
 
-		if ((bytesRead = read(src_fd, buffer, BLCKSZ)) != BLCKSZ)
+		if ((bytesRead = read(src_fd, buffer, rel_blck_size)) != rel_blck_size)
 		{
 			if (bytesRead < 0)
 				pg_fatal("error while copying relation \"%s.%s\": could not read file \"%s\": %s\n",
@@ -190,7 +191,7 @@ rewriteVisibilityMap(const char *fromfile, const char *tofile,
 						 schemaName, relName, fromfile);
 		}
 
-		totalBytesRead += BLCKSZ;
+		totalBytesRead += rel_blck_size;
 		old_lastblk = (totalBytesRead == src_filesize);
 
 		/* Save the page header data */
@@ -256,7 +257,7 @@ rewriteVisibilityMap(const char *fromfile, const char *tofile,
 					pg_checksum_page(new_vmbuf, new_blkno);
 
 			errno = 0;
-			if (write(dst_fd, new_vmbuf, BLCKSZ) != BLCKSZ)
+			if (write(dst_fd, new_vmbuf, rel_blck_size) != rel_blck_size)
 			{
 				/* if write didn't set errno, assume problem is no disk space */
 				if (errno == 0)
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index c10103f0bf..7732059b44 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -39,6 +39,7 @@
 #include "pg_upgrade.h"
 #include "catalog/pg_class.h"
 #include "common/restricted_token.h"
+#include "storage/md.h"
 #include "fe_utils/string_utils.h"
 
 #ifdef HAVE_LANGINFO_H
@@ -57,6 +58,8 @@ ClusterInfo old_cluster,
 			new_cluster;
 OSInfo		os_info;
 
+unsigned int rel_blck_size;
+
 char	   *output_files[] = {
 	SERVER_LOG_FILE,
 #ifdef WIN32
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6443eda6df..2a1aee8451 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -20,6 +20,9 @@
 #include "access/xlogrecord.h"
 #include "access/xlog_internal.h"
 #include "access/transam.h"
+#include "catalog/pg_control.h"
+#include "storage/md.h"
+#include "common/controldata_utils.h"
 #include "common/fe_memutils.h"
 #include "getopt_long.h"
 #include "rmgrdesc.h"
@@ -27,7 +30,10 @@
 
 static const char *progname;
 
-static int	WalSegSz;
+unsigned int wal_blck_size;
+unsigned long wal_file_size;
+unsigned int rel_blck_size;
+
 
 typedef struct XLogDumpPrivate
 {
@@ -206,20 +212,21 @@ search_directory(const char *directory, const char *fname)
 		closedir(xldir);
 	}
 
-	/* set WalSegSz if file is successfully opened */
+	/* set wal_file_size if file is successfully opened */
 	if (fd >= 0)
 	{
-		char		buf[XLOG_BLCKSZ];
+		char		buf[wal_blck_size];
 
-		if (read(fd, buf, XLOG_BLCKSZ) == XLOG_BLCKSZ)
+		if (read(fd, buf, SizeOfXLogLongPHD) ==  SizeOfXLogLongPHD)
 		{
 			XLogLongPageHeader longhdr = (XLogLongPageHeader) buf;
 
-			WalSegSz = longhdr->xlp_seg_size;
+			wal_file_size = longhdr->xlp_seg_size;
+			wal_blck_size = longhdr->xlp_xlog_blcksz;
 
-			if (!IsValidWalSegSize(WalSegSz))
-				fatal_error("WAL segment size must be a power of two between 1MB and 1GB, but the WAL file \"%s\" header specifies %d bytes",
-							fname, WalSegSz);
+			if (!IsValidWalSegSize(wal_file_size))
+				fatal_error("WAL segment size must be a power of two between 1MB and 1GB, but the WAL file \"%s\" header specifies %lu bytes",
+							fname, wal_file_size);
 		}
 		else
 		{
@@ -237,7 +244,7 @@ search_directory(const char *directory, const char *fname)
 }
 
 /*
- * Identify the target directory and set WalSegSz.
+ * Identify the target directory and set wal_file_size.
  *
  * Try to find the file in several places:
  * if directory != NULL:
@@ -336,9 +343,9 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id,
 		int			segbytes;
 		int			readbytes;
 
-		startoff = XLogSegmentOffset(recptr, WalSegSz);
+		startoff = XLogSegmentOffset(recptr, wal_file_size);
 
-		if (sendFile < 0 || !XLByteInSeg(recptr, sendSegNo, WalSegSz))
+		if (sendFile < 0 || !XLByteInSeg(recptr, sendSegNo, wal_file_size))
 		{
 			char		fname[MAXFNAMELEN];
 			int			tries;
@@ -347,9 +354,9 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id,
 			if (sendFile >= 0)
 				close(sendFile);
 
-			XLByteToSeg(recptr, sendSegNo, WalSegSz);
+			XLByteToSeg(recptr, sendSegNo, wal_file_size);
 
-			XLogFileName(fname, timeline_id, sendSegNo, WalSegSz);
+			XLogFileName(fname, timeline_id, sendSegNo, wal_file_size);
 
 			/*
 			 * In follow mode there is a short period of time after the server
@@ -390,7 +397,7 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id,
 				int			err = errno;
 				char		fname[MAXPGPATH];
 
-				XLogFileName(fname, timeline_id, sendSegNo, WalSegSz);
+				XLogFileName(fname, timeline_id, sendSegNo, wal_file_size);
 
 				fatal_error("could not seek in log file %s to offset %u: %s",
 							fname, startoff, strerror(err));
@@ -399,8 +406,8 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id,
 		}
 
 		/* How many bytes are within this segment? */
-		if (nbytes > (WalSegSz - startoff))
-			segbytes = WalSegSz - startoff;
+		if (nbytes > (wal_file_size - startoff))
+			segbytes = wal_file_size - startoff;
 		else
 			segbytes = nbytes;
 
@@ -410,7 +417,7 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id,
 			int			err = errno;
 			char		fname[MAXPGPATH];
 
-			XLogFileName(fname, timeline_id, sendSegNo, WalSegSz);
+			XLogFileName(fname, timeline_id, sendSegNo, wal_file_size);
 
 			fatal_error("could not read from log file %s, offset %u, length %d: %s",
 						fname, sendOff, segbytes, strerror(err));
@@ -433,12 +440,12 @@ XLogDumpReadPage(XLogReaderState *state, XLogRecPtr targetPagePtr, int reqLen,
 				 XLogRecPtr targetPtr, char *readBuff, TimeLineID *curFileTLI)
 {
 	XLogDumpPrivate *private = state->private_data;
-	int			count = XLOG_BLCKSZ;
+	int			count = wal_blck_size;
 
 	if (private->endptr != InvalidXLogRecPtr)
 	{
-		if (targetPagePtr + XLOG_BLCKSZ <= private->endptr)
-			count = XLOG_BLCKSZ;
+		if (targetPagePtr + wal_blck_size <= private->endptr)
+			count = wal_blck_size;
 		else if (targetPagePtr + reqLen <= private->endptr)
 			count = private->endptr - targetPagePtr;
 		else
@@ -611,7 +618,7 @@ XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)
 						   "" : " for WAL verification",
 						   record->blocks[block_id].hole_offset,
 						   record->blocks[block_id].hole_length,
-						   BLCKSZ -
+						   rel_blck_size -
 						   record->blocks[block_id].hole_length -
 						   record->blocks[block_id].bimg_len);
 				}
@@ -1034,11 +1041,11 @@ main(int argc, char **argv)
 		close(fd);
 
 		/* parse position from file */
-		XLogFromFileName(fname, &private.timeline, &segno, WalSegSz);
+		XLogFromFileName(fname, &private.timeline, &segno, wal_file_size);
 
 		if (XLogRecPtrIsInvalid(private.startptr))
-			XLogSegNoOffsetToRecPtr(segno, 0, private.startptr, WalSegSz);
-		else if (!XLByteInSeg(private.startptr, segno, WalSegSz))
+			XLogSegNoOffsetToRecPtr(segno, 0, private.startptr, wal_file_size);
+		else if (!XLByteInSeg(private.startptr, segno, wal_file_size))
 		{
 			fprintf(stderr,
 					_("%s: start WAL location %X/%X is not inside file \"%s\"\n"),
@@ -1051,7 +1058,7 @@ main(int argc, char **argv)
 
 		/* no second file specified, set end position */
 		if (!(optind + 1 < argc) && XLogRecPtrIsInvalid(private.endptr))
-			XLogSegNoOffsetToRecPtr(segno + 1, 0, private.endptr, WalSegSz);
+			XLogSegNoOffsetToRecPtr(segno + 1, 0, private.endptr, wal_file_size);
 
 		/* parse ENDSEG if passed */
 		if (optind + 1 < argc)
@@ -1067,7 +1074,7 @@ main(int argc, char **argv)
 			close(fd);
 
 			/* parse position from file */
-			XLogFromFileName(fname, &private.timeline, &endsegno, WalSegSz);
+			XLogFromFileName(fname, &private.timeline, &endsegno, wal_file_size);
 
 			if (endsegno < segno)
 				fatal_error("ENDSEG %s is before STARTSEG %s",
@@ -1075,15 +1082,15 @@ main(int argc, char **argv)
 
 			if (XLogRecPtrIsInvalid(private.endptr))
 				XLogSegNoOffsetToRecPtr(endsegno + 1, 0, private.endptr,
-										WalSegSz);
+										wal_file_size);
 
 			/* set segno to endsegno for check of --end */
 			segno = endsegno;
 		}
 
 
-		if (!XLByteInSeg(private.endptr, segno, WalSegSz) &&
-			private.endptr != (segno + 1) * WalSegSz)
+		if (!XLByteInSeg(private.endptr, segno, wal_file_size) &&
+			private.endptr != (segno + 1) * wal_file_size)
 		{
 			fprintf(stderr,
 					_("%s: end WAL location %X/%X is not inside file \"%s\"\n"),
@@ -1107,7 +1114,7 @@ main(int argc, char **argv)
 	/* done with argument parsing, do the actual work */
 
 	/* we have everything we need, start reading */
-	xlogreader_state = XLogReaderAllocate(WalSegSz, XLogDumpReadPage,
+	xlogreader_state = XLogReaderAllocate(wal_file_size, XLogDumpReadPage,
 										  &private);
 	if (!xlogreader_state)
 		fatal_error("out of memory");
@@ -1126,7 +1133,7 @@ main(int argc, char **argv)
 	 * a segment (e.g. we were used in file mode).
 	 */
 	if (first_record != private.startptr &&
-		XLogSegmentOffset(private.startptr, WalSegSz) != 0)
+		XLogSegmentOffset(private.startptr, wal_file_size) != 0)
 		printf(ngettext("first record is after %X/%X, at %X/%X, skipping over %u byte\n",
 						"first record is after %X/%X, at %X/%X, skipping over %u bytes\n",
 						(first_record - private.startptr)),
diff --git a/src/common/controldata_utils.c b/src/common/controldata_utils.c
index f1a097a974..bba49f520d 100644
--- a/src/common/controldata_utils.c
+++ b/src/common/controldata_utils.c
@@ -28,6 +28,9 @@
 #include "common/controldata_utils.h"
 #include "port/pg_crc32c.h"
 
+static ControlFileData* __get_controlfile(const char* progname);
+
+
 /*
  * get_controlfile(char *DataDir, const char *progname, bool *crc_ok_p)
  *
@@ -102,3 +105,98 @@ get_controlfile(const char *DataDir, const char *progname, bool *crc_ok_p)
 
 	return ControlFile;
 }
+
+ControlFileData*
+__get_controlfile(const char* progname)
+{
+	char* pg_data;
+	bool crc_ok_p;
+
+	crc_ok_p = false;
+
+	pg_data = getenv("PGDATA");
+	if (pg_data) {
+		canonicalize_path(pg_data);
+		return get_controlfile(pg_data, progname, &crc_ok_p);
+	} else {
+		printf(_("No PGDATA defined.\n"));
+		return NULL;
+	}
+}
+
+
+/*
+ * Relation block size in bytes
+ */
+unsigned int get_rel_blck_size(const char* progname)
+{
+	int blcksz;
+	ControlFileData* control_data;
+
+	blcksz = 0;
+
+        control_data =  __get_controlfile(progname);
+	if (control_data != NULL) {
+		blcksz = control_data->blcksz;
+		pfree(control_data);
+	}
+
+	return blcksz;
+}
+
+/*
+ * Relaation file size in blocks
+ */
+unsigned int get_rel_file_blck(const char* progname)
+{
+	int relseg_size;
+	ControlFileData* control_data;
+
+	relseg_size = 0;
+
+        control_data =  __get_controlfile(progname);
+	if (control_data != NULL) {
+		relseg_size = control_data->relseg_size;
+		pfree(control_data);
+	}
+
+        return relseg_size;
+}
+
+/*
+ * Wal file block size in bytes
+ */
+unsigned int get_wal_blck_size(const char* progname)
+{
+	int xlog_blcksz;
+	ControlFileData* control_data;
+
+	xlog_blcksz = 0;
+
+        control_data =  __get_controlfile(progname);
+	if (control_data != NULL) {
+		xlog_blcksz = control_data->xlog_blcksz;
+		pfree(control_data);
+	}
+
+        return xlog_blcksz;
+}
+
+/*
+ * Wal file size in blocks
+ */
+unsigned int get_wal_file_blck(const char* progname)
+{
+	int wal_file_blck;
+	ControlFileData* control_data;
+
+	wal_file_blck = 0;
+
+        control_data =  __get_controlfile(progname);
+	if (control_data != NULL) {
+		wal_file_blck = control_data->xlog_seg_size / control_data->xlog_blcksz;
+		pfree(control_data);
+	}
+
+        return wal_file_blck;
+}
diff --git a/src/include/access/brin_page.h b/src/include/access/brin_page.h
index bf03a6e9f8..abaa934804 100644
--- a/src/include/access/brin_page.h
+++ b/src/include/access/brin_page.h
@@ -86,7 +86,7 @@ typedef struct RevmapContents
 } RevmapContents;
 
 #define REVMAP_CONTENT_SIZE \
-	(BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - \
+	(rel_blck_size - MAXALIGN(SizeOfPageHeaderData) - \
 	 offsetof(RevmapContents, rm_tids) - \
 	 MAXALIGN(sizeof(BrinSpecialSpace)))
 /* max num of items in the array */
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 114370c7d7..be4c79abc0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -238,7 +238,7 @@ typedef signed char GinNullCategory;
  */
 #define GinMaxItemSize \
 	Min(INDEX_SIZE_MASK, \
-		MAXALIGN_DOWN(((BLCKSZ - \
+		MAXALIGN_DOWN(((rel_blck_size - \
 						MAXALIGN(SizeOfPageHeaderData + 3 * sizeof(ItemIdData)) - \
 						MAXALIGN(sizeof(GinPageOpaqueData))) / 3)))
 
@@ -308,7 +308,7 @@ typedef signed char GinNullCategory;
 	 GinPageGetOpaque(page)->maxoff * sizeof(PostingItem))
 
 #define GinDataPageMaxDataSize	\
-	(BLCKSZ - MAXALIGN(SizeOfPageHeaderData) \
+	(rel_blck_size - MAXALIGN(SizeOfPageHeaderData) \
 	 - MAXALIGN(sizeof(ItemPointerData)) \
 	 - MAXALIGN(sizeof(GinPageOpaqueData)))
 
@@ -316,7 +316,7 @@ typedef signed char GinNullCategory;
  * List pages
  */
 #define GinListPageSize  \
-	( BLCKSZ - SizeOfPageHeaderData - MAXALIGN(sizeof(GinPageOpaqueData)) )
+	( rel_blck_size - SizeOfPageHeaderData - MAXALIGN(sizeof(GinPageOpaqueData)) )
 
 /*
  * A compressed posting list.
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index eb1c6728d4..592386a80f 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -51,14 +51,17 @@ typedef struct
 	char		tupledata[FLEXIBLE_ARRAY_MEMBER];
 } GISTNodeBufferPage;
 
-#define BUFFER_PAGE_DATA_OFFSET MAXALIGN(offsetof(GISTNodeBufferPage, tupledata))
+#define BUFFER_PAGE_DATA_OFFSET		MAXALIGN(offsetof(GISTNodeBufferPage, tupledata))
+
 /* Returns free space in node buffer page */
-#define PAGE_FREE_SPACE(nbp) (nbp->freespace)
+#define PAGE_FREE_SPACE(nbp)		(nbp->freespace)
+
 /* Checks if node buffer page is empty */
-#define PAGE_IS_EMPTY(nbp) (nbp->freespace == BLCKSZ - BUFFER_PAGE_DATA_OFFSET)
+#define PAGE_IS_EMPTY(nbp)		(nbp->freespace == rel_blck_size - BUFFER_PAGE_DATA_OFFSET)
+
 /* Checks if node buffers page don't contain sufficient space for index tuple */
-#define PAGE_NO_SPACE(nbp, itup) (PAGE_FREE_SPACE(nbp) < \
-										MAXALIGN(IndexTupleSize(itup)))
+#define PAGE_NO_SPACE(nbp, itup)	(PAGE_FREE_SPACE(nbp) < \
+						MAXALIGN(IndexTupleSize(itup)))
 
 /*
  * GISTSTATE: information needed for any GiST index operation
@@ -167,7 +170,7 @@ typedef struct GISTScanOpaqueData
 	GistNSN		curPageLSN;		/* pos in the WAL stream when page was read */
 
 	/* In a non-ordered search, returnable heap items are stored here: */
-	GISTSearchHeapItem pageData[BLCKSZ / sizeof(IndexTupleData)];
+	GISTSearchHeapItem* pageData;
 	OffsetNumber nPageData;		/* number of valid items in array */
 	OffsetNumber curPageData;	/* next item to return */
 	MemoryContext pageDataCxt;	/* context holding the fetched tuples, for
@@ -176,6 +179,9 @@ typedef struct GISTScanOpaqueData
 
 typedef GISTScanOpaqueData *GISTScanOpaque;
 
+#define SIZEOF_GIST_SEARCH_HEAP_ITEM	(sizeof(GISTSearchHeapItem) * (rel_blck_size / sizeof(IndexTupleData)))
+
+
 /* despite the name, gistxlogPage is not part of any xlog record */
 typedef struct gistxlogPage
 {
@@ -430,7 +436,7 @@ extern bool gistvalidate(Oid opclassoid);
 /* gistutil.c */
 
 #define GiSTPageSize   \
-	( BLCKSZ - SizeOfPageHeaderData - MAXALIGN(sizeof(GISTPageOpaqueData)) )
+	( rel_blck_size - SizeOfPageHeaderData - MAXALIGN(sizeof(GISTPageOpaqueData)) )
 
 #define GIST_MIN_FILLFACTOR			10
 #define GIST_DEFAULT_FILLFACTOR		90
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index e3135c1738..fd098de5a0 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -132,9 +132,12 @@ typedef struct HashScanPosData
 	int			lastItem;		/* last valid index in items[] */
 	int			itemIndex;		/* current index in items[] */
 
-	HashScanPosItem items[MaxIndexTuplesPerPage];	/* MUST BE LAST */
+	HashScanPosItem* items;				/* MUST BE LAST */
 }			HashScanPosData;
 
+#define SIZEOF_HASH_SCAN_POS_ITEM	(sizeof(HashScanPosItem) * (MaxIndexTuplesPerPage))
+
+
 #define HashScanPosIsPinned(scanpos) \
 ( \
 	AssertMacro(BlockNumberIsValid((scanpos).currPage) || \
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index b0d4c54121..a5c5bdf870 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -554,7 +554,7 @@ do { \
 
 /*
  * MaxHeapTupleSize is the maximum allowed size of a heap tuple, including
- * header and MAXALIGN alignment padding.  Basically it's BLCKSZ minus the
+ * header and MAXALIGN alignment padding.  Basically it's rel_blck_size minus the
  * other stuff that has to be on a disk page.  Since heap pages use no
  * "special space", there's no deduction for that.
  *
@@ -563,7 +563,7 @@ do { \
  * ItemIds and tuples have different alignment requirements, don't assume that
  * you can, say, fit 2 tuples of size MaxHeapTupleSize/2 on the same page.
  */
-#define MaxHeapTupleSize  (BLCKSZ - MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData)))
+#define MaxHeapTupleSize  (rel_blck_size - MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData)))
 #define MinHeapTupleSize  MAXALIGN(SizeofHeapTupleHeader)
 
 /*
@@ -578,8 +578,13 @@ do { \
  * require increases in the size of work arrays.
  */
 #define MaxHeapTuplesPerPage	\
-	((int) ((BLCKSZ - SizeOfPageHeaderData) / \
+	((int) ((rel_blck_size - SizeOfPageHeaderData) / \
 			(MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData))))
+/*
+#define MaxHeapTuplesPerPage	\
+	((int) ((8192 - SizeOfPageHeaderData) / \
+			(MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData))))
+ */
 
 /*
  * MaxAttrSize is a somewhat arbitrary upper limit on the declared size of
diff --git a/src/include/access/itup.h b/src/include/access/itup.h
index c178ae91a9..a9f8629de6 100644
--- a/src/include/access/itup.h
+++ b/src/include/access/itup.h
@@ -135,7 +135,7 @@ typedef IndexAttributeBitMapData * IndexAttributeBitMap;
  */
 #define MinIndexTupleSize MAXALIGN(sizeof(IndexTupleData) + 1)
 #define MaxIndexTuplesPerPage	\
-	((int) ((BLCKSZ - SizeOfPageHeaderData) / \
+	((int) ((rel_blck_size - SizeOfPageHeaderData) / \
 			(MAXALIGN(sizeof(IndexTupleData) + 1) + sizeof(ItemIdData))))
 
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 2d4c36d0b8..329b451255 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -325,11 +325,17 @@ typedef struct BTScanPosData
 	int			lastItem;		/* last valid index in items[] */
 	int			itemIndex;		/* current index in items[] */
 
-	BTScanPosItem items[MaxIndexTuplesPerPage]; /* MUST BE LAST */
+	BTScanPosItem		*items;	/* MUST BE LAST */
 } BTScanPosData;
 
 typedef BTScanPosData *BTScanPos;
 
+/*
+ * Memory management for items field of BTScanPosData
+ */
+#define SIZEOF_BT_SCAN_POST_ITEM	(sizeof(BTScanPosItem) * MaxIndexTuplesPerPage)
+
+
 #define BTScanPosIsPinned(scanpos) \
 ( \
 	AssertMacro(BlockNumberIsValid((scanpos).currPage) || \
@@ -395,7 +401,7 @@ typedef struct BTScanOpaqueData
 	/*
 	 * If we are doing an index-only scan, these are the tuple storage
 	 * workspaces for the currPos and markPos respectively.  Each is of size
-	 * BLCKSZ, so it can hold as much as a full page's worth of tuples.
+	 * rel_blck_size, so it can hold as much as a full page's worth of tuples.
 	 */
 	char	   *currTuples;		/* tuple storage for currPos */
 	char	   *markTuples;		/* tuple storage for markPos */
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 147f862a2b..619dd02ef7 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -75,9 +75,14 @@ typedef struct HeapScanDescData
 	/* these fields only used in page-at-a-time mode and for bitmap scans */
 	int			rs_cindex;		/* current tuple's index in vistuples */
 	int			rs_ntuples;		/* number of visible tuples on page */
-	OffsetNumber rs_vistuples[MaxHeapTuplesPerPage];	/* their offsets */
+	OffsetNumber rs_vistuples[FLEXIBLE_ARRAY_MEMBER];	/* their offsets: size is MaxHeapTuplesPerPage offsets */
 }			HeapScanDescData;
 
+#define SizeOfHeapScanDescData				\
+	(offsetof(HeapScanDescData, rs_vistuples) + 	\
+	sizeof(OffsetNumber) * (MaxHeapTuplesPerPage))
+
+
 /*
  * We use the same IndexScanDescData structure for both amgettuple-based
  * and amgetbitmap-based index scans.  Some fields are only relevant in
diff --git a/src/include/access/slru.h b/src/include/access/slru.h
index 20114c4d44..c924ae4454 100644
--- a/src/include/access/slru.h
+++ b/src/include/access/slru.h
@@ -18,7 +18,7 @@
 
 
 /*
- * Define SLRU segment size.  A page is the same BLCKSZ as is used everywhere
+ * Define SLRU segment size.  A page is the same rel_blck_size as is used everywhere
  * else in Postgres.  The segment size can be chosen somewhat arbitrarily;
  * we make it 32 pages by default, or 256Kb, i.e. 1M transactions for CLOG
  * or 64K transactions for SUBTRANS.
diff --git a/src/include/access/spgist_private.h b/src/include/access/spgist_private.h
index 1c4b321b6c..735429d459 100644
--- a/src/include/access/spgist_private.h
+++ b/src/include/access/spgist_private.h
@@ -157,9 +157,9 @@ typedef struct SpGistScanOpaqueData
 	TupleDesc	indexTupDesc;	/* if so, tuple descriptor for them */
 	int			nPtrs;			/* number of TIDs found on current page */
 	int			iPtr;			/* index for scanning through same */
-	ItemPointerData heapPtrs[MaxIndexTuplesPerPage];	/* TIDs from cur page */
-	bool		recheck[MaxIndexTuplesPerPage]; /* their recheck flags */
-	HeapTuple	reconTups[MaxIndexTuplesPerPage];	/* reconstructed tuples */
+	ItemPointerData* heapPtrs;	/* TIDs from cur page */
+	bool*		recheck;	/* their recheck flags */
+	HeapTuple*	reconTups;	/* reconstructed tuples */
 
 	/*
 	 * Note: using MaxIndexTuplesPerPage above is a bit hokey since
@@ -170,6 +170,20 @@ typedef struct SpGistScanOpaqueData
 
 typedef SpGistScanOpaqueData *SpGistScanOpaque;
 
+/*
+ * Memory management for fields of SpGistScanOpaqueData
+ */
+#define SP_GIST_SCAN_ALLOC(ptr)								\
+	(ptr)->heapPtrs = palloc0(sizeof(ItemPointerData) * MaxIndexTuplesPerPage); 	\
+	(ptr)->recheck = palloc0(sizeof(bool) * MaxIndexTuplesPerPage);			\
+	(ptr)->reconTups = palloc0(sizeof(HeapTuple) * MaxIndexTuplesPerPage)
+
+#define SP_GIST_SCAN_FREE(ptr)	\
+	pfree((ptr)->heapPtrs);	\
+	pfree((ptr)->recheck);	\
+	pfree((ptr)->reconTups)
+
+
 /*
  * This struct is what we actually keep in index->rd_amcache.  It includes
  * static configuration information as well as the lastUsedPages cache.
@@ -337,7 +351,7 @@ typedef SpGistDeadTupleData *SpGistDeadTuple;
 
 /* Page capacity after allowing for fixed header and special space */
 #define SPGIST_PAGE_CAPACITY  \
-	MAXALIGN_DOWN(BLCKSZ - \
+	MAXALIGN_DOWN(rel_blck_size - \
 				  SizeOfPageHeaderData - \
 				  MAXALIGN(sizeof(SpGistPageOpaqueData)))
 
diff --git a/src/include/access/tuptoaster.h b/src/include/access/tuptoaster.h
index fd9f83ac44..77f488eb0a 100644
--- a/src/include/access/tuptoaster.h
+++ b/src/include/access/tuptoaster.h
@@ -28,7 +28,7 @@
  * Find the maximum size of a tuple if there are to be N tuples per page.
  */
 #define MaximumBytesPerTuple(tuplesPerPage) \
-	MAXALIGN_DOWN((BLCKSZ - \
+	MAXALIGN_DOWN((rel_blck_size - \
 				   MAXALIGN(SizeOfPageHeaderData + (tuplesPerPage) * sizeof(ItemIdData))) \
 				  / (tuplesPerPage))
 
diff --git a/src/include/access/xlog_internal.h b/src/include/access/xlog_internal.h
index 7805c3c747..2ab383ff76 100644
--- a/src/include/access/xlog_internal.h
+++ b/src/include/access/xlog_internal.h
@@ -26,6 +26,7 @@
 #include "pgtime.h"
 #include "storage/block.h"
 #include "storage/relfilenode.h"
+#include "storage/md.h"
 
 
 /*
@@ -85,14 +86,15 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;
 #define XLogPageHeaderSize(hdr)		\
 	(((hdr)->xlp_info & XLP_LONG_HEADER) ? SizeOfXLogLongPHD : SizeOfXLogShortPHD)
 
-/* wal_segment_size can range from 1MB to 1GB */
+/* wal_file_size can range from 1MB to 1GB */
 #define WalSegMinSize 1024 * 1024
 #define WalSegMaxSize 1024 * 1024 * 1024
+
 /* default number of min and max wal segments */
 #define DEFAULT_MIN_WAL_SEGS 5
 #define DEFAULT_MAX_WAL_SEGS 64
 
-/* check that the given size is a valid wal_segment_size */
+/* check that the given size is a valid wal_file_size */
 #define IsPowerOf2(x) (x > 0 && ((x) & ((x)-1)) == 0)
 #define IsValidWalSegSize(size) \
 	 (IsPowerOf2(size) && \
@@ -135,7 +137,7 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;
 
 /* Check if an XLogRecPtr value is in a plausible range */
 #define XRecOffIsValid(xlrp) \
-		((xlrp) % XLOG_BLCKSZ >= SizeOfXLogShortPHD)
+		((xlrp) % wal_blck_size >= SizeOfXLogShortPHD)
 
 /*
  * The XLog directory and control file (relative to $PGDATA)
diff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h
index 3a9ebd4354..02aac3da1f 100644
--- a/src/include/access/xlogreader.h
+++ b/src/include/access/xlogreader.h
@@ -26,6 +26,7 @@
 #define XLOGREADER_H
 
 #include "access/xlogrecord.h"
+#include "storage/md.h"
 
 typedef struct XLogReaderState XLogReaderState;
 
@@ -76,14 +77,14 @@ struct XLogReaderState
 	/*
 	 * Segment size of the to-be-parsed data (mandatory).
 	 */
-	int			wal_segment_size;
+	int			wal_file_size;
 
 	/*
 	 * Data input callback (mandatory).
 	 *
 	 * This callback shall read at least reqLen valid bytes of the xlog page
 	 * starting at targetPagePtr, and store them in readBuf.  The callback
-	 * shall return the number of bytes read (never more than XLOG_BLCKSZ), or
+	 * shall return the number of bytes read (never more than wal_blck_size), or
 	 * -1 on failure.  The callback shall sleep, if necessary, to wait for the
 	 * requested bytes to become available.  The callback will not be invoked
 	 * again for the same page unless more than the returned number of bytes
@@ -146,7 +147,7 @@ struct XLogReaderState
 	 */
 
 	/*
-	 * Buffer for currently read page (XLOG_BLCKSZ bytes, valid up to at least
+	 * Buffer for currently read page (wal_blck_size bytes, valid up to at least
 	 * readLen bytes)
 	 */
 	char	   *readBuf;
@@ -194,7 +195,7 @@ struct XLogReaderState
 };
 
 /* Get a new XLogReader */
-extern XLogReaderState *XLogReaderAllocate(int wal_segment_size,
+extern XLogReaderState *XLogReaderAllocate(int wal_file_size,
 				   XLogPageReadCB pagereadfunc,
 				   void *private_data);
 
diff --git a/src/include/access/xlogrecord.h b/src/include/access/xlogrecord.h
index b53960e112..eee2dcd23c 100644
--- a/src/include/access/xlogrecord.h
+++ b/src/include/access/xlogrecord.h
@@ -112,21 +112,21 @@ typedef struct XLogRecordBlockHeader
  * contains only zero bytes.  If the length of "hole" > 0 then we have removed
  * such a "hole" from the stored data (and it's not counted in the
  * XLOG record's CRC, either).  Hence, the amount of block data actually
- * present is BLCKSZ - the length of "hole" bytes.
+ * present is rel_blck_size - the length of "hole" bytes.
  *
  * When wal_compression is enabled, a full page image which "hole" was
  * removed is additionally compressed using PGLZ compression algorithm.
  * This can reduce the WAL volume, but at some extra cost of CPU spent
  * on the compression during WAL logging. In this case, since the "hole"
  * length cannot be calculated by subtracting the number of page image bytes
- * from BLCKSZ, basically it needs to be stored as an extra information.
+ * from rel_blck_size, basically it needs to be stored as an extra information.
  * But when no "hole" exists, we can assume that the "hole" length is zero
  * and no such an extra information needs to be stored. Note that
  * the original version of page image is stored in WAL instead of the
  * compressed one if the number of bytes saved by compression is less than
  * the length of extra information. Hence, when a page image is successfully
  * compressed, the amount of block data actually present is less than
- * BLCKSZ - the length of "hole" bytes - the length of extra information.
+ * rel_blck_size - the length of "hole" bytes - the length of extra information.
  */
 typedef struct XLogRecordBlockImageHeader
 {
diff --git a/src/include/common/controldata_utils.h b/src/include/common/controldata_utils.h
index e97abe6a51..e818e5af79 100644
--- a/src/include/common/controldata_utils.h
+++ b/src/include/common/controldata_utils.h
@@ -13,5 +13,9 @@
 #include "catalog/pg_control.h"
 
 extern ControlFileData *get_controlfile(const char *DataDir, const char *progname, bool *crc_ok_p);
+extern unsigned int get_rel_blck_size(const char* progname);
+extern unsigned int get_rel_file_blck(const char* progname);
+extern unsigned int get_wal_blck_size(const char* progname);
+extern unsigned int get_wal_file_blck(const char* progname);
 
 #endif							/* COMMON_CONTROLDATA_UTILS_H */
diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h
index c5af5b96a7..b2ebd97ec5 100644
--- a/src/include/lib/simplehash.h
+++ b/src/include/lib/simplehash.h
@@ -91,6 +91,9 @@
 #define SH_INITIAL_BUCKET SH_MAKE_NAME(initial_bucket)
 #define SH_ENTRY_HASH SH_MAKE_NAME(entry_hash)
 
+#define SH_GET_ENTRY(array, index)	(SH_ELEMENT_TYPE *)((char*) array + SH_SIZEOF_ELEMENT_TYPE * index)
+
+
 /* generate forward declarations necessary to use the hash table */
 #ifdef SH_DECLARE
 
@@ -222,7 +225,7 @@ SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 newsize)
 	 * Verify that allocation of ->data is possible on this platform, without
 	 * overflowing Size.
 	 */
-	if ((((uint64) sizeof(SH_ELEMENT_TYPE)) * size) >= MaxAllocHugeSize)
+	if ((((uint64) SH_SIZEOF_ELEMENT_TYPE) * size) >= MaxAllocHugeSize)
 		elog(ERROR, "hash table too large");
 
 	/* now set size */
@@ -339,7 +342,7 @@ SH_CREATE(MemoryContext ctx, uint32 nelements, void *private_data)
 
 	SH_COMPUTE_PARAMETERS(tb, size);
 
-	tb->data = SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * tb->size);
+	tb->data = SH_ALLOCATE(tb, SH_SIZEOF_ELEMENT_TYPE * tb->size);
 
 	return tb;
 }
@@ -376,7 +379,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 	/* compute parameters for new table */
 	SH_COMPUTE_PARAMETERS(tb, newsize);
 
-	tb->data = SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * tb->size);
+	tb->data = SH_ALLOCATE(tb, SH_SIZEOF_ELEMENT_TYPE * tb->size);
 
 	newdata = tb->data;
 
@@ -389,7 +392,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 	 * consuming and frequent, that's worthwhile to optimize.
 	 *
 	 * To be able to simply move entries over, we have to start not at the
-	 * first bucket (i.e olddata[0]), but find the first bucket that's either
+	 * first bucket (i.e SH_GET_ENTRY(olddata, 0)), but find the first bucket that's either
 	 * empty, or is occupied by an entry at its optimal position. Such a
 	 * bucket has to exist in any table with a load factor under 1, as not all
 	 * buckets are occupied, i.e. there always has to be an empty bucket.  By
@@ -400,7 +403,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 	/* search for the first element in the hash that's not wrapped around */
 	for (i = 0; i < oldsize; i++)
 	{
-		SH_ELEMENT_TYPE *oldentry = &olddata[i];
+		SH_ELEMENT_TYPE *oldentry = SH_GET_ENTRY(olddata, i);
 		uint32		hash;
 		uint32		optimal;
 
@@ -424,7 +427,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 	copyelem = startelem;
 	for (i = 0; i < oldsize; i++)
 	{
-		SH_ELEMENT_TYPE *oldentry = &olddata[copyelem];
+		SH_ELEMENT_TYPE *oldentry = SH_GET_ENTRY(olddata, copyelem);
 
 		if (oldentry->status == SH_STATUS_IN_USE)
 		{
@@ -440,7 +443,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 			/* find empty element to put data into */
 			while (true)
 			{
-				newentry = &newdata[curelem];
+				newentry = SH_GET_ENTRY(newdata, curelem);
 
 				if (newentry->status == SH_STATUS_EMPTY)
 				{
@@ -451,7 +454,7 @@ SH_GROW(SH_TYPE * tb, uint32 newsize)
 			}
 
 			/* copy entry to new slot */
-			memcpy(newentry, oldentry, sizeof(SH_ELEMENT_TYPE));
+			memcpy(newentry, oldentry, SH_SIZEOF_ELEMENT_TYPE);
 		}
 
 		/* can't use SH_NEXT here, would use new size */
@@ -514,7 +517,7 @@ restart:
 		uint32		curdist;
 		uint32		curhash;
 		uint32		curoptimal;
-		SH_ELEMENT_TYPE *entry = &data[curelem];
+		SH_ELEMENT_TYPE *entry = SH_GET_ENTRY(data, curelem);
 
 		/* any empty bucket can directly be used */
 		if (entry->status == SH_STATUS_EMPTY)
@@ -561,7 +564,7 @@ restart:
 				SH_ELEMENT_TYPE *emptyentry;
 
 				emptyelem = SH_NEXT(tb, emptyelem, startelem);
-				emptyentry = &data[emptyelem];
+				emptyentry = SH_GET_ENTRY(data, emptyelem);
 
 				if (emptyentry->status == SH_STATUS_EMPTY)
 				{
@@ -596,9 +599,9 @@ restart:
 				SH_ELEMENT_TYPE *moveentry;
 
 				moveelem = SH_PREV(tb, moveelem, startelem);
-				moveentry = &data[moveelem];
+				moveentry = SH_GET_ENTRY(data, moveelem);
 
-				memcpy(lastentry, moveentry, sizeof(SH_ELEMENT_TYPE));
+				memcpy(lastentry, moveentry, SH_SIZEOF_ELEMENT_TYPE);
 				lastentry = moveentry;
 			}
 
@@ -643,7 +646,7 @@ SH_LOOKUP(SH_TYPE * tb, SH_KEY_TYPE key)
 
 	while (true)
 	{
-		SH_ELEMENT_TYPE *entry = &tb->data[curelem];
+		SH_ELEMENT_TYPE *entry = SH_GET_ENTRY(tb->data, curelem);
 
 		if (entry->status == SH_STATUS_EMPTY)
 		{
@@ -679,7 +682,7 @@ SH_DELETE(SH_TYPE * tb, SH_KEY_TYPE key)
 
 	while (true)
 	{
-		SH_ELEMENT_TYPE *entry = &tb->data[curelem];
+		SH_ELEMENT_TYPE *entry = SH_GET_ENTRY(tb->data, curelem);
 
 		if (entry->status == SH_STATUS_EMPTY)
 			return false;
@@ -705,7 +708,7 @@ SH_DELETE(SH_TYPE * tb, SH_KEY_TYPE key)
 				uint32		curoptimal;
 
 				curelem = SH_NEXT(tb, curelem, startelem);
-				curentry = &tb->data[curelem];
+				curentry = SH_GET_ENTRY(tb->data, curelem);
 
 				if (curentry->status != SH_STATUS_IN_USE)
 				{
@@ -724,7 +727,7 @@ SH_DELETE(SH_TYPE * tb, SH_KEY_TYPE key)
 				}
 
 				/* shift */
-				memcpy(lastentry, curentry, sizeof(SH_ELEMENT_TYPE));
+				memcpy(lastentry, curentry, SH_SIZEOF_ELEMENT_TYPE);
 
 				lastentry = curentry;
 			}
@@ -752,14 +755,21 @@ SH_START_ITERATE(SH_TYPE * tb, SH_ITERATOR * iter)
 	 * supported, we want to start/end at an element that cannot be affected
 	 * by elements being shifted.
 	 */
+	//fprintf(stderr, "startiter start --> tb->size = %lu\n", tb->size);
+	//fflush(stderr);
 	for (i = 0; i < tb->size; i++)
 	{
-		SH_ELEMENT_TYPE *entry = &tb->data[i];
+		SH_ELEMENT_TYPE *entry = SH_GET_ENTRY(tb->data, i);
 
 		if (entry->status != SH_STATUS_IN_USE)
 		{
 			startelem = i;
+			//fprintf(stderr, "startiter startelem = %ld\n", startelem);
+			//fflush(stderr);
 			break;
+		} else {
+			//fprintf(stderr, "startiter i = %d, entry->status = SH_STATUS_IN_USE\n", i);
+			//fflush(stderr);
 		}
 	}
 
@@ -771,6 +781,8 @@ SH_START_ITERATE(SH_TYPE * tb, SH_ITERATOR * iter)
 	 */
 	iter->cur = startelem;
 	iter->end = iter->cur;
+	//fprintf(stderr, "startiter end --> iter->cur = %d, iter->end = %d\n", iter->cur, iter->end);
+	//fflush(stderr);
 	iter->done = false;
 }
 
@@ -806,23 +818,36 @@ SH_START_ITERATE_AT(SH_TYPE * tb, SH_ITERATOR * iter, uint32 at)
 SH_SCOPE	SH_ELEMENT_TYPE *
 SH_ITERATE(SH_TYPE * tb, SH_ITERATOR * iter)
 {
+	//SH_ELEMENT_TYPE *myelem;
+
+	//fprintf(stderr, "iter start --> cur = %d, end = %d, done = %d\n", iter->cur, iter->end, iter->done);
+
+	//myelem = SH_GET_ENTRY(tb->data, 0);
+	//fprintf(stderr, "iter start ==> elem = %p, 0 -> %d\n", myelem, myelem->status);
+
+	//myelem = SH_GET_ENTRY(tb->data, 6);
+	//fprintf(stderr, "iter start ==> elem = %p, 6 -> %d\n", myelem, myelem->status);
+
 	while (!iter->done)
 	{
 		SH_ELEMENT_TYPE *elem;
 
-		elem = &tb->data[iter->cur];
+		elem = SH_GET_ENTRY(tb->data, iter->cur);
+		//fprintf(stderr, "iter loop --> cur = %d, end = %d, elem = %p, elem->status = %d, done = %d\n",
+		//		iter->cur, iter->end, elem, elem->status, iter->done);
 
 		/* next element in backward direction */
 		iter->cur = (iter->cur - 1) & tb->sizemask;
-
 		if ((iter->cur & tb->sizemask) == (iter->end & tb->sizemask))
 			iter->done = true;
 		if (elem->status == SH_STATUS_IN_USE)
 		{
+			//fprintf(stderr, "iter loop --> found element!!! elem = %p, elem->status = %d\n", elem, elem->status);
 			return elem;
 		}
 	}
 
+	//fprintf(stderr, "iter null --> cur = %d, end = %d, done = %d\n", iter->cur, iter->end, iter->done);
 	return NULL;
 }
 
@@ -851,7 +876,7 @@ SH_STAT(SH_TYPE * tb)
 		uint32		dist;
 		SH_ELEMENT_TYPE *elem;
 
-		elem = &tb->data[i];
+		elem = SH_GET_ENTRY(tb->data, i);
 
 		if (elem->status != SH_STATUS_IN_USE)
 			continue;
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index e05bc04f52..583217eea9 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -600,6 +600,7 @@ typedef struct TupleHashEntryData
 #define SH_KEY_TYPE MinimalTuple
 #define SH_SCOPE extern
 #define SH_DECLARE
+#define SH_SIZEOF_ELEMENT_TYPE	sizeof(TupleHashEntryData)
 #include "lib/simplehash.h"
 
 typedef struct TupleHashTableData
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 03dc5307e8..d80da14aba 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -557,6 +557,7 @@ extern PGDLLIMPORT Node *newNodeMacroHolder;
 
 
 #define makeNode(_type_)		((_type_ *) newNode(sizeof(_type_),T_##_type_))
+#define makeNodeSize(_type_, _size_)	((_type_ *) newNode((_size_),T_##_type_))
 #define NodeSetTag(nodeptr,t)	(((Node*)(nodeptr))->type = (t))
 
 #define IsA(nodeptr,_type_)		(nodeTag(nodeptr) == T_##_type_)
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 84d59f12b2..fe38f8de7e 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -33,15 +33,6 @@
 /* The normal alignment of `short', in bytes. */
 #undef ALIGNOF_SHORT
 
-/* Size of a disk block --- this also limits the size of a tuple. You can set
-   it bigger if you need bigger tuples (although TOAST should reduce the need
-   to have large tuples, since fields can be spread across multiple tuples).
-   BLCKSZ must be a power of 2. The maximum possible value of BLCKSZ is
-   currently 2^15 (32768). This is determined by the 15-bit widths of the
-   lp_off and lp_len fields in ItemIdData (see include/storage/itemid.h).
-   Changing BLCKSZ requires an initdb. */
-#undef BLCKSZ
-
 /* Define to the default TCP port number on which the server listens and to
    which clients will try to connect. This can be overridden at run-time, but
    it's convenient if your clients have the right default compiled in.
@@ -771,19 +762,6 @@
    your system. */
 #undef PTHREAD_CREATE_JOINABLE
 
-/* RELSEG_SIZE is the maximum number of blocks allowed in one disk file. Thus,
-   the maximum size of a single file is RELSEG_SIZE * BLCKSZ; relations bigger
-   than that are divided into multiple files. RELSEG_SIZE * BLCKSZ must be
-   less than your OS' limit on file size. This is often 2 GB or 4GB in a
-   32-bit operating system, unless you have large file support enabled. By
-   default, we make the limit 1 GB to avoid any possible integer-overflow
-   problems within the OS. A limit smaller than necessary only means we divide
-   a large relation into more chunks than necessary, so it seems best to err
-   in the direction of a small limit. A power-of-2 value is recommended to
-   save a few cycles in md.c, but is not absolutely required. Changing
-   RELSEG_SIZE requires an initdb. */
-#undef RELSEG_SIZE
-
 /* The size of `long', as computed by sizeof. */
 #undef SIZEOF_LONG
 
@@ -898,15 +876,6 @@
 # endif
 #endif
 
-/* Size of a WAL file block. This need have no particular relation to BLCKSZ.
-   XLOG_BLCKSZ must be a power of 2, and if your system supports O_DIRECT I/O,
-   XLOG_BLCKSZ must be a multiple of the alignment requirement for direct-I/O
-   buffers, else direct I/O may fail. Changing XLOG_BLCKSZ requires an initdb.
-   */
-#undef XLOG_BLCKSZ
-
-
-
 /* Number of bits in a file offset, on hosts where this is settable. */
 #undef _FILE_OFFSET_BITS
 
diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h
index 6f2238b330..51257b0a86 100644
--- a/src/include/pg_config_manual.h
+++ b/src/include/pg_config_manual.h
@@ -13,12 +13,6 @@
  *------------------------------------------------------------------------
  */
 
-/*
- * This is default value for wal_segment_size to be used at initdb when run
- * without --walsegsize option. Must be a valid segment size.
- */
-#define DEFAULT_XLOG_SEG_SIZE	(16*1024*1024)
-
 /*
  * Maximum length for identifiers (e.g. table names, column names,
  * function names).  Names actually are limited to one less byte than this,
@@ -33,7 +27,7 @@
  *
  * The minimum value is 8 (GIN indexes use 8-argument support functions).
  * The maximum possible value is around 600 (limited by index tuple size in
- * pg_proc's index; BLCKSZ larger than 8K would allow more).  Values larger
+ * pg_proc's index; rel_blck_size larger than 8K would allow more).  Values larger
  * than needed will waste memory and processing time, but do not directly
  * cost disk space.
  *
diff --git a/src/include/pg_control_def.h b/src/include/pg_control_def.h
new file mode 100644
index 0000000000..cf23bd23f1
--- /dev/null
+++ b/src/include/pg_control_def.h
@@ -0,0 +1,44 @@
+#ifndef PG_CONTROL_DEF_H
+#define PG_CONTROL_DEF_H
+
+#define KB                      1024
+#define MB                      (1024 * 1024)
+#define GB                      (1024 * 1024 * 1024)
+
+/*
+ * Relation definitions
+ *
+ * Relation block file size must be a power of 2. Maximum value is 2 ^ 15.
+ * This is determined by the 15-bit widths of the lp_off and lp_len fields 
+ * in ItemIdData (see include/storage/itemid.h).
+ */
+#define REL_BLCK_SIZE_MIN       KB				/* in bytes */
+#define REL_BLCK_SIZE_DEF       (8 * KB)			/* in bytes */
+#define REL_BLCK_SIZE_MAX       (32 * KB)			/* in bytes */
+
+#define REL_FILE_SIZE_MIN       GB				/* in bytes */
+#define REL_FILE_SIZE_DEF       (2 * (unsigned long) GB)	/* in bytes */
+#define REL_FILE_SIZE_MAX       (64 * (unsigned long) GB)	/* in bytes */
+
+/* Below are based on above 2 series of block size and segment size */
+#define REL_FILE_BLCK_MIN	1024				/* in blocks. Makes segment size between 1MB and 32MB */
+#define REL_FILE_BLCK_DEF	131072          		/* in blocks. Makes segment size between 128MB and 1GB (blck_size = 8KB) */
+#define REL_FILE_BLCK_MAX	2097152        			/* in blocks. 2GB */
+
+/*
+ * Wal definitions
+ */
+#define WAL_BLCK_SIZE_MIN       KB				/* in bytes */
+#define WAL_BLCK_SIZE_DEF       (8 * KB)			/* in bytes */
+#define WAL_BLCK_SIZE_MAX       (64 * KB)			/* in bytes */
+
+#define WAL_FILE_SIZE_MIN       MB              		/* in bytes */
+#define WAL_FILE_SIZE_DEF       (16 * MB)      			/* in bytes */
+#define WAL_FILE_SIZE_MAX       (unsigned long) GB		/* in bytes */
+
+/* Below are based on above 2 series of block size and segment size */
+#define WAL_FILE_BLCK_MIN	16				/* in blocks. 1MB / 64 KB */
+#define WAL_FILE_BLCK_DEF	2048				/* in blocks. 16 MB / 8 KB */
+#define WAL_FILE_BLCK_MAX	1048576				/* in blocks. 1GB / 1KB */
+
+#endif
diff --git a/src/include/storage/bufmgr.h b/src/include/storage/bufmgr.h
index 98b63fc5ba..df89d00c98 100644
--- a/src/include/storage/bufmgr.h
+++ b/src/include/storage/bufmgr.h
@@ -130,7 +130,7 @@ extern PGDLLIMPORT int32 *LocalRefCount;
 	BufferIsLocal(buffer) ? \
 		LocalBufferBlockPointers[-(buffer) - 1] \
 	: \
-		(Block) (BufferBlocks + ((Size) ((buffer) - 1)) * BLCKSZ) \
+		(Block) (BufferBlocks + ((Size) ((buffer) - 1)) * rel_blck_size) \
 )
 
 /*
@@ -147,7 +147,7 @@ extern PGDLLIMPORT int32 *LocalRefCount;
 #define BufferGetPageSize(buffer) \
 ( \
 	AssertMacro(BufferIsValid(buffer)), \
-	(Size)BLCKSZ \
+	(Size)rel_blck_size \
 )
 
 /*
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index 50c72a3c8d..8ccdfc239b 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -18,6 +18,7 @@
 #include "storage/block.h"
 #include "storage/item.h"
 #include "storage/off.h"
+#include "storage/md.h"
 
 /*
  * A postgres disk page is an abstraction layered on top of a postgres
@@ -251,7 +252,7 @@ typedef PageHeaderData *PageHeader;
  * PageSizeIsValid
  *		True iff the page size is valid.
  */
-#define PageSizeIsValid(pageSize) ((pageSize) == BLCKSZ)
+#define PageSizeIsValid(pageSize) ((pageSize) == rel_blck_size)
 
 /*
  * PageGetPageSize
@@ -309,7 +310,7 @@ static inline bool
 PageValidateSpecialPointer(Page page)
 {
 	Assert(PageIsValid(page));
-	Assert(((PageHeader) (page))->pd_special <= BLCKSZ);
+	Assert(((PageHeader) (page))->pd_special <= rel_blck_size);
 	Assert(((PageHeader) (page))->pd_special >= SizeOfPageHeaderData);
 
 	return true;
diff --git a/src/include/storage/checksum_impl.h b/src/include/storage/checksum_impl.h
index bffd061de8..0103a7b3fb 100644
--- a/src/include/storage/checksum_impl.h
+++ b/src/include/storage/checksum_impl.h
@@ -193,7 +193,7 @@ pg_checksum_page(char *page, BlockNumber blkno)
 	 */
 	save_checksum = phdr->pd_checksum;
 	phdr->pd_checksum = 0;
-	checksum = pg_checksum_block(page, BLCKSZ);
+	checksum = pg_checksum_block(page, rel_blck_size);
 	phdr->pd_checksum = save_checksum;
 
 	/* Mix in the block number to detect transposed pages */
diff --git a/src/include/storage/fsm_internals.h b/src/include/storage/fsm_internals.h
index 722e649123..3a2ae45941 100644
--- a/src/include/storage/fsm_internals.h
+++ b/src/include/storage/fsm_internals.h
@@ -48,10 +48,10 @@ typedef FSMPageData *FSMPage;
  * Number of non-leaf and leaf nodes, and nodes in total, on an FSM page.
  * These definitions are internal to fsmpage.c.
  */
-#define NodesPerPage (BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - \
+#define NodesPerPage (rel_blck_size - MAXALIGN(SizeOfPageHeaderData) - \
 					  offsetof(FSMPageData, fp_nodes))
 
-#define NonLeafNodesPerPage (BLCKSZ / 2 - 1)
+#define NonLeafNodesPerPage (rel_blck_size / 2 - 1)
 #define LeafNodesPerPage (NodesPerPage - NonLeafNodesPerPage)
 
 /*
@@ -68,5 +68,6 @@ extern uint8 fsm_get_max_avail(Page page);
 extern bool fsm_set_avail(Page page, int slot, uint8 value);
 extern bool fsm_truncate_avail(Page page, int nslots);
 extern bool fsm_rebuild_page(Page page);
+extern void fsm_init(void);
 
 #endif							/* FSM_INTERNALS_H */
diff --git a/src/include/storage/large_object.h b/src/include/storage/large_object.h
index 01d0985b44..3c86b1ded6 100644
--- a/src/include/storage/large_object.h
+++ b/src/include/storage/large_object.h
@@ -54,7 +54,7 @@ typedef struct LargeObjectDesc
 /*
  * Each "page" (tuple) of a large object can hold this much data
  *
- * We could set this as high as BLCKSZ less some overhead, but it seems
+ * We could set this as high as rel_blck_size less some overhead, but it seems
  * better to make it a smaller value, so that not as much space is used
  * up when a page-tuple is updated.  Note that the value is deliberately
  * chosen large enough to trigger the tuple toaster, so that we will
@@ -67,7 +67,7 @@ typedef struct LargeObjectDesc
  *
  * NB: Changing LOBLKSIZE requires an initdb.
  */
-#define LOBLKSIZE		(BLCKSZ / 4)
+#define LOBLKSIZE		(rel_blck_size / 4)
 
 /*
  * Maximum length in bytes for a large object.  To make this larger, we'd
diff --git a/src/include/storage/md.h b/src/include/storage/md.h
new file mode 100644
index 0000000000..4d1e2d0318
--- /dev/null
+++ b/src/include/storage/md.h
@@ -0,0 +1,12 @@
+#ifndef MD_H
+#define MD_H
+
+extern unsigned int rel_blck_size;
+extern unsigned int rel_file_blck;
+extern unsigned long rel_file_size;
+
+extern unsigned int wal_blck_size;
+extern unsigned int wal_file_blck;
+extern unsigned long wal_file_size;
+
+#endif
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index 7228808b94..74fada3aa9 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -25,7 +25,7 @@ typedef uint16 OffsetNumber;
 
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
-#define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
+#define MaxOffsetNumber			((OffsetNumber) (rel_blck_size / sizeof(ItemIdData)))
 #define OffsetNumberMask		(0xffff)	/* valid uint16 bits */
 
 /* ----------------
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 68fd6fbd54..e1d81ef9a4 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -307,14 +307,14 @@ typedef struct StdRdOptions
  *		Returns the relation's desired space usage per page in bytes.
  */
 #define RelationGetTargetPageUsage(relation, defaultff) \
-	(BLCKSZ * RelationGetFillFactor(relation, defaultff) / 100)
+	(rel_blck_size * RelationGetFillFactor(relation, defaultff) / 100)
 
 /*
  * RelationGetTargetPageFreeSpace
  *		Returns the relation's desired freespace per page in bytes.
  */
 #define RelationGetTargetPageFreeSpace(relation, defaultff) \
-	(BLCKSZ * (100 - RelationGetFillFactor(relation, defaultff)) / 100)
+	(rel_blck_size * (100 - RelationGetFillFactor(relation, defaultff)) / 100)
 
 /*
  * RelationIsUsedAsCatalogTable
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8412ee8160..ff0e4c530c 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -494,6 +494,11 @@ struct pg_conn
 
 	/* Buffer for receiving various parts of messages */
 	PQExpBufferData workBuffer; /* expansible string */
+
+	unsigned int rel_blck_size;
+	unsigned int rel_file_blck;
+	unsigned int wal_blck_size;
+	unsigned int wal_file_blck;
 };
 
 /* PGcancel stores all data necessary to cancel a connection. A copy of this
#2Alvaro Herrera
alvherre@alvh.no-ip.org
In reply to: Remi Colinet (#1)
Re: [Patch v2] Make block and file size for WAL and relations defined at cluster creation

Remi Colinet wrote:

Hello,

This is version 2 of the patch to make the file and block sizes for WAL and
relations, run-time configurable at initdb.

I don't think this works, since we have a rule that pallocs are
prohibited within critical section and I see that your patch changes
some stack-allocated variables to palloc'ed. For example I think the
heap_page_prune changes should break some test or other.

This patch is too massive to review.

--
�lvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#3Remi Colinet
remi.colinet@gmail.com
In reply to: Alvaro Herrera (#2)
Re: [Patch v2] Make block and file size for WAL and relations defined at cluster creation

2018-01-03 23:11 GMT+01:00 Alvaro Herrera <alvherre@alvh.no-ip.org>:

Remi Colinet wrote:

Hello,

This is version 2 of the patch to make the file and block sizes for WAL

and

relations, run-time configurable at initdb.

I don't think this works, since we have a rule that pallocs are
prohibited within critical section and I see that your patch changes
some stack-allocated variables to palloc'ed. For example I think the
heap_page_prune changes should break some test or other.

Thank you for the head up.

For heap_page_prune() function, the critical section starts after the
palloc() call and ends before the pfree().
Unless critical sections can be nested, we are outside such section.

For the other palloc()/pfree() uses to replace the stack allocation, either
we already have palloc()/pfree() call.
The changes consist of:

-       page = (Page) palloc(BLCKSZ);
+       page = (Page) palloc(rel_blck_size);

Only one change could be suspected. This is for the async.c command.
But the change is also done outside of a critical section.

This patch is too massive to review.

I understand the point.
If the patch is clean enough and does not show any regression, I will split
it into smaller parts.

Regards
Remi

Show quoted text

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services