Extending to 32K row limit
I know that I've seen this answer before but can't seem to find it for
7.0.2 in the archives. Which file(s) need to be changed to have Postgres
default to 32K size row limits rather than 8K? Has anyone run into any
horror stories after going to 32K?
Thanks.
-Tony
p.s. Could the procedure for increasing to 32K rows be added to the FAQ?
(Hopefully, it won't be necessary post-TOAST).
At 12:24 PM 8/8/2000, G. Anthony Reina wrote:
I know that I've seen this answer before but can't seem to find it for
7.0.2 in the archives. Which file(s) need to be changed to have Postgres
default to 32K size row limits rather than 8K? Has anyone run into any
horror stories after going to 32K?
I've been running it for a while and fairly heavily without any problems...
In src/include/config.h, modify the following section AFTER running configure.
/*
* Size of a disk block --- currently, this limits the size of a tuple.
* You can set it bigger if you need bigger tuples.
*/
/* currently must be <= 32k bjm */
#define BLCKSZ 8192
change to
#define BLCKSZ 32768
This has worked for me....
-
- Thomas Swan
- Graduate Student - Computer Science
- The University of Mississippi
-
- "People can be categorized into two fundamental
- groups, those that divide people into two groups
- and those that don't."
At 12:26 PM 8/8/00 -0500, Thomas Swan wrote:
I've been running it for a while and fairly heavily without any problems...
Folks using the OpenACS toolkit, which includes the AOLserver site run by
AOL (did y'all know that's a Postgres site now? - just the AOLserver site,
not all of AOL, don't get TOO excited), run with a 16KB blocksize if they
follow our instructions.
I've been running a couple of sites like that.
Zero problems.
- Don Baccus, Portland OR <dhogaza@pacifier.com>
Nature photos, on-line guides, Pacific Northwest
Rare Bird Alert Service and other goodies at
http://donb.photo.net.
At 11:17 AM 8/8/00 -0700, G. Anthony Reina wrote:
Thanks Don. One more question: Does Postgres set aside an entire 8 K (or 32 K) of
hard disk space for the row, or does it just use what's needed to store the
information? For example, if I only have one integer value in a row, does Postgres
set aside 8K of hard drive space or just sizeof(int) space (with some pointer where
other values in the row could be placed on the disk)?
No, it does not allocate a fixed 8K (or 32K) block per row. The size of a
row is dependent on the data stored within the row. Each row also contains
a header of modest length.
- Don Baccus, Portland OR <dhogaza@pacifier.com>
Nature photos, on-line guides, Pacific Northwest
Rare Bird Alert Service and other goodies at
http://donb.photo.net.
I know that I've seen this answer before but can't seem to find it for
7.0.2 in the archives. Which file(s) need to be changed to have Postgres
default to 32K size row limits rather than 8K? Has anyone run into any
horror stories after going to 32K?
Bumping it to 32K on AIX 4.1 broke the disk drivers here, so I would say it
depends on your platform. Going to 16K worked fine, but after the jump to
32K, some fsck'ing was required to fix up our drives.
The problem was definitely in AIX, since many other platforms have reported
no problems with the 32K setting. If another user has bumped it up
successfully on the same platform as yours, then I'd feel confident in
doing it. If you don't get a reply to that effect, or can't find anyone in the
archives who has done it, I'd recommend trying 32K on a test
machine first.
Just my $.02 worth after trying it on AIX 4.1.
Darren
Thanks Don. One more question: Does Postgres set aside an entire 8 K (or 32 K) of
hard disk space for the row, or does it just use what's needed to store the
information? For example, if I only have one integer value in a row, does Postgres
set aside 8K of hard drive space or just sizeof(int) space (with some pointer where
other values in the row could be placed on the disk)?
I just wanted to make sure that my old data at 8 K wasn't going to take up 4 times as
much hard drive space after the 32K conversion.
Thanks.
-Tony
--
///////////////////////////////////////////////////
// G. Anthony Reina, MD //
// The Neurosciences Institute //
// 10640 John Jay Hopkins Drive //
// San Diego, CA 92121 //
// Phone: (858) 626-2132 //
// FAX: (858) 626-2199 //
///////////////////////////////////////////////////
Thomas,
I've re-done my database with the 32K tuple limit-- looks good.
However, I seem to be having trouble with binary cursors. I think it may
be with the number of bytes in the tuple header (used to be 16 bytes
with the 8K limit). I've tried 16, 32, and 64, but haven't seemed to
find it. Have you used binary cursors with this setup?
Thanks.
-Tony
Sorry. I just figured out it was an endianness problem rather than a header
size problem. Works fine now. Looks like 16 is still the magic number.
Please disregard the last question.
-Tony
"G. Anthony Reina" wrote:
Show quoted text
Thomas,
I've re-done my database with the 32K tuple limit-- looks good.
However, I seem to be having trouble with binary cursors. I think it may
be with the number of bytes in the tuple header (used to be 16 bytes
with the 8K limit). I've tried 16, 32, and 64, but haven't seemed to
find it. Have you used binary cursors with this setup?Thanks.
-TonyThomas Swan wrote:
At 12:24 PM 8/8/2000, G. Anthony Reina wrote:
I know that I've seen this answer before but can't seem to find it
for
7.0.2 in the archives. Which file(s) need to be changed to have
Postgres
default to 32K size row limits rather than 8K? Has anyone run into
any
horror stories after going to 32K?I've been running it for a while and fairly heavily without any
problems...in src/include/config.h modify the following section AFTER running
configure./*
* Size of a disk block --- currently, this limits the size of a
tuple.
* You can set it bigger if you need bigger tuples.
*/
/* currently must be <= 32k bjm */
#define BLCKSZ 8192change to
#define BLCKSZ 32768
This has worked for me....
-
- Thomas Swan
- Graduate Student - Computer Science
- The University of Mississippi
-
- "People can be categorized into two fundamental
- groups, those that divide people into two groups
- and those that don't."