postgres limitation
Hi,
Is there any document about the limitations of Postgres,
like the total size of one table
the max row number of a table
the max size of a row...
Thanks
Feng
Hi,
The FAQ says:
_________________________________________________
4.6) What is the maximum size for a row, table,
database?
These are the limits:
Maximum size for a database?
unlimited (60GB databases exist)
Maximum size for a table?
unlimited on all operating systems
Maximum size for a row?
unlimited in 7.1 and later
Maximum number of rows in a table?
unlimited
Maximum number of columns in a table?
unlimited
Maximum number of indexes on a table?
unlimited
Of course, these are not actually unlimited, but
limited to available disk space.
_________________________________________________
I would like to add:
In versions earlier than 7.1 there is a limit of
8K per row, configurable up to 32K per row at compile time.
regards,
Rumen
Yes, the FAQ was just updated with 7.1 near release.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce, I think section 4.6 of the FAQ is a tad on the short and overly
optimistic side. Here's a set of more precise statements ...
4.6) What is the maximum size for a row, table, database?
Maximum size for a database?
Effectively unlimited, although you may see performance problems with
more than a few thousand tables in a database, depending on how
gracefully your filesystem copes with directories containing many files.
Maximum size for a table?
2G blocks, hence 16 to 64 terabytes depending on the BLCKSZ
configuration constant. (If someone were to run around and make sure
all the block-number arithmetic is unsigned, we could claim 4G blocks,
but I think it's not all unsigned now...)
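As a back-of-the-envelope check of the figures above, the limit is just the block-number range times the block size (a rough sketch; the BLCKSZ values are the standard 8K default and 32K compile-time maximum):

```python
# Max table size = number of addressable blocks * block size.
# Block numbers are effectively signed 32-bit, hence 2**31 blocks;
# unsigned arithmetic would double that to 2**32.

def max_table_size_bytes(blcksz: int, unsigned: bool = False) -> int:
    """Maximum table size for a given block size and block-number signedness."""
    max_blocks = 2**32 if unsigned else 2**31
    return max_blocks * blcksz

TB = 1024**4
assert max_table_size_bytes(8192) == 16 * TB     # default 8K blocks -> 16 TB
assert max_table_size_bytes(32768) == 64 * TB    # 32K blocks -> 64 TB
assert max_table_size_bytes(8192, unsigned=True) == 32 * TB
```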
Maximum size for a row?
See limits on field size and number of columns.
Maximum size for an individual field value?
Field values are limited to 1GB, and in practice are more tightly
limited by memory/swap space available to a backend; a field value that
is a large fraction of the maximum process memory size will probably
cause out-of-memory failures.
Maximum number of columns in a table?
1600. In practice probably quite a bit less, even with TOAST, since the
master tuple still has to fit in a block. If all the columns are large
(toastable) then at most you could fit about 250 columns with BLCKSZ=8K,
since an out-of-line TOAST value pointer takes 32 bytes. On the other
hand, 1600 int4 columns would fit easily.
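The rough arithmetic behind those two figures can be sketched as follows (the 52-byte overhead is an assumption inferred from the 8140-byte usable-space figure that 7.0-era "Tuple is too big" errors report for 8K blocks, not an exact page-layout accounting):

```python
# Why ~250 all-toasted columns but 1600 int4 columns fit in one 8K block.

BLCKSZ = 8192
PAGE_OVERHEAD = 52    # assumed header overhead: 8192 - 8140 usable bytes
TOAST_POINTER = 32    # bytes per out-of-line TOAST value pointer
INT4 = 4              # bytes per int4 column

usable = BLCKSZ - PAGE_OVERHEAD
assert usable == 8140
assert usable // TOAST_POINTER == 254    # "about 250" toasted columns
assert usable // INT4 >= 1600            # 1600 int4 columns fit easily
```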
Maximum number of rows in a table?
No specific limit. Note however that the COUNT() function currently
uses an int4 counter, so will give bogus results for more than 2G rows.
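The "bogus results" are ordinary signed 32-bit wraparound, which can be illustrated with a small sketch (the helper below just mimics an int4 register):

```python
# A signed 32-bit counter wraps to negative once it passes 2**31 - 1,
# which is what an int4 COUNT() accumulator would do past ~2G rows.

def int4_wrap(n: int) -> int:
    """Value n as it would appear in a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

assert int4_wrap(2**31 - 1) == 2147483647    # largest representable count
assert int4_wrap(2**31) == -2147483648       # one more row wraps negative
```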
Maximum number of indexes on a table?
No limit.
regards, tom lane
OK, how is this?
These are the limits:
Maximum size for a database? unlimited (60GB databases exist)
Maximum size for a table? 64 TB on all operating systems
Maximum size for a row? unlimited in 7.1 and later
Maximum size for a field? 1GB in 7.1 and later
Maximum number of rows in a table? unlimited
Maximum number of columns in a table? 1600
Maximum number of indexes on a table? unlimited
Of course, these are not actually unlimited, but limited to
available disk space and memory/swap space. Performance may
suffer when these values get unusually large.
How about the full answer? I think Tom did a fantastic job of writing it up;
it'd be a shame to let it go to waste for the sake of brevity. :(
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker <scrappy@hub.org> writes:
How about the full answer? I think Tom did a fantastic job of writing it up;
it'd be a shame to let it go to waste for the sake of brevity. :(
Perhaps someone would like to polish the answer into a section in the
Administrator's Guide? Then the FAQ answer could stay as short as Bruce
likes, but point to the admin guide for more info ...
regards, tom lane
Can I get some feedback from the group? Should a long or short version
be used in the FAQ?
Put both in,
the short for the managers who want to compare postgresql to
<insert generic brand sql server here>
and the long for the techies
Andrew
I updated the maximum number of columns:
Maximum size for a database? unlimited (60GB databases exist)
Maximum size for a table? 64 TB on all operating systems
Maximum size for a row? unlimited in 7.1 and later
Maximum size for a field? 1GB in 7.1 and later
Maximum number of rows in a table? unlimited
Maximum number of columns in a table? 250-1600 depending on column types
Maximum number of indexes on a table? unlimited
Maximum size for a table?
2G blocks, hence 16 to 64 terabytes depending on the BLCKSZ
configuration constant. (If someone were to run around and make sure
all the block-number arithmetic is unsigned, we could claim 4G blocks,
but I think it's not all unsigned now...)
Added to TODO:
* Make sure all block numbers are unsigned to increase maximum table size
I think I have digested most of this information. I added it to the
FAQ, generalized it in the text of the FAQ, or added it to the TODO
list.
Thanks, Tom.
New FAQ text:
---------------------------------------------------------------------------
These are the limits:
Maximum size for a database? unlimited (60GB databases exist)
Maximum size for a table? 16 TB
Maximum size for a row? unlimited in 7.1 and later
Maximum size for a field? 1GB in 7.1 and later
Maximum number of rows in a table? unlimited
Maximum number of columns in a table? 250-1600 depending on column types
Maximum number of indexes on a table? unlimited
Of course, these are not actually unlimited, but limited to
available disk space and memory/swap space. Performance may
suffer when these values get unusually large.
The maximum table size of 16TB does not require large file
support from the operating system. Large tables are stored
as multiple 1GB files.
The maximum table size and maximum number of columns can be
increased by a factor of four if the default block size is increased
to 32K.
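The 1GB-segment scheme mentioned above can be sanity-checked with a little arithmetic (a sketch only; the segment size is the documented 1GB default):

```python
# Large tables are stored as multiple 1GB segment files, so no single
# file ever needs OS large-file support. Count the segments a table needs.

GB = 1024**3
SEGMENT_SIZE = 1 * GB

def segments_for(table_bytes: int) -> int:
    """Number of 1GB segment files needed to hold a table of given size."""
    return -(-table_bytes // SEGMENT_SIZE)    # ceiling division

TB = 1024**4
assert segments_for(16 * TB) == 16 * 1024    # a 16TB table spans 16384 files
assert segments_for(1) == 1                  # tiny tables still get one file
```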
On Sat, 27 Jan 2001, Bruce Momjian wrote:
Can I get some feedback from the group? Should a long or short version
be used in the FAQ?
Neither, it has its own web page.
Vince.
--
==========================================================================
Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net
128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory http://www.camping-usa.com
Online Giftshop Superstore http://www.cloudninegifts.com
==========================================================================
Hello folks,
I have a web tool that inserts text from a form into a
PostgreSQL table.
The problem is that with a large text I get this error:
ERROR: Tuple is too big: size 13924, max size 8140
Can you help me?
Thanx,
Bogdan,
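That error is the pre-7.1 row-size limit discussed earlier in this thread (upgrading to 7.1, where TOAST lifts it, is the real fix). A common workaround on older versions was to split long text across several rows; here is a hypothetical sketch of the chunking side (the chunk size and any table layout around it are assumptions, not an official recipe):

```python
# Split a long text into chunks small enough to fit under the ~8K row
# limit; each chunk would be stored as its own row with a sequence number
# so the original text can be reassembled in order.

CHUNK = 7000    # assumed safe size, comfortably under the 8140-byte limit

def split_text(text: str, chunk: int = CHUNK) -> list[str]:
    """Break text into ordered pieces no longer than `chunk` characters."""
    return [text[i:i + chunk] for i in range(0, len(text), chunk)]

doc = "x" * 13924                  # the size from the error message above
parts = split_text(doc)
assert len(parts) == 2             # two rows instead of one oversized row
assert "".join(parts) == doc       # chunks reassemble losslessly
```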
Bruce Momjian writes:
Can I get some feedback from the group? Should a long or short version
be used in the FAQ?
Please keep the long version in the FAQ at least until the full
version makes its way into the docs...
Kim Minh.
We made a separate web page for the long version.