Very large table...

Started by Jeffery Collins almost 26 years ago (2 messages, general)
#1 Jeffery Collins
collins@onyx-technologies.com

Does anyone have any experience with very large postgresql tables? By
very large, I mean a table with ~38 million records, each record will
have between 80 and 128 bytes (we are not sure of some column sizes yet)
in ~10 columns with probably 3 btree-indexes. Basically the table will
hold all of the Postal Service deliverable addresses in the US in a
somewhat compressed form.
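For concreteness, a table along these lines would fit the description. All
of the names, column widths, and index choices below are guesses for
illustration, not Jeff's actual layout:

```sql
-- Hypothetical layout: ~10 narrow columns totaling roughly 80-128 bytes
-- of user data per row, with three btree indexes.
CREATE TABLE usps_address (
    zip         char(5)     NOT NULL,
    plus4       char(4),
    street_num  varchar(10),
    street_name varchar(28),
    unit        varchar(8),
    city        varchar(28),
    state       char(2)     NOT NULL,
    carrier_rt  char(4),
    del_point   char(2),
    rec_type    char(1)
);

-- Three btree indexes as mentioned; which columns get indexed is a guess.
CREATE INDEX usps_address_zip_idx    ON usps_address (zip, plus4);
CREATE INDEX usps_address_street_idx ON usps_address (street_name);
CREATE INDEX usps_address_city_idx   ON usps_address (city, state);
```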

My concerns are in the area of performance and robustness.

I know I haven't been specific enough about the table layout, but I am
not sure yet exactly what it will look like. I am just trying to get a
gut-level feeling that this has been done before and there are no
"gotchas" out there.

Thank you all,
Jeff

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeffery Collins (#1)
Re: Very large table...

Jeffery Collins <collins@onyx-technologies.com> writes:

> Does anyone have any experience with very large postgresql tables? By
> very large, I mean a table with ~38 million records, each record will
> have between 80 and 128 bytes (we are not sure of some column sizes yet)
> in ~10 columns with probably 3 btree-indexes. Basically the table will
> hold all of the Postal Service deliverable addresses in the US in a
> somewhat compressed form.
>
> My concerns are in the area of performance and robustness.

Should be OK as long as you are using a recent release (preferably 7.0).
Our support for tables over 2 gig used to be a little flaky, but it's
been wrung out...
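The 2 gig figure is easy to check against Jeff's numbers. A rough
back-of-the-envelope estimate, ignoring per-tuple header overhead and the
indexes entirely:

```sql
-- 38,000,000 rows x 128 bytes of user data is already ~4.9 GB of raw
-- data, well past 2 GB, so the table will certainly exercise the
-- large-table (multi-segment file) code paths.
SELECT 38000000::bigint * 128 AS raw_data_bytes;  -- 4864000000
```

Real on-disk size will be larger still once per-row overhead and the
three btree indexes are added.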

regards, tom lane