OID, 4 billion+ rows
I have a number of clients that retain large numbers of small
transactions. This could easily reach 4 billion+ rows at some sites. Most
rows would be 1 KB max, some may exceed that; the average would be between
512 bytes and 1 KB.
Is this feasible with Postgres?
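As a rough sanity check on the sizes quoted above, a quick sketch (raw row data only; this ignores per-tuple header and index overhead, so real on-disk size would be larger):

```python
# Back-of-envelope sizing: 4 billion rows averaging 512 bytes to 1 KB each.
rows = 4_000_000_000
low, high = 512, 1024          # bytes per row, the range from the question
tib = 1024 ** 4                # bytes in one TiB

print(f"{rows * low / tib:.1f} - {rows * high / tib:.1f} TiB of raw row data")
```

So the workload described is on the order of 2-4 TiB before any overhead.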
thanks,
Joshua
"Joshua Schmidlkofer" <menion@srci.iwpsd.org> writes:
> I have a number of clients that retain large numbers of small
> transactions. This could easily hit 4B+ at some sites. Most rows would
> be 1k max, some may exceed that, average would be between 512bytes & 1k.
This would be a problem at the moment. I'm expecting to see some sort
of fix for it in 7.2, however. The simplest fix would require a
complete-database VACUUM at least once every billion or so transactions;
but with the nonintrusive VACUUM that we're planning for 7.2, that
doesn't seem overly onerous.
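The problem Tom is alluding to is 32-bit transaction ID wraparound: XIDs are compared modulo 2**32, so a tuple whose inserting transaction lies more than about two billion transactions in the past suddenly appears to be in the future unless a VACUUM has marked it permanently visible first. A minimal sketch of that circular comparison (a hypothetical helper illustrating the idea, not PostgreSQL's actual source):

```python
MASK = 0xFFFFFFFF  # transaction IDs are 32-bit


def xid_precedes(a: int, b: int) -> bool:
    """True if XID a logically precedes XID b, comparing modulo 2**32.

    a precedes b when the signed 32-bit difference (a - b) is negative,
    i.e. when the masked difference falls in the upper half of the range.
    """
    return (a - b) & MASK >= 0x80000000


# Normal ordering:
print(xid_precedes(100, 200))               # True
# The ordering survives wraparound of the 32-bit counter:
print(xid_precedes(2**32 - 5, 10))          # True
# But a tuple about 2 billion transactions old flips to the "future":
print(xid_precedes(100, 100 + 2**31 + 1))   # False
```

Hence the figure in the reply: tuples must be vacuumed (and their XIDs frozen) well before a billion or two transactions elapse, or they fall off the comparable half of the circle and become invisible.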
regards, tom lane