> 16TB worth of data question

Started by Jeremiah Jahn, almost 23 years ago, 45 messages, general
#1Jeremiah Jahn
jeremiah@cs.earlham.edu

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

thanx,
-jj-

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#2Doug McNaught
doug@mcnaught.org
In reply to: Jeremiah Jahn (#1)
Re: > 16TB worth of data question

Jeremiah Jahn <jeremiah@cs.earlham.edu> writes:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

Sounds to me as if you can ignore it for now. 16TB will last you at
least four years, at which time you'll be replacing hardware anyway
and can just buy 64-bit systems.

:)

-Doug

#3Neil Conway
neilc@samurai.com
In reply to: Jeremiah Jahn (#1)
Re: > 16TB worth of data question

On Mon, 2003-04-21 at 14:23, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems.

I'm not sure why the 16TB limit you refer to would be a problem in the
short to medium term, if you're only storing 2TB/year.

Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database

FWIW, storing the FS location in the database is by no means
complicated. I'm personally not very fond of doing that, but not because
it's overly complex...

Cheers,

Neil

#4Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Neil Conway (#3)
Re: > 16TB worth of data question

Neil Conway wrote:

Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database

FWIW, storing the FS location in the database is by no means
complicated. I'm personally not very fond of doing that, but not because
it's overly complex...

Heck, is the Pg BLOB interface anything more than a cute hook into
filesystem storage? I've often wondered what the point of BLOBs is,
beyond providing a single API to all the data (maybe that *is* the
point). Is there a performance advantage?

--
Paul Ramsey
Refractions Research
Email: pramsey@refractions.net

#5Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Doug McNaught (#2)
Re: > 16TB worth of data question

Notice the plus sign in that 2TB+. I have reason to believe that I could
hit that mark in a little over 2 years. What do you think the chances
are of Linux 2.6 stabilizing in that time frame? ;) I'm just questioning
the use of BLOBs, really. It would be nice if they could be spread over
multiple file systems.

-jj-

On Mon, 2003-04-21 at 13:28, Doug McNaught wrote:

Jeremiah Jahn <jeremiah@cs.earlham.edu> writes:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

Sounds to me as if you can ignore it for now. 16TB will last you at
least four years, at which time you'll be replacing hardware anyway
and can just buy 64-bit systems.

:)

-Doug

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#6Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Neil Conway (#3)
Re: > 16TB worth of data question

Sorry, forgot a little info. A few years from now, I'd still like to be
able to use my old servers for a bit of load balancing.

-jj-

On Mon, 2003-04-21 at 13:38, Neil Conway wrote:

On Mon, 2003-04-21 at 14:23, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems.

I'm not sure why the 16TB limit you refer to would be a problem in the
short to medium term, if you're only storing 2TB/year.

Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database

FWIW, storing the FS location in the database is by no means
complicated. I'm personally not very fond of doing that, but not because
it's overly complex...

Cheers,

Neil

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#7Doug McNaught
doug@mcnaught.org
In reply to: Paul Ramsey (#4)
Re: > 16TB worth of data question

Paul Ramsey <pramsey@refractions.net> writes:

Heck, is the Pg BLOB interface anything more than a cute hook into
filesystem storage?

Yes; it's organized as a table and storage-managed the same way as
other tables.

I've often wondered what the point of BLOBs is,
beyond providing a single API to all the data (maybe that *is* the
point).

That, plus when you back up the database you're backing up everything
you need, plus it's harder for the database and the 'filesystem' to
get out of sync (you can use triggers etc. to keep them in sync, and easily
run queries to check that you don't have any dangling references).
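A minimal sketch of that kind of dangling-reference query, with sqlite3 and a made-up `images(path)` table standing in for the real database, purely so the example is self-contained:

```python
import os
import sqlite3

def find_dangling(conn: sqlite3.Connection) -> list:
    """Return stored paths whose files no longer exist on disk."""
    rows = conn.execute("SELECT path FROM images")
    return [path for (path,) in rows if not os.path.exists(path)]

# Illustrative setup: one row pointing at a file that doesn't exist,
# so the consistency check has something to report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (path TEXT)")
conn.execute("INSERT INTO images VALUES ('/no/such/file.jpg')")
```

Run periodically (or from a trigger-driven audit job), this is enough to catch database rows whose backing files have gone missing.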

Is there a performance advantage?

Slight disadvantage, I should think (unless the individual-file
approach stupidly puts them all in one directory, which can be very
slow on some systems).

-Doug

#8Magnus
In reply to: Jeremiah Jahn (#1)
Re: > 16TB worth of data question

Jeremiah Jahn <jeremiah@cs.earlham.edu> wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store
the images as BLOBs, and instead come up with some complicated way to
only store the location in the database, or is there some way to have
Postgres handle this somehow? What are other people out there doing
about this sort of thing?

I have some experience in this matter.
We've made a system that contains about 5 million images today, and it's
about 2.5 TB. Our system works by keeping the metadata in a PostgreSQL
database so we can do complex searches, and it only stores a filename
in _relative_ form, like this:

11/2/33

and then the application server handling requests maps it into a real
file:

/storage/links/11/2/33.jpg

The links directory is just an indirection from the real storage
location, so "11" in this case can be /storage/nfs/server1/11.
Flexibility is the big win here. I can split partitions up any way I
want, and I don't have to care about any limits, except maybe that the
"links" directory is getting too big. But since this is our application,
we know that when the database is complete (20 million images+) it will
still have only 4000 directory entries in "links".
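The mapping above can be sketched as follows; the directory names come from the description, but the constant and function names are made up for illustration:

```python
import os

# The database stores only a relative image id like "11/2/33"; the
# application server resolves it under a "links" directory, whose
# first-level entries ("11", ...) are symlinks or mounts pointing at
# the real storage partitions (e.g. /storage/nfs/server1/11).
LINKS_DIR = "/storage/links"  # assumed mount point, not a real config value

def image_path(relative_id: str, extension: str = "jpg") -> str:
    """Map a DB-stored relative id ("11/2/33") to a concrete file path."""
    return os.path.join(LINKS_DIR, relative_id + "." + extension)
```

Because storage is reached only through the symlinked `links` directory, partitions can be moved or split without ever touching the database rows.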

The BLOB stuff also sucks from other points of view.
It makes the database less manageable, it messes with our lazy (we use
rsync) mirrored backups of the images, and the database dumps must be
huge? (I don't know how BLOBs are handled in pg_dump.)

If it's necessary to be able to handle this via PostgreSQL, I would write a
server-side function that fetches the image from the filesystem and sends
it over, maybe using the bytea encoding?
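A rough stand-in for that server-side fetch, in plain Python; the storage root and file layout here are invented for the example, and a real version would be a PostgreSQL function returning bytea:

```python
import os
import tempfile

# Invented storage root for the example; stands in for the real
# /storage/links mount the application server would use.
STORAGE = tempfile.mkdtemp()

def fetch_image(relative_id: str) -> bytes:
    """Read the image mapped from a DB-stored relative id and return raw bytes."""
    with open(os.path.join(STORAGE, relative_id + ".jpg"), "rb") as f:
        return f.read()

# Set up one fake image so the fetch has something to return.
with open(os.path.join(STORAGE, "33.jpg"), "wb") as f:
    f.write(b"\xff\xd8fake-jpeg")
```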

I wouldn't go for the all-in-database approach ever in anything big.

Cheers
Magnus

#9scott.marlowe
scott.marlowe@ihs.com
In reply to: Jeremiah Jahn (#1)
Re: > 16TB worth of data question

On 21 Apr 2003, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

Then why not start right out on 64-bit systems? Low-end 64-bit Sparcs
(Ultra 60 type stuff) aren't too expensive, and Debian and a few other
flavors seem to run quite quickly on Sparc hardware.

There's also 64-bit mainframe Linux, and a couple of other 64-bit platforms
that are fairly mature; IBM's PowerPC-based systems run Linux as well.

If you're gonna play with big datasets, that's the one time that 64-bit
really starts to have advantages, and let's face it, you're gonna go there
eventually anyway, so you might as well get a head start now.

#10Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: scott.marlowe (#9)
Re: > 16TB worth of data question

The only issue with this is that it is difficult to recommend to our
clients, who depend on Bob and cuz'n Joe to support their hardware. We
are kinda in the business of recommending the HW our clients need, and
not getting into the support side of it. Although this might be a decent
option.

thanx,
-jj-

PS. My office already has this whole black and silver motif going on, and
purply blue would kinda clash.

On Mon, 2003-04-21 at 15:30, scott.marlowe wrote:

On 21 Apr 2003, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

Then why not start right out on 64-bit systems? Low-end 64-bit Sparcs
(Ultra 60 type stuff) aren't too expensive, and Debian and a few other
flavors seem to run quite quickly on Sparc hardware.

There's also 64-bit mainframe Linux, and a couple of other 64-bit platforms
that are fairly mature; IBM's PowerPC-based systems run Linux as well.

If you're gonna play with big datasets, that's the one time that 64-bit
really starts to have advantages, and let's face it, you're gonna go there
eventually anyway, so you might as well get a head start now.

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#11scott.marlowe
scott.marlowe@ihs.com
In reply to: Jeremiah Jahn (#10)
Re: > 16TB worth of data question

Shame, there's an E10k with 64 CPUs selling for $24,000 with no opening
bid on eBay... Ends in 10.

http://listings.ebay.com/pool2/plistings/endtoday/all/category41080/index.html?from=R11

Ummmmmmm 64 CPUs...

On 21 Apr 2003, Jeremiah Jahn wrote:

The only issue with this is that it is difficult to recommend to our
clients, who depend on Bob and cuz'n Joe to support their hardware. We
are kinda in the business of recommending the HW our clients need, and
not getting into the support side of it. Although this might be a decent
option.

thanx,
-jj-

PS. My office already has this whole black and silver motif going on, and
purply blue would kinda clash.

On Mon, 2003-04-21 at 15:30, scott.marlowe wrote:

On 21 Apr 2003, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

Then why not start right out on 64-bit systems? Low-end 64-bit Sparcs
(Ultra 60 type stuff) aren't too expensive, and Debian and a few other
flavors seem to run quite quickly on Sparc hardware.

There's also 64-bit mainframe Linux, and a couple of other 64-bit platforms
that are fairly mature; IBM's PowerPC-based systems run Linux as well.

If you're gonna play with big datasets, that's the one time that 64-bit
really starts to have advantages, and let's face it, you're gonna go there
eventually anyway, so you might as well get a head start now.

#12scott.marlowe
scott.marlowe@ihs.com
In reply to: Jeremiah Jahn (#10)
Re: > 16TB worth of data question
#13Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeremiah Jahn (#10)
Re: > 16TB worth of data question

Jeremiah Jahn <jeremiah@cs.earlham.edu> writes:

The only issue with this is that it is difficult to recommend to our
clients, who depend on Bob and cuz'n Joe to support their hardware.

And you expect them to be successful running a database that acquires
2TB+ of data per year? I think you need to recalibrate your
expectations. Consumer-grade junk PCs do not have the reliability
to make such a project even worth starting. Run the database on decent
made-to-be-a-server hardware, or you'll regret it.

I think I've spent more time chasing various people's hardware failures
lately than I have in investigating real Postgres bugs. I keep
volunteering to look at failures because I figure there are still some
data-loss bugs to be found, but I am coming to have a *real* low opinion
of off-the-shelf PC hardware.

regards, tom lane

#14Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Jeremiah Jahn (#1)
Re: > 16TB worth of data question

FWIW, FreeBSD's UFS filesystem doesn't suffer from the 2G limit. Its
limit is something like 4TB, IIRC.

I'll also second the opinion that if you're really going to put that
much data in your database, x86 hardware might not be a good idea.
However, I'd recommend RS/6000 over Sun if you can afford it; for
Oracle, the standard is that you only need 1/2 the CPUs in RS/6000 as
you would in a Sun, because RS/6000 puts heavy emphasis on memory
bandwidth.

On Mon, Apr 21, 2003 at 01:23:55PM -0500, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

thanx,
-jj-

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

--
Jim C. Nasby (aka Decibel!) jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

#15Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Tom Lane (#13)
Re: > 16TB worth of data question

On Mon, 2003-04-21 at 20:43, Tom Lane wrote:

Jeremiah Jahn <jeremiah@cs.earlham.edu> writes:

The only issue with this is that it is difficult to recommend to our
clients, who depend on Bob and cuz'n Joe to support their hardware.

And you expect them to be successful running a database that acquires
2TB+ of data per year? I think you need to recalibrate your
expectations. Consumer-grade junk PCs do not have the reliability
to make such a project even worth starting. Run the database on decent
made-to-be-a-server hardware, or you'll regret it.

That's the question... That 2TB of data is nothing but documents and
images. I'm under the impression that if it gets parked on a Fibre
Channel disk array / SAN, the data will be pretty safe, and the server
mostly replaceable at that point. Storage is my worry more than
processing power. I don't think I'm on crack here...?

I think I've spent more time chasing various people's hardware failures
lately than I have in investigating real Postgres bugs. I keep
volunteering to look at failures because I figure there are still some
data-loss bugs to be found, but I am coming to have a *real* low opinion
of off-the-shelf PC hardware.

regards, tom lane

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#16Lyndon Tiu
ltiu@alumni.sfu.ca
In reply to: Jim Nasby (#14)
Father of Relational Databases - Dead at 79.
#17Lincoln Yeoh
lyeoh@pop.jaring.my
In reply to: Jeremiah Jahn (#15)
Re: > 16TB worth of data question

At 08:13 AM 4/22/2003 -0500, Jeremiah Jahn wrote:

That's the question... That 2TB of data is nothing but documents and
images. I'm under the impression that if it gets parked on a Fibre
Channel disk array / SAN, the data will be pretty safe, and the server
mostly replaceable at that point. Storage is my worry more than
processing power. I don't think I'm on crack here...?

How about backups? Backing up 2-16TB needs a bit more planning and design.

I think I've spent more time chasing various people's hardware failures
lately than I have in investigating real Postgres bugs. I keep
volunteering to look at failures because I figure there are still some
data-loss bugs to be found, but I am coming to have a *real* low opinion
of off-the-shelf PC hardware.

regards, tom lane

Well there's off-the-shelf x86 desktop PC hardware, and there's
off-the-shelf x86 server hardware.

In my experience for name brands it's about 1/7 DOA for the former, and 0
for the latter. There's a big difference in dependability. The latter tend to
keep running for many years, typically being retired through no fault of their own.

Then there are users who stick no-name memory into their servers or "servers".

Regards,
Link.

#18Jonathan Bartlett
johnnyb@eskimo.com
In reply to: Jeremiah Jahn (#15)
Re: > 16TB worth of data question

And you expect them to be successful running a database that acquires
2TB+ of data per year? I think you need to recalibrate your
expectations. Consumer-grade junk PCs do not have the reliability
to make such a project even worth starting. Run the database on decent
made-to-be-a-server hardware, or you'll regret it.

That's the question... That 2TB of data is nothing but documents and
images. I'm under the impression that if it gets parked on a Fibre
Channel disk array / SAN, the data will be pretty safe, and the server
mostly replaceable at that point. Storage is my worry more than
processing power. I don't think I'm on crack here...?

Actually, true server-grade machines usually have _less_ processing power
than PCs. Why? Because the attention is paid to reliability, not
speed. Server-class (i.e., name-brand, ECC) memory is a must.
Previous-generation processors are a must, and someone who knows something
about server hardware putting it together is a must. If you don't have
these things, you are bound for server lockups, memory corruption, and all
sorts of other evils.

Microsoft gets a lot of blame for the failures of the PC, but in reality,
a lot of them are hardware-related.

Jon

I think I've spent more time chasing various people's hardware failures
lately than I have in investigating real Postgres bugs. I keep
volunteering to look at failures because I figure there are still some
data-loss bugs to be found, but I am coming to have a *real* low opinion
of off-the-shelf PC hardware.

regards, tom lane

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#19Jonathan Bartlett
johnnyb@eskimo.com
In reply to: Jim Nasby (#14)
Re: > 16TB worth of data question

FWIW, FreeBSD's UFS filesystem doesn't suffer from the 2G limit. Its
limit is something like 4TB, IIRC.

If the data itself is large (i.e., large images, not just a large number of
images), I'd go with an XFS filesystem. It has a 5-petabyte limit, and
for large sequential reads it has almost full-disk speed.

Jon

I'll also second the opinion that if you're really going to put that
much data in your database, x86 hardware might not be a good idea.
However, I'd recommend RS/6000 over Sun if you can afford it; for
Oracle, the standard is that you only need 1/2 the CPUs in RS/6000 as
you would in a Sun, because RS/6000 puts heavy emphasis on memory
bandwidth.

On Mon, Apr 21, 2003 at 01:23:55PM -0500, Jeremiah Jahn wrote:

I have a system that will store about 2TB+ of images per year in a PG
database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
really sure what should be done here. Would it be better not to store the
images as BLOBs, and instead come up with some complicated way to only
store the location in the database, or is there some way to have Postgres
handle this somehow? What are other people out there doing about this
sort of thing?

thanx,
-jj-

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

--
Jim C. Nasby (aka Decibel!) jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

#20Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Lincoln Yeoh (#17)
Re: > 16TB worth of data question

On Tue, 2003-04-22 at 10:31, Lincoln Yeoh wrote:

At 08:13 AM 4/22/2003 -0500, Jeremiah Jahn wrote:

That's the question... That 2TB of data is nothing but documents and
images. I'm under the impression that if it gets parked on a Fibre
Channel disk array / SAN, the data will be pretty safe, and the server
mostly replaceable at that point. Storage is my worry more than
processing power. I don't think I'm on crack here...?

How about backups? Backing up 2-16TB needs a bit more planning and design.

The proposed solution here is to have the RAID controller mirror across
the street to a similar system. At what point do offline backups become
pointless?

I think I've spent more time chasing various people's hardware failures
lately than I have in investigating real Postgres bugs. I keep
volunteering to look at failures because I figure there are still some
data-loss bugs to be found, but I am coming to have a *real* low opinion
of off-the-shelf PC hardware.

regards, tom lane

Well there's off-the-shelf x86 desktop PC hardware, and there's
off-the-shelf x86 server hardware.

In my experience for name brands it's about 1/7 DOA for the former, and 0
for the latter. There's a big difference in dependability. The latter tend to
keep running for many years, typically being retired through no fault of their own.

Then there are users who stick no-name memory into their servers or "servers".

Regards,
Link.

--
Jeremiah Jahn <jeremiah@cs.earlham.edu>

#21Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Jonathan Bartlett (#19)
#22Philip Hallstrom
philip@adhesivemedia.com
In reply to: Jeremiah Jahn (#20)
#23Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Philip Hallstrom (#22)
#24Jonathan Bartlett
johnnyb@eskimo.com
In reply to: Jeremiah Jahn (#20)
#25Dennis Gearon
gearond@cvc.net
In reply to: Jeremiah Jahn (#23)
#26Sean Chittenden
sean@chittenden.org
In reply to: Jim Nasby (#14)
#27Ron Johnson
ron.l.johnson@cox.net
In reply to: Jeremiah Jahn (#1)
#28Andrew Sullivan
andrew@libertyrms.info
In reply to: Jeremiah Jahn (#15)
#29Jan Wieck
JanWieck@Yahoo.com
In reply to: Tom Lane (#13)
#30Jan Wieck
JanWieck@Yahoo.com
In reply to: Jeremiah Jahn (#1)
#31scott.marlowe
scott.marlowe@ihs.com
In reply to: Jan Wieck (#30)
#32nolan
nolan@celery.tssi.com
In reply to: scott.marlowe (#31)
#33Jan Wieck
JanWieck@Yahoo.com
In reply to: scott.marlowe (#31)
#34Ron Johnson
ron.l.johnson@cox.net
In reply to: Jan Wieck (#30)
#35Lincoln Yeoh
lyeoh@pop.jaring.my
In reply to: Jan Wieck (#33)
#36Jan Wieck
JanWieck@Yahoo.com
In reply to: scott.marlowe (#31)
#37Ron Johnson
ron.l.johnson@cox.net
In reply to: Jan Wieck (#36)
#38Fred Moyer
fred@digicamp.com
In reply to: Ron Johnson (#37)
#39Jeremiah Jahn
jeremiah@cs.earlham.edu
In reply to: Jan Wieck (#29)
#40scott.marlowe
scott.marlowe@ihs.com
In reply to: Jeremiah Jahn (#39)
#41Ron Johnson
ron.l.johnson@cox.net
In reply to: scott.marlowe (#40)
#42scott.marlowe
scott.marlowe@ihs.com
In reply to: Ron Johnson (#41)
#43Ron Johnson
ron.l.johnson@cox.net
In reply to: scott.marlowe (#42)
#44Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Ron Johnson (#43)
#45Ron Johnson
ron.l.johnson@cox.net
In reply to: Jim Nasby (#44)