Long term database archival

Started by Karl O. Pinc · almost 20 years ago · 41 messages · general
#1 Karl O. Pinc
kop@meme.com

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Mostly, we're interested in dumps done with
--data-only, and have preferred the
default (-F c) format. But this form is somewhat more
opaque than a plain text SQL dump, which is bound
to be supported forever "out of the box".
Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Is the answer different if we're dumping the
schema as well as the data?
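For concreteness, the two styles being compared can be sketched like this (`mydb` and the output file names are placeholders):

```shell
# Plain-text SQL dump: readable and restorable with psql alone.
pg_dump --data-only --format=plain mydb > mydb-data.sql

# Custom-format dump: compressed and flexible, but needs pg_restore to decode.
pg_dump --data-only --format=custom mydb > mydb-data.dump

# Restoring, possibly many years later:
psql --dbname=restored_db --file=mydb-data.sql   # plain: just replay the SQL
pg_restore --dbname=restored_db mydb-data.dump   # custom: needs a compatible pg_restore
```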

Thanks.

Karl <kop@meme.com>
Free Software: "You don't pay back, you pay forward."
-- Robert A. Heinlein

#2 Florian Pflug
fgp@phlo.org
In reply to: Karl O. Pinc (#1)
Re: Long term database archival

Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Mostly, we're interested in dumps done with
--data-only, and have preferred the
default (-F c) format. But this form is somewhat more
opaque than a plain text SQL dump, which is bound
to be supported forever "out of the box".
Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

For schema dumps the custom format has advantages IMHO,
mainly because it adds flexibility. When creating text-formatted
dumps, you have to specify options like "--no-owner, ..."
at _dumping_ time, while custom-format dumps allow you to
specify them at _restoration_ time.

For data dumps this is less relevant, since far fewer
options are available. But even there, restoring
with insert statements as opposed to COPY FROM STDIN could
be useful in some situations.
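A sketch of that restore-time flexibility (file, database, and table names are placeholders; the flags shown are standard pg_restore options):

```shell
# The same custom-format dump can be restored several ways,
# with the choice made at restore time rather than dump time:
pg_restore --list mydb.dump > toc.txt                        # inspect the dump's contents
pg_restore --no-owner --no-privileges --dbname=newdb mydb.dump   # ignore original ownership/ACLs
pg_restore --data-only --table=orders --dbname=newdb mydb.dump   # restore one table's data only
```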

Anyway, 20 years is a _long_, _long_ time. If you _really_
need to keep your data that long, I'd suggest you create
text-only schema dumps, and text-only data dumps. The postgres
developers are very concerned about backward compatibility in
my experience, but probably _not_ for versions from 20 years ago ;-)

But since the probability of the need to restore your backup
in 6 months is _much_ larger than the one of needing to restore
it in 20 years, I'd create custom-format dumps too.
For the near future, they're the better choice IMHO.

Is the answer different if we're dumping the
schema as well as the data?

The above holds true for the schema as well as for the data.

greetings, Florian Pflug

#3 Karl O. Pinc
kop@meme.com
In reply to: Florian Pflug (#2)
Re: Long term database archival

On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:

Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Anyway, 20 years is a _long_, _long_ time.

Yes, but our data goes back over 30 years now
and is never deleted, only added to, and I
recently had occasion to want to look at a
backup from 1994-ish. So, yeah we probably do
really want backups for that long. They
probably won't get used, but we'll feel better.

Thanks.

Karl <kop@meme.com>
Free Software: "You don't pay back, you pay forward."
-- Robert A. Heinlein

#4 Ron Johnson
ron.l.johnson@cox.net
In reply to: Florian Pflug (#2)
Re: Long term database archival


Florian G. Pflug wrote:

Karl O. Pinc wrote:

[snip]

Anyway, 20 years is a _long_, _long_ time. If you _really_ need
to keep your data that long, I'd suggest you create text-only
schema dumps, and text-only data dumps. The postgres developers
are very concerned about backward compatibility in my experience,
but probably _not_ for versions from 20 years ago ;-)

20 years seems pretty long, but SARBOX sets many data retention
requirements at 7 years.

In a similar vein, we are the back-office contractor for a major
toll-road consortium, and regularly get subpoenas for transaction
details as old as 5 years.

The hassle of having to go thru old tapes and extract dozens and
dozens of GB of data just to ultimately retrieve 40 records is the
hook I'm using to get PostgreSQL into our old-guard datacenter.

--
Ron Johnson, Jr.
Jefferson LA USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

#5 A.M.
agentm@themactionfaction.com
In reply to: Karl O. Pinc (#1)
Re: Long term database archival

Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup
duration?

Very few media even last 5 years. The good thing about open source and
open standards is that regardless of the answers to those questions,
there is no proprietary element to prevent you from accessing that
data- simply decide what it will be and update your backups along the
way. Whether such data will be relevant/ useful to anyone in 20 years
is a question you have to answer yourself. Good luck.

-M

On Jul 6, 2006, at 2:57 PM, Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Mostly, we're interested in dumps done with
--data-only, and have preferred the
default (-F c) format. But this form is somewhat more
opaque than a plain text SQL dump, which is bound
to be supported forever "out of the box".
Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Is the answer different if we're dumping the
schema as well as the data?

Thanks.

--
AgentM
agentm@themactionfaction.com

#6 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: A.M. (#5)
Re: Long term database archival

Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup
duration?

Very few media even last 5 years. The good thing about open source and
open standards is that regardless of the answers to those questions,
there is no proprietary element to prevent you from accessing that
data- simply decide what it will be and update your backups along the
way. Whether such data will be relevant/ useful to anyone in 20 years
is a question you have to answer yourself. Good luck.

I am not too sure of the relevance, but I periodically worked as a sub-contractor for an
oil-producing company in California. They were carrying 35 years of data on an Alpha server
running CA-Ingres. The really bad part is that hundreds and hundreds of reporting tables were
created on top of the functioning system over the years. Now nobody knows which
tables are relevant and which are redundant and/or deprecated.

Also, year after year, new custom text-file reports were created with procedural scripts. The load
on the server was such that the daily reporting was taking nearly 23 hours to complete, and
the requests for new reports were getting the IT department very worried.

Worst of all, no one there really knows the ins and outs of Ingres well enough to do anything about it.

Well, I take part of that back. They recently upgraded to a newer Alpha to reduce the time daily
reporting was taking. :-)

Regards,

Richard Broersma Jr.

#7 Ron Johnson
ron.l.johnson@cox.net
In reply to: A.M. (#5)
Re: Long term database archival


Agent M wrote:

Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup duration?

SQL was used 20 years ago, why not 20 years from now?

I can't see needing data from 10 years ago, but you never know.
Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
about what happened 150 years ago.

--
Ron Johnson, Jr.
Jefferson LA USA


#8 A.M.
agentm@themactionfaction.com
In reply to: Richard Broersma Jr (#6)
Re: Long term database archival

I am not too sure of the relevance, but I periodically worked as a
sub-contractor for an oil-producing company in California. They were
carrying 35 years of data on an Alpha server running CA-Ingres. The
really bad part is that hundreds and hundreds of reporting tables were
created on top of the functioning system over the years. Now nobody
knows which tables are relevant and which are redundant and/or
deprecated.

Also, year after year, new custom text-file reports were created with
procedural scripts. The load on the server was such that the daily
reporting was taking nearly 23 hours to complete, and the requests for
new reports were getting the IT department very worried.

But the data from 35 years ago wasn't stored in Ingres and, if it's
important, it won't stay in Ingres. The data shifts from format to
format as technology progresses.

It seemed to me that the OP wanted some format that would be readable
in 20 years. No one can guarantee anything like that.

-M

--
AgentM
agentm@themactionfaction.com

#9 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: A.M. (#8)
Re: Long term database archival

But the data from 35 years ago wasn't stored in Ingres and, if it's
important, it won't stay in Ingres. The data shifts from format to
format as technology progresses.

It seemed to me that the OP wanted some format that would be readable
in 20 years. No one can guarantee anything like that.

What you are saying could be true, but that wasn't what I was led to believe. This database was
logging data from the production automation system. I believe the need for 30+ years of data was
because the client was interested in determining/trending the gradual drop-off in production
over the years.

Their interest is in extrapolating the profitable lifetime of their facility. Essentially, they want to
know how long they have before they have to "close the doors."

But you are probably correct; I had no way of really knowing how old the data on their server
really was.

Regards,

Richard Broersma Jr.

#10 Ron Johnson
ron.l.johnson@cox.net
In reply to: A.M. (#8)
Re: Long term database archival


Agent M wrote:
[snip]

But the data from 35 years ago wasn't stored in Ingres and, if
it's important, it won't stay in Ingres. The data shifts from
format to format as technology progresses.

Ingres has been around for longer than you think: about 20 years.

So, the data has been converted one time in 35 years. Pretty damned
stable if you ask me.

Another example: the on-disk structure of Rdb/VMS has remained
stable ever since v1.0 in 1984. That means that upgrading from
major version to major version (even when new datatypes and index
structures have been added) is a quick, trivial process.

Companies with lots of important data like that.

It seemed to me that the OP wanted some format that would be
readable in 20 years. No one can guarantee anything like that.

ASCII will be here in 20 years. So will EBCDIC. As will UTF.

--
Ron Johnson, Jr.
Jefferson LA USA


#11 Dann Corbit
DCorbit@connx.com
In reply to: Ron Johnson (#10)
Re: Long term database archival

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
owner@postgresql.org] On Behalf Of Ron Johnson
Sent: Thursday, July 06, 2006 5:26 PM
To: Postgres general mailing list
Subject: Re: [GENERAL] Long term database archival


Agent M wrote:

Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup

duration?

SQL was used 20 years ago, why not 20 years from now?

I can't see needing data from 10 years ago, but you never know.
Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
about what happened 150 years ago.

The company I work for does lots of business with OpenVMS systems
running RMS, Rdb, and DBMS, and with IBM mainframes running VSAM, IMS,
etc., along with many other 'ancient' database systems.

We have customers with Rdb version 4.x (around 15 years old, IIRC) and
RMS and VSAM formats from the 1980s.

Suppose, for instance, that you run a sawmill. The software for your
sawmill was written in 1985. In 1991, you did a hardware upgrade to a
VAX 4100, but did not upgrade your Rdb version (since it was debugged
and performed adequately).

Your software can completely keep up with the demands of the sawmill.
It even runs payroll. The workers got tired of the RS232 terminals and
so you did a client server upgrade using PCs as terminals in 1999, but
kept your VAX 4100 minicomputer running Rdb with no changes. You
upgraded from Xentis to Crystal Reports in 2003, but using OLEDB drivers
means you did not have to touch anything on your server.

Sound far-fetched? It's not uncommon in the least. Furthermore, a
million dollar upgrade to a shiny new system and software might not
increase productivity at all.

It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.

#12 Ron Johnson
ron.l.johnson@cox.net
In reply to: Dann Corbit (#11)
Re: Long term database archival


Dann Corbit wrote:

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
owner@postgresql.org] On Behalf Of Ron Johnson
Sent: Thursday, July 06, 2006 5:26 PM
To: Postgres general mailing list
Subject: Re: [GENERAL] Long term database archival

Agent M wrote:

Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup

duration?

SQL was used 20 years ago, why not 20 years from now?

I can't see needing data from 10 years ago, but you never know.
Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
about what happened 150 years ago.

The company I work for does lots of business with OpenVMS systems
running RMS, Rdb, and DBMS and IBM Mainframes running VSAM, IMS, etc.
along with many other 'ancient' database systems.

We have customers with Rdb version 4.x (around 15 years old, IIRC) and
RMS and VSAM formats from the 1980s.

Wow, that *is* ancient. Rdb 4.2 was 1993, though. "Only" 13 years.

Snicker.

Suppose, for instance, that you run a sawmill. The software for your
sawmill was written in 1985. In 1991, you did a hardware upgrade to a
VAX 4100, but did not upgrade your Rdb version (since it was debugged
and performed adequately).

Your software can completely keep up with the demands of the sawmill.
It even runs payroll. The workers got tired of the RS232 terminals and
so you did a client server upgrade using PCs as terminals in 1999, but
kept your VAX 4100 minicomputer running Rdb with no changes. You
upgraded from Xentis to Crystal Reports in 2003, but using OLEDB drivers
means you did not have to touch anything on your server.

Sound far-fetched? It's not uncommon in the least. Furthermore, a
million dollar upgrade to a shiny new system and software might not
increase productivity at all.

It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.

DEC surely did build VAX h/w to last. Much higher quality than the
cheapo industry standard stuff they use now. And, IMO, VAX/VMS was
a heck of a lot more stable written in Bliss and Macro than
Alpha/VMS ported to C.

I'd be worried, though, about the disk drives, so would push for
migration to Charon-VAX running on an x86 server.

--
Ron Johnson, Jr.
Jefferson LA USA


#13 Ben
bench@silentmedia.com
In reply to: Dann Corbit (#11)
Re: Long term database archival

On Thu, 6 Jul 2006, Dann Corbit wrote:

It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.

..... or can no longer be repaired. :)

#14 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Karl O. Pinc (#1)
Re: Long term database archival

On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of the postgres
version which created the dump. Then in 20 years, when you need it, you'll
compile the sources and load the data into the original postgres
version... of course, you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)
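One way to package that idea, with hypothetical file names (the source tarball being whichever release produced the dump):

```shell
# Keep the dump, the exact server sources that wrote it, and restore notes
# together, so a future reader can rebuild a compatible pg_restore:
tar czf archive-2006-07.tar.gz \
    mydb.dump \
    postgresql-8.1.4.tar.gz \
    RESTORE-NOTES.txt
```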

Cheers,
Csaba.

#15 Shane Ambler
pgsql@007Marketing.com
In reply to: Csaba Nagy (#14)
Re: Long term database archival

On 7/7/2006 17:49, "Csaba Nagy" <nagy@ecircle-ag.com> wrote:

On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of postgres
which created the dump. then in 20 years when you'll need it, you'll
compile the sources and load the data in the original postgres
version... of course you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)

Cheers,
Csaba.

Depending on the size of the data (if it isn't too large), you could consider
creating a new database for archives, maybe even one for each year.

This can be on an old server or backup server instead of the production
one.

Unless the data is too large, you can dump/restore the archive data to a new
pg version as you upgrade, meaning the data will always be available and you
won't have any format issues when you want to retrieve the data.
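Sketched as a migration step (host names and the `archive` database are placeholders; run the NEW version's client tools so the output matches a currently supported server):

```shell
# At each major upgrade, pull the archive database forward so it always
# lives in a currently supported dump format:
pg_dump --host=old-server archive | psql --host=new-server --dbname=archive
```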

#16 Tino Wildenhain
tino@wildenhain.de
In reply to: Csaba Nagy (#14)
Re: Long term database archival

Csaba Nagy schrieb:
...

Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of postgres
which created the dump. then in 20 years when you'll need it, you'll
compile the sources and load the data in the original postgres
version... of course you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)

No need - you will just emulate the whole hardware in 20 years ;-)

Regards
Tino

#17 Ron Johnson
ron.l.johnson@cox.net
In reply to: Ben (#13)
Re: Long term database archival


Ben wrote:

On Thu, 6 Jul 2006, Dann Corbit wrote:

It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.

..... or can no longer be repaired. :)

http://www.softresint.com/charon-vax/index.htm

--
Ron Johnson, Jr.
Jefferson LA USA


#18 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: Csaba Nagy (#14)
Re: Long term database archival

of course you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)

I think that in twenty years most of us will be more worried about our retirement than
the long-term data concerns of the companies we will no longer be working for. :-D

Of course, some of us who really enjoy what we do for work might prefer to "die with our
work boots on."

Regards,

Richard Broersma Jr.

#19 Steve Atkins
steve@blighty.com
In reply to: Csaba Nagy (#14)
Re: Long term database archival

On Jul 7, 2006, at 1:19 AM, Csaba Nagy wrote:

On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of postgres
which created the dump. then in 20 years when you'll need it, you'll
compile the sources and load the data in the original postgres
version... of course you might need to also keep an image of the
current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)

I've been burned by someone doing that, and then being unable to
find a BCPL compiler.

So don't do that.

Store them in a nice, neutral ASCII format, along with all the
documentation. If you can't imagine extracting the
data with a small perl script and less than a day's work today,
then your successor will likely curse your name in 20 years'
time.
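In that spirit, a minimal sketch (`mydb` and `transactions` are placeholders):

```shell
# Plain SQL for the schema, plain CSV for the data: both readable with
# nothing more exotic than a text editor.
pg_dump --schema-only mydb > mydb-schema.sql
psql --dbname=mydb -c "\copy transactions TO 'transactions.csv' WITH CSV HEADER"
```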

Cheers,
Steve

#20 Karsten Hilbert
Karsten.Hilbert@gmx.net
In reply to: Richard Broersma Jr (#18)
Re: Long term database archival

On Fri, Jul 07, 2006 at 09:09:22AM -0700, Richard Broersma Jr wrote:

I think that in twenty years most of us will be more worried about our retirement than
the long-term data concerns of the companies we will no longer be working for. :-D

You may want to take precautions now such that you start
getting *more* healthy towards retirement rather than less.

Because your old medical record cannot be accessed any longer.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346

#21 Adam
adam@spatialsystems.org
In reply to: Richard Broersma Jr (#6)
#22 Adam
adam@spatialsystems.org
In reply to: Richard Broersma Jr (#6)
#23 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: Adam (#22)
#24 Bruce Momjian
bruce@momjian.us
In reply to: Richard Broersma Jr (#23)
#25 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: Bruce Momjian (#24)
#26 Doug McNaught
doug@mcnaught.org
In reply to: Richard Broersma Jr (#25)
#27 Martijn van Oosterhout
kleptog@svana.org
In reply to: Richard Broersma Jr (#25)
#28 Nikolay Samokhvalov
samokhvalov@gmail.com
In reply to: Adam (#21)
#29 Jan Wieck
JanWieck@Yahoo.com
In reply to: Karl O. Pinc (#3)
#30 Karl O. Pinc
kop@meme.com
In reply to: Jan Wieck (#29)
#31 Richard Broersma Jr
rabroersma@yahoo.com
In reply to: Karl O. Pinc (#30)
#32 Ron Johnson
ron.l.johnson@cox.net
In reply to: Karl O. Pinc (#30)
#33 Tim Hart
tjhart@mac.com
In reply to: Jan Wieck (#29)
#34 Joshua D. Drake
jd@commandprompt.com
In reply to: Richard Broersma Jr (#31)
#35 Jan Wieck
JanWieck@Yahoo.com
In reply to: Tim Hart (#33)
#36 Bruce Momjian
bruce@momjian.us
In reply to: Jan Wieck (#35)
#37 Marco Bizzarri
marco.bizzarri@gmail.com
In reply to: Karl O. Pinc (#1)
#38 Ron Johnson
ron.l.johnson@cox.net
In reply to: Tim Hart (#33)
#39 Leif B. Kristensen
leif@solumslekt.org
In reply to: Marco Bizzarri (#37)
#40 Ron Johnson
ron.l.johnson@cox.net
In reply to: Leif B. Kristensen (#39)
#41 Michelle Konzack
linux4michelle@freenet.de
In reply to: Ron Johnson (#7)