Re: ALTER TABLE DROP COLUMN

Started by Bruce Momjian over 25 years ago · 72 messages · hackers
#1 Bruce Momjian
bruce@momjian.us

OK, I am opening this can of worms again. I personally would like to
see this code activated, even if it does take 2x the disk space to alter
a column. Hiroshi had other ideas. Where did we leave this? We have
one month to decide on a plan.

Bruce Momjian <pgman@candle.pha.pa.us> writes:

You can exclusively lock the table, then do a heap_getnext() scan over
the entire table, remove the dropped column, do a heap_insert(), then a
heap_delete() on the current tuple, making sure to skip over the tuples
inserted by the current transaction. When completed, remove the column
from pg_attribute, mark the transaction as committed (if desired), and
run vacuum over the table to remove the deleted rows.

Hmm, that would work --- the new tuples commit at the same instant that
the schema updates commit, so it should be correct. You have the 2x
disk usage problem, but there's no way around that without losing
rollback ability.

A potentially tricky bit will be persuading the tuple-reading and tuple-
writing subroutines to pay attention to different versions of the tuple
structure for the same table. I haven't looked to see if this will be
difficult or not. If you can pass the TupleDesc explicitly then it
shouldn't be a problem.

I'd suggest that the cleanup vacuum *not* be an automatic part of
the operation; just recommend that people do it ASAP after dropping
a column. Consider needing to drop several columns...

regards, tom lane

************

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
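
The copy-and-swap rewrite described above can be illustrated outside the backend. The sketch below is not PostgreSQL code (no heap_getnext()/heap_insert() here); drop_column() is a made-up helper over an in-memory model, shown only to make the space accounting concrete: the old and new tuple sets must coexist until the schema change commits, hence roughly 2x disk usage.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical in-memory model of the rewrite: each tuple is an array of
 * int attributes.  drop_column() builds a full second copy of the table
 * without the dropped attribute -- mirroring the 2x disk usage -- and the
 * caller "commits" by discarding the old copy all at once. */
static int **drop_column(int **old, int nrows, int ncols, int dropped)
{
    int **copy = malloc(nrows * sizeof(int *));
    for (int r = 0; r < nrows; r++) {
        copy[r] = malloc((ncols - 1) * sizeof(int));
        for (int c = 0, n = 0; c < ncols; c++)
            if (c != dropped)
                copy[r][n++] = old[r][c];
    }
    return copy;   /* old and new coexist until commit: peak ~2x space */
}
```

The atomicity in the real proposal comes from committing the pg_attribute change and the new tuples in the same transaction; the model above only captures the storage cost.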
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#1)

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, I am opening this can of worms again. I personally would like to
see this code activated, even if it does take 2x the disk space to alter
a column. Hiroshi had other ideas. Where did we leave this? We have
one month to decide on a plan.

I think the plan should be to do nothing for 7.1. ALTER DROP COLUMN
isn't an especially pressing feature, and so I don't feel that we
should be hustling to squeeze it in just before beta. We're already
overdue for beta.

regards, tom lane

#3 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Tom Lane (#2)
RE: ALTER TABLE DROP COLUMN

-----Original Message-----
From: Tom Lane

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, I am opening this can of worms again. I personally would like to
see this code activated, even if it does take 2x the disk space to alter
a column. Hiroshi had other ideas. Where did we leave this? We have
one month to decide on a plan.

I think the plan should be to do nothing for 7.1. ALTER DROP COLUMN
isn't an especially pressing feature, and so I don't feel that we
should be hustling to squeeze it in just before beta. We're already
overdue for beta.

Seems some people expect the implementation in 7.1.
(recent [GENERAL] drop column?)
I could commit my local branch if people don't mind
backward incompatibility.
I've maintained the branch for more than 1 month
and it implements the following TODOs.

* Add ALTER TABLE DROP COLUMN feature
* ALTER TABLE ADD COLUMN to inherited table put column in wrong place
* Prevent column dropping if column is used by foreign key

Comments ?

Hiroshi Inoue

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hiroshi Inoue (#3)

"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:

Seems some people expect the implementation in 7.1.
(recent [GENERAL] drop column?)
I could commit my local branch if people don't mind
backward incompatibility.

I've lost track --- is this different from the _DROP_COLUMN_HACK__
code that's already in CVS? I really really didn't like that
implementation :-(, but I forget what other methods were being
discussed.

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

That would be a problem --- rule dumping depends on that code to
produce correct aliases, so making it work is not optional.

regards, tom lane

#5 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#3)

Tom Lane wrote:

"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:

Seems some people expect the implementation in 7.1.
(recent [GENERAL] drop column?)
I could commit my local branch if people don't mind
backward incompatibility.

I've lost track --- is this different from the _DROP_COLUMN_HACK__
code that's already in CVS? I really really didn't like that
implementation :-(, but I forget what other methods were being
discussed.

My current local trial implementation follows your idea (logical/physical
attribute numbers).

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

That would be a problem --- rule dumping depends on that code to
produce correct aliases, so making it work is not optional.

Your change has no problem if logical==physical attribute
numbers.

Regards.

Hiroshi Inoue

#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hiroshi Inoue (#5)

Hiroshi Inoue <Inoue@tpf.co.jp> writes:

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

That would be a problem --- rule dumping depends on that code to
produce correct aliases, so making it work is not optional.

Your change has no problem if logical==physical attribute
numbers.

But if they're not, what do we do? Can we define the order of the
alias-name lists as being one or the other numbering? (Offhand I'd
say it should be logical numbering, but I haven't chased the details.)
If neither of those work, we'll need some more complex datastructure
than a simple list.

regards, tom lane

#7 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#3)

Tom Lane wrote:

Hiroshi Inoue <Inoue@tpf.co.jp> writes:

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

That would be a problem --- rule dumping depends on that code to
produce correct aliases, so making it work is not optional.

Your change has no problem if logical==physical attribute
numbers.

But if they're not, what do we do? Can we define the order of the
alias-name lists as being one or the other numbering? (Offhand I'd
say it should be logical numbering, but I haven't chased the details.)
If neither of those work, we'll need some more complex datastructure
than a simple list.

I'm not sure if we could keep invariant attribute numbers.
Though I've used physical attribute numbers as much as possible
in my trial implementation, there's already an exception.
I had to use logical attribute numbers for FieldSelect node.

Regards.

Hiroshi Inoue
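
The logical/physical numbering scheme under discussion can be sketched as a thin mapping layer. This is a simplified illustration, not the actual pg_attribute machinery: the RelDesc struct, logical_to_physical(), and the MAXATTS bound are all invented for the example.

```c
#include <assert.h>

#define MAXATTS 8

/* Hypothetical sketch of logical/physical attribute numbers: physical
 * numbers never change (tuples on disk keep their layout), while the
 * logical numbering visible to SQL simply skips dropped columns. */
typedef struct {
    int natts;              /* physical attribute count */
    int dropped[MAXATTS];   /* 1 if that physical slot was dropped */
} RelDesc;

/* Map a logical attribute number (1-based) to its physical slot;
 * returns -1 if the logical number is out of range. */
static int logical_to_physical(const RelDesc *rel, int logattno)
{
    int seen = 0;
    for (int phys = 1; phys <= rel->natts; phys++) {
        if (rel->dropped[phys - 1])
            continue;
        if (++seen == logattno)
            return phys;
    }
    return -1;
}
```

The difficulty the thread circles around is exactly which code paths (rule dumping, FieldSelect, alias lists) must consult this mapping rather than assuming logical == physical.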

#8 Philip Warner
pjw@rhyme.com.au
In reply to: Hiroshi Inoue (#7)

At 12:05 6/10/00 +0900, Hiroshi Inoue wrote:

Tom Lane wrote:

Hiroshi Inoue <Inoue@tpf.co.jp> writes:

P.S. I've noticed that get_rte_attribute_name() seems to
break my implementation. I'm not sure if I could solve it.

That would be a problem --- rule dumping depends on that code to
produce correct aliases, so making it work is not optional.

Your change has no problem if logical==physical attribute
numbers.

But if they're not, what do we do? Can we define the order of the
alias-name lists as being one or the other numbering? (Offhand I'd
say it should be logical numbering, but I haven't chased the details.)
If neither of those work, we'll need some more complex datastructure
than a simple list.

I'm not sure if we could keep invariant attribute numbers.
Though I've used physical attribute numbers as much as possible
in my trial implementation, there's already an exception.
I had to use logical attribute numbers for FieldSelect node.

Not really a useful suggestion at this stage, but it seems to me that
storing plans and/or parse trees is possibly a false economy. Would it be
worth considering storing the relevant SQL (or a parse tree with field &
table names) and compiling the rule in each backend the first time it is
used? (and keep it for the life of the backend).

This would allow underlying view tables to be deleted/added as well as make
the above problem go away. The 'parse tree with names' would also enable
easy construction of dependency information when and if that is implemented...

----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/

#9 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#2)

seconded ...

On Fri, 29 Sep 2000, Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, I am opening this can of worms again. I personally would like to
see this code activated, even if it does take 2x the disk space to alter
a column. Hiroshi had other ideas. Where did we leave this? We have
one month to decide on a plan.

I think the plan should be to do nothing for 7.1. ALTER DROP COLUMN
isn't an especially pressing feature, and so I don't feel that we
should be hustling to squeeze it in just before beta. We're already
overdue for beta.

regards, tom lane

Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org

#10 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#4)

On Thu, 5 Oct 2000, Tom Lane wrote:

"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:

Seems some people expect the implementation in 7.1.
(recent [GENERAL] drop column?)
I could commit my local branch if people don't mind
backward incompatibility.

there have been several ideas thrown back and forth ... the best one that
I saw, forgetting who suggested it, had to do with the idea of locking the
table and doing an effective vacuum on that table with a 'row re-write'
happening ...

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?
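
The chunked shuffle proposed above can be modelled in a few lines. The sketch below is entirely hypothetical (drop_column_inplace() and the CHUNK constant are invented for illustration, and heap pages, MVCC, and crash recovery are ignored); it only demonstrates the space claim: the rewrite buffers CHUNK rows at a time, so peak extra space is proportional to the chunk size, not to the whole table.

```c
#include <assert.h>
#include <string.h>

#define CHUNK 2   /* rows buffered at a time (the mail suggests ~100) */

/* Hypothetical sketch of the bounded-space rewrite: narrow each row from
 * ncols to ncols-1 ints in place.  Each chunk of wide rows is copied
 * aside first (the "move to the end of the file" step in the mail), so
 * the packed writer never overruns unread source data.  Peak extra
 * space is CHUNK * ncols ints, not a full second copy of the table. */
static void drop_column_inplace(int *data, int nrows, int ncols, int dropped)
{
    int scratch[CHUNK * 8];  /* sketch assumes ncols <= 8 */
    int written = 0;         /* next free slot for narrowed data */

    for (int base = 0; base < nrows; base += CHUNK) {
        int n = (nrows - base < CHUNK) ? nrows - base : CHUNK;
        memcpy(scratch, data + base * ncols, n * ncols * sizeof(int));
        for (int r = 0; r < n; r++)
            for (int c = 0; c < ncols; c++)
                if (c != dropped)
                    data[written++] = scratch[r * ncols + c];
    }
}
```

What the model deliberately leaves out is the crash-safety question raised next in the thread: a partially rewritten file holds rows in two formats.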

#11 Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#10)

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?

Yes, I liked that too.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#11)

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?

Yes, I liked that too.

What happens if you crash partway through?

I don't think it's possible to build a crash-robust rewriting ALTER
process that doesn't use 2X disk space: you must have all the old tuples
AND all the new tuples down on disk simultaneously just before you
commit. The only way around 2X disk space is to adopt some logical
renumbering approach to the columns, so that you can pretend the dropped
column isn't there anymore when it really still is.

regards, tom lane

#13 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#12)

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?

Yes, I liked that too.

What happens if you crash partway through?

I don't think it's possible to build a crash-robust rewriting ALTER
process that doesn't use 2X disk space: you must have all the old tuples
AND all the new tuples down on disk simultaneously just before you
commit. The only way around 2X disk space is to adopt some logical
renumbering approach to the columns, so that you can pretend the dropped
column isn't there anymore when it really still is.

Yes, I liked the 2X disk space, and making the new tuples visible all at
once at the end.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#14 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#12)

On Mon, 9 Oct 2000, Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?

Yes, I liked that too.

What happens if you crash partway through?

what happens if you crash partway through a vacuum?

I don't think it's possible to build a crash-robust rewriting ALTER
process that doesn't use 2X disk space: you must have all the old
tuples AND all the new tuples down on disk simultaneously just before
you commit. The only way around 2X disk space is to adopt some
logical renumbering approach to the columns, so that you can pretend
the dropped column isn't there anymore when it really still is.

how about a combination of the two? basically, we're gonna want a vacuum
of the table after the alter to clean out those extra columns that we've
marked as 'dead' ... basically, anything that avoids that whole 2x disk
space option is cool ...

#15 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#13)

On Mon, 9 Oct 2000, Bruce Momjian wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, move the first 100 rows to the end of the table file, then take
100 and write it to position 0, 101 to position 1, etc ... that way, at
max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table
size ... either method is going to lock the file for a period of time, but
one is much more friendly as far as disk space is concerned *plus*, if RAM
is available for this, it might even be something that the backend could
use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and
the table is 24Meg in size, it could do it all in memory?

Yes, I liked that too.

What happens if you crash partway through?

I don't think it's possible to build a crash-robust rewriting ALTER
process that doesn't use 2X disk space: you must have all the old tuples
AND all the new tuples down on disk simultaneously just before you
commit. The only way around 2X disk space is to adopt some logical
renumbering approach to the columns, so that you can pretend the dropped
column isn't there anymore when it really still is.

Yes, I liked the 2X disk space, and making the new tuples visible all at
once at the end.

man, are you ever wishy-washy on this issue, aren't you? :) you like not
using 2x, you like using 2x ... :)

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#14)

The Hermit Hacker <scrappy@hub.org> writes:

What happens if you crash partway through?

what happens if you crash partway through a vacuum?

Nothing. Vacuum is crash-safe. ALTER TABLE should be too.

regards, tom lane

#17 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#16)

On Mon, 9 Oct 2000, Tom Lane wrote:

The Hermit Hacker <scrappy@hub.org> writes:

What happens if you crash partway through?

what happens if you crash partway through a vacuum?

Nothing. Vacuum is crash-safe. ALTER TABLE should be too.

Sorry, that's what I meant ... why should marking a column as 'deleted'
and running a 'vacuum' to clean up the physical table be any less
crash-safe?

#18 Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#17)

On Mon, 9 Oct 2000, Tom Lane wrote:

The Hermit Hacker <scrappy@hub.org> writes:

What happens if you crash partway through?

what happens if you crash partway through a vacuum?

Nothing. Vacuum is crash-safe. ALTER TABLE should be too.

Sorry, that's what I meant ... why should marking a column as 'deleted'
and running a 'vacuum' to clean up the physical table be any less
crash-safe?

It is not. The only downside is 2x disk space to make new versions of
the tuple.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#19 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#18)

On Mon, 9 Oct 2000, Bruce Momjian wrote:

On Mon, 9 Oct 2000, Tom Lane wrote:

The Hermit Hacker <scrappy@hub.org> writes:

What happens if you crash partway through?

what happens if you crash partway through a vacuum?

Nothing. Vacuum is crash-safe. ALTER TABLE should be too.

Sorry, that's what I meant ... why should marking a column as 'deleted'
and running a 'vacuum' to clean up the physical table be any less
crash-safe?

It is not. The only downside is 2x disk space to make new versions of
the tuple.

huh? vacuum moves/cleans up tuples, as well as compresses them, so that
the end result is a smaller table than what it started with, with very
little increase in the total size/space needed to perform the vacuum ...

if we reduced vacuum such that it compressed at the field level vs tuple,
we could move a few tuples to the end of the table (crash safe) and then
move N+1 to position 1 minus that extra field. If we mark the column as
being deleted, then if the system crashes part way through, it should be
possible to continue after the system is brought up, no?

#20 Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#19)

Sorry, that's what I meant ... why should marking a column as 'deleted'
and running a 'vacuum' to clean up the physical table be any less
crash-safe?

It is not. The only downside is 2x disk space to make new versions of
the tuple.

huh? vacuum moves/cleans up tuples, as well as compresses them, so that
the end result is a smaller table than what it started with, with very
little increase in the total size/space needed to perform the vacuum ...

if we reduced vacuum such that it compressed at the field level vs tuple,
we could move a few tuples to the end of the table (crash safe) and then
move N+1 to position 1 minus that extra field. If we mark the column as
being deleted, then if the system crashes part way through, it should be
possible to continue after the system is brought up, no?

If it crashes in the middle, some rows have the column removed, and some
do not.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
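
The objection above (a crash leaves some rows with the column removed and some without) is why a resumable in-place rewrite would need per-row format information. A toy illustration, entirely hypothetical: the Row struct, its version tag, and narrow_row() are invented for the example, and nothing like this exists in the 7.x heap tuple format.

```c
#include <assert.h>

/* Hypothetical model: every row carries a format-version tag, so a
 * rewrite restarted after a crash can tell old-format rows from rows it
 * already narrowed, and simply continue. */
typedef struct {
    int version;   /* 0 = old format (3 cols), 1 = new format (2 cols) */
    int vals[3];
} Row;

/* Narrow one row by dropping column `dropped` (0-based); idempotent,
 * so re-running it after a crash is harmless. */
static void narrow_row(Row *row, int dropped)
{
    if (row->version == 1)
        return;                    /* already converted: skip on restart */
    for (int c = dropped; c < 2; c++)
        row->vals[c] = row->vals[c + 1];
    row->version = 1;
}
```

Without such a tag (or Tom's logical-numbering approach, which sidesteps physical rewriting altogether), a half-finished in-place rewrite is unrecoverable.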
#21 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#20)
#22 Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#19)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#21)
#24 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#23)
#25 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#23)
#26 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#25)
#27 Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#26)
#28 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#27)
#29 Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#28)
#30 Don Baccus
dhogaza@pacifier.com
In reply to: The Hermit Hacker (#24)
#31 Hannu Krosing
hannu@tm.ee
In reply to: The Hermit Hacker (#28)
#32 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Hannu Krosing (#31)
#33 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Zeugswetter Andreas SB (#32)
#34 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Zeugswetter Andreas SB (#33)
#35 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#34)
#36 Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#35)
#37 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#36)
#38 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: The Hermit Hacker (#37)
#39 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Zeugswetter Andreas SB (#38)
#40 Bruce Momjian
bruce@momjian.us
In reply to: Zeugswetter Andreas SB (#39)
#41 The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#40)
#42 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: The Hermit Hacker (#41)
#43 Hannu Krosing
hannu@tm.ee
In reply to: Zeugswetter Andreas SB (#33)
#44 Don Baccus
dhogaza@pacifier.com
In reply to: Zeugswetter Andreas SB (#33)
#45 The Hermit Hacker
scrappy@hub.org
In reply to: Zeugswetter Andreas SB (#42)
#46 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#40)
#47 Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#45)
#48 The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#47)
#49 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Don Baccus (#44)
#50 Bruce Momjian
bruce@momjian.us
In reply to: Hiroshi Inoue (#49)
#51 KuroiNeko
evpopkov@carrier.kiev.ua
In reply to: Hiroshi Inoue (#49)
#52 KuroiNeko
evpopkov@carrier.kiev.ua
In reply to: Hiroshi Inoue (#49)
#53 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hiroshi Inoue (#49)
#54 Stephan Szabo
sszabo@megazone23.bigpanda.com
In reply to: KuroiNeko (#52)
#55 KuroiNeko
evpopkov@carrier.kiev.ua
In reply to: Stephan Szabo (#54)
#56 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Tom Lane (#53)
#57 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#49)
#58 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Bruce Momjian (#40)
#59 Chris Bitmead
chris@bitmead.com
In reply to: Hiroshi Inoue (#49)
#60 Chris Bitmead
chris@bitmead.com
In reply to: Hiroshi Inoue (#49)
#61 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#49)
#62 Chris Bitmead
chris@bitmead.com
In reply to: Hiroshi Inoue (#49)
#63 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#49)
#64 Chris Bitmead
chris@bitmead.com
In reply to: Hiroshi Inoue (#49)
#65 Hannu Krosing
hannu@tm.ee
In reply to: Hiroshi Inoue (#49)
#66 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Hannu Krosing (#65)
#67 Don Baccus
dhogaza@pacifier.com
In reply to: Hiroshi Inoue (#49)
#68 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Don Baccus (#67)
#69 KuroiNeko
evpopkov@carrier.kiev.ua
In reply to: Hiroshi Inoue (#49)
#70 merlin
merlin@crimelabs.net
In reply to: Don Baccus (#67)
#71 Adam Haberlach
adam@newsnipple.com
In reply to: Chris Bitmead (#59)
#72 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Zeugswetter Andreas SB (#66)