multiline CSV fields

Started by Andrew Dunstan, over 21 years ago · 47 messages · pgsql-hackers
#1Andrew Dunstan
andrew@dunslane.net

Darcy Buskermolen has drawn my attention to unfortunate behaviour of
COPY CSV with fields containing embedded line end chars if the embedded
sequence isn't the same as those of the file containing the CSV data. In
that case we error out when reading the data in. This means there are
cases where we can produce a CSV data file which we can't read in, which
is not at all pleasant.

Possible approaches to the problem:
. make it a documented limitation
. have a "csv read" mode for backend/commands/copy.c:CopyReadLine() that
relaxes some of the restrictions on inconsistent line endings
. escape embedded line end chars

The last really isn't an option, because the whole point of CSVs is to
play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be escaped,
and do not produce them escaped.

So right now I'm tossing up in my head between the first two options. Or
maybe there's another solution I haven't thought of.

Thoughts?

cheers

andrew
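[Editorial note: the failure mode Andrew describes is easy to reproduce outside the backend. A minimal C sketch, illustrative only and not the copy.c code, of how a purely line-oriented reader miscounts records when a quoted field carries a different line-end style than the file itself:]

```c
#include <assert.h>

/* Count records the way a purely line-oriented reader would: every
 * newline byte ends a record, with no notion of CSV quoting. */
static int count_naive_records(const char *buf)
{
    int n = 0;
    for (const char *p = buf; *p; p++)
        if (*p == '\n')
            n++;
    return n;
}

/* Count records with minimal quote awareness: a newline inside a
 * double-quoted field is data, not a record terminator. */
static int count_quoted_records(const char *buf)
{
    int n = 0, in_quotes = 0;
    for (const char *p = buf; *p; p++)
    {
        if (*p == '"')
            in_quotes = !in_quotes;
        else if (*p == '\n' && !in_quotes)
            n++;
    }
    return n;
}
```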

#2Patrick B Kelly
pbk@patrickbkelly.org
In reply to: Andrew Dunstan (#1)
Re: multiline CSV fields

On Nov 10, 2004, at 6:10 PM, Andrew Dunstan wrote:

The last really isn't an option, because the whole point of CSVs is to
play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be
escaped, and do not produce them escaped.

Actually, when I try to export a sheet with multi-line cells from
excel, it tells me that this feature is incompatible with the CSV
format and will not include them in the CSV file.

Patrick B. Kelly
------------------------------------------------------
http://patrickbkelly.org

#3Andrew Dunstan
andrew@dunslane.net
In reply to: Patrick B Kelly (#2)
Re: multiline CSV fields

Patrick B Kelly wrote:

On Nov 10, 2004, at 6:10 PM, Andrew Dunstan wrote:

The last really isn't an option, because the whole point of CSVs is
to play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be
escaped, and do not produce them escaped.

Actually, when I try to export a sheet with multi-line cells from
excel, it tells me that this feature is incompatible with the CSV
format and will not include them in the CSV file.

It probably depends on the version. I have just tested with Excel 2000
on a WinXP machine and it both read and wrote these files.

cheers

andrew

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#3)
Re: multiline CSV fields

Andrew Dunstan <andrew@dunslane.net> writes:

Patrick B Kelly wrote:

Actually, when I try to export a sheet with multi-line cells from
excel, it tells me that this feature is incompatible with the CSV
format and will not include them in the CSV file.

It probably depends on the version. I have just tested with Excel 2000
on a WinXP machine and it both read and wrote these files.

I'd be inclined to define Excel 2000 as broken, honestly, if it's
writing unescaped newlines as data. To support this would mean throwing
away most of our ability to detect incorrectly formatted CSV files.
A simple error like a missing close quote would look to the machine like
the rest of the file is a single long data line where all the newlines
are embedded in data fields. How likely is it that you'll get a useful
error message out of that? Most likely the error message would point to
the end of the file, or at least someplace well removed from the actual
mistake.

I would vote in favor of removing the current code that attempts to
support unquoted newlines, and waiting to see if there are complaints.

regards, tom lane

#5Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#4)
Re: multiline CSV fields

Tom Lane wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

Patrick B Kelly wrote:

Actually, when I try to export a sheet with multi-line cells from
excel, it tells me that this feature is incompatible with the CSV
format and will not include them in the CSV file.

It probably depends on the version. I have just tested with Excel 2000
on a WinXP machine and it both read and wrote these files.

I'd be inclined to define Excel 2000 as broken, honestly, if it's
writing unescaped newlines as data. To support this would mean throwing
away most of our ability to detect incorrectly formatted CSV files.
A simple error like a missing close quote would look to the machine like
the rest of the file is a single long data line where all the newlines
are embedded in data fields. How likely is it that you'll get a useful
error message out of that? Most likely the error message would point to
the end of the file, or at least someplace well removed from the actual
mistake.

I would vote in favor of removing the current code that attempts to
support unquoted newlines, and waiting to see if there are complaints.

This feature was specifically requested when we discussed what sort of
CSVs we would handle.

And it does in fact work as long as the newline style is the same.

I just had an idea. How about if we add a new CSV option MULTILINE. If
absent, then on output we would not output unescaped LF/CR characters
and on input we would not allow fields with embedded unescaped LF/CR
characters. In both cases we could error out for now, with perhaps an
8.1 TODO to provide some other behaviour.

Or we could drop the whole multiline "feature" for now and make the
whole thing an 8.1 item, although it would be a bit of a pity when it
does work in what will surely be the most common case.

cheers

andrew

#6Greg Stark
gsstark@mit.edu
In reply to: Tom Lane (#4)
Re: multiline CSV fields

Tom Lane <tgl@sss.pgh.pa.us> writes:

I would vote in favor of removing the current code that attempts to
support unquoted newlines, and waiting to see if there are complaints.

Uhm. *raises hand*

I agree with your argument but one way or another I have to load these CSVs
I'm given. And like it or not virtually all the CSVs people get are going to
be coming from Excel. So far with 7.4 I've just opened them up in Emacs and
removed the newlines, but it's a royal pain in the arse.

--
greg

#7David Fetter
david@fetter.org
In reply to: Greg Stark (#6)
Re: multiline CSV fields

On Thu, Nov 11, 2004 at 03:38:16PM -0500, Greg Stark wrote:

Tom Lane <tgl@sss.pgh.pa.us> writes:

I would vote in favor of removing the current code that attempts
to support unquoted newlines, and waiting to see if there are
complaints.

Uhm. *raises hand*

I agree with your argument but one way or another I have to load
these CSVs I'm given. And like it or not virtually all the CSVs
people get are going to be coming from Excel. So far with 7.4 I've
just opened them up in Emacs and removed the newlines, but it's a
royal pain in the arse.

Meanwhile, check out dbi-link. It lets you query against DBI data
sources including DBD::Excel :)

http://pgfoundry.org/projects/dbi-link/

Bug reports welcome.

Cheers,
D
--
David Fetter david@fetter.org http://fetter.org/
phone: +1 510 893 6100 mobile: +1 415 235 3778

Remember to vote!

#8Patrick B Kelly
pbk@patrickbkelly.org
In reply to: Andrew Dunstan (#5)
Re: multiline CSV fields

On Nov 11, 2004, at 2:56 PM, Andrew Dunstan wrote:

Tom Lane wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

Patrick B Kelly wrote:

Actually, when I try to export a sheet with multi-line cells from
excel, it tells me that this feature is incompatible with the CSV
format and will not include them in the CSV file.

It probably depends on the version. I have just tested with Excel
2000 on a WinXP machine and it both read and wrote these files.

I'd be inclined to define Excel 2000 as broken, honestly, if it's
writing unescaped newlines as data. To support this would mean throwing
away most of our ability to detect incorrectly formatted CSV files.
A simple error like a missing close quote would look to the machine like
the rest of the file is a single long data line where all the newlines
are embedded in data fields. How likely is it that you'll get a useful
error message out of that? Most likely the error message would point to
the end of the file, or at least someplace well removed from the actual
mistake.

I would vote in favor of removing the current code that attempts to
support unquoted newlines, and waiting to see if there are complaints.

This feature was specifically requested when we discussed what sort of
CSVs we would handle.

And it does in fact work as long as the newline style is the same.

I just had an idea. How about if we add a new CSV option MULTILINE. If
absent, then on output we would not output unescaped LF/CR characters
and on input we would not allow fields with embedded unescaped LF/CR
characters. In both cases we could error out for now, with perhaps an
8.1 TODO to provide some other behaviour.

Or we could drop the whole multiline "feature" for now and make the
whole thing an 8.1 item, although it would be a bit of a pity when it
does work in what will surely be the most common case.

What about just coding a FSM into
backend/commands/copy.c:CopyReadLine() that does not process any flavor
of NL characters when it is inside of a data field?

Patrick B. Kelly
------------------------------------------------------
http://patrickbkelly.org

#9Andrew Dunstan
andrew@dunslane.net
In reply to: Patrick B Kelly (#8)
Re: multiline CSV fields

Patrick B Kelly wrote:

What about just coding a FSM into
backend/commands/copy.c:CopyReadLine() that does not process any
flavor of NL characters when it is inside of a data field?

It would be a major change - the routine doesn't read data a field at a
time, and has no idea if we are even in CSV mode at all. It would be
rather late in the dev cycle to be making such changes, I suspect.

cheers

andrew

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Patrick B Kelly (#8)
Re: multiline CSV fields

Patrick B Kelly <pbk@patrickbkelly.org> writes:

What about just coding a FSM into
backend/commands/copy.c:CopyReadLine() that does not process any flavor
of NL characters when it is inside of a data field?

CopyReadLine has no business tracking that. One reason why not is that
it is dealing with data not yet converted out of the client's encoding,
which makes matching to user-specified quote/escape characters
difficult.

regards, tom lane

#11Patrick B Kelly
pbk@patrickbkelly.org
In reply to: Tom Lane (#10)
Re: multiline CSV fields

On Nov 11, 2004, at 6:16 PM, Tom Lane wrote:

Patrick B Kelly <pbk@patrickbkelly.org> writes:

What about just coding a FSM into
backend/commands/copy.c:CopyReadLine() that does not process any flavor
of NL characters when it is inside of a data field?

CopyReadLine has no business tracking that. One reason why not is that
it is dealing with data not yet converted out of the client's encoding,
which makes matching to user-specified quote/escape characters
difficult.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

I appreciate what you are saying about the encoding and you are, of
course, right but CopyReadLine is already processing the NL characters
and it is doing it without considering the context in which they
appear. Unfortunately, the same character(s) are used for two different
purposes in the files in question. Without considering whether they
appear inside or outside of data fields, CopyReadLine will mistake one
for the other and cannot correctly do what it is already trying to do
which is break the input file into lines.

My suggestion is to simply have CopyReadLine recognize these two states
(in-field and out-of-field) and execute the current logic only while in
the second state. It would not be too hard but as you mentioned it is
non-trivial.

Patrick B. Kelly
------------------------------------------------------
http://patrickbkelly.org

#12Andrew Dunstan
andrew@dunslane.net
In reply to: Patrick B Kelly (#11)
Re: multiline CSV fields

Patrick B Kelly wrote:

My suggestion is to simply have CopyReadLine recognize these two
states (in-field and out-of-field) and execute the current logic only
while in the second state. It would not be too hard but as you
mentioned it is non-trivial.

We don't know what state we expect the end of line to be in until after
we have actually read the line. To know how to treat the end of line on
your scheme we would have to parse as we go rather than after reading
the line as now. Changing this would not only be non-trivial but
significantly invasive to the code.

cheers

andrew

#13Patrick B Kelly
pbk@patrickbkelly.org
In reply to: Andrew Dunstan (#12)
Re: multiline CSV fields

On Nov 11, 2004, at 10:07 PM, Andrew Dunstan wrote:

Patrick B Kelly wrote:

My suggestion is to simply have CopyReadLine recognize these two
states (in-field and out-of-field) and execute the current logic only
while in the second state. It would not be too hard but as you
mentioned it is non-trivial.

We don't know what state we expect the end of line to be in until
after we have actually read the line. To know how to treat the end of
line on your scheme we would have to parse as we go rather than after
reading the line as now. Changing this would not only be
non-trivial but significantly invasive to the code.

Perhaps I am misunderstanding the code. As I read it the code currently
goes through the input character by character looking for NL and EOF
characters. It appears to be very well structured for what I am
proposing. The section in question is a small and clearly defined loop
which reads the input one character at a time and decides when it has
reached the end of the line or file. Each call of CopyReadLine attempts
to get one more line. I would propose that each time it starts out in
the out-of-field state and the state is toggled by each un-escaped
quote that it encounters in the stream. When in the in-field state, it
would only look for the next un-escaped quote and while in the
out-of-field state, it would execute the existing logic as well as
looking for the next un-escaped quote.

I may not be explaining myself well or I may fundamentally
misunderstand how copy works. I would be happy to code the change and
send it to you for review, if you would be interested in looking it
over and it is felt to be a worthwhile capability.

Patrick B. Kelly
------------------------------------------------------
http://patrickbkelly.org
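[Editorial note: a rough C sketch of the two-state reader Patrick describes, illustrative only and not backend code. `read_csv_record` is a hypothetical helper; it also treats the doubled-quote CSV escape naturally, since "" toggles out of and straight back into the field, so no lookahead is needed:]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy one logical CSV record from src into dst, excluding the record
 * terminator.  Start out-of-field; every quote toggles the state; CR/LF
 * bytes terminate the record only while out-of-field.  Returns a
 * pointer past the consumed input, or NULL when src is exhausted. */
static const char *read_csv_record(const char *src, char *dst, size_t dstlen)
{
    int in_field = 0;
    size_t n = 0;

    if (*src == '\0')
        return NULL;
    for (; *src; src++)
    {
        char c = *src;

        if (c == '"')
            in_field = !in_field;       /* "" toggles out and back in */
        else if (!in_field && (c == '\n' || c == '\r'))
        {
            /* swallow a two-byte CR/LF or LF/CR terminator as one */
            if ((src[1] == '\n' || src[1] == '\r') && src[1] != c)
                src++;
            src++;
            break;
        }
        if (n + 1 < dstlen)
            dst[n++] = c;
    }
    dst[n] = '\0';
    return src;
}
```

As the thread notes, the real CopyReadLine sees raw, unconverted bytes and has no CSV context, which is why this kind of change is considered invasive.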

#14Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#1)
Re: multiline CSV fields

Can I see an example of such a failure line?

---------------------------------------------------------------------------

Andrew Dunstan wrote:

Darcy Buskermolen has drawn my attention to unfortunate behaviour of
COPY CSV with fields containing embedded line end chars if the embedded
sequence isn't the same as those of the file containing the CSV data. In
that case we error out when reading the data in. This means there are
cases where we can produce a CSV data file which we can't read in, which
is not at all pleasant.

Possible approaches to the problem:
. make it a documented limitation
. have a "csv read" mode for backend/commands/copy.c:CopyReadLine() that
relaxes some of the restrictions on inconsistent line endings
. escape embedded line end chars

The last really isn't an option, because the whole point of CSVs is to
play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be escaped,
and do not produce them escaped.

So right now I'm tossing up in my head between the first two options. Or
maybe there's another solution I haven't thought of.

Thoughts?

cheers

andrew

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Patrick B Kelly (#13)
Re: multiline CSV fields

Patrick B Kelly <pbk@patrickbkelly.org> writes:

I may not be explaining myself well or I may fundamentally
misunderstand how copy works.

Well, you're definitely ignoring the character-set-conversion issue.

regards, tom lane

#16Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#14)
Re: multiline CSV fields

This example should fail on data line 2 or 3 on any platform,
regardless of the platform's line-end convention, although I haven't
tested on Windows.

cheers

andrew

[andrew@aloysius inst]$ bin/psql -e -f csverr.sql ; od -c
/tmp/csverrtest.csv
create table csverrtest (a int, b text, c int);
CREATE TABLE
insert into csverrtest values(1,'a',1);
INSERT 122471 1
insert into csverrtest values(2,'foo\r\nbar',2);
INSERT 122472 1
insert into csverrtest values(3,'baz\nblurfl',3);
INSERT 122473 1
insert into csverrtest values(4,'d',4);
INSERT 122474 1
insert into csverrtest values(5,'e',5);
INSERT 122475 1
copy csverrtest to '/tmp/csverrtest.csv' csv;
COPY
truncate csverrtest;
TRUNCATE TABLE
copy csverrtest from '/tmp/csverrtest.csv' csv;
psql:cvserr.sql:9: ERROR: literal carriage return found in data
HINT: Use "\r" to represent carriage return.
CONTEXT: COPY csverrtest, line 2: "2,"foo"
drop table csverrtest;
DROP TABLE
0000000 1 , a , 1 \n 2 , " f o o \r \n b a
0000020 r " , 2 \n 3 , " b a z \n b l u r
0000040 f l " , 3 \n 4 , d , 4 \n 5 , e ,
0000060 5 \n
0000062
[andrew@aloysius inst]$

Bruce Momjian wrote:


Can I see an example of such a failure line?

---------------------------------------------------------------------------

Andrew Dunstan wrote:

Darcy Buskermolen has drawn my attention to unfortunate behaviour of
COPY CSV with fields containing embedded line end chars if the embedded
sequence isn't the same as those of the file containing the CSV data. In
that case we error out when reading the data in. This means there are
cases where we can produce a CSV data file which we can't read in, which
is not at all pleasant.

Possible approaches to the problem:
. make it a documented limitation
. have a "csv read" mode for backend/commands/copy.c:CopyReadLine() that
relaxes some of the restrictions on inconsistent line endings
. escape embedded line end chars

The last really isn't an option, because the whole point of CSVs is to
play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be escaped,
and do not produce them escaped.

So right now I'm tossing up in my head between the first two options. Or
maybe there's another solution I haven't thought of.

Thoughts?

cheers

andrew


#17Patrick B Kelly
pbk@patrickbkelly.org
In reply to: Tom Lane (#15)
Re: multiline CSV fields

On Nov 12, 2004, at 12:20 AM, Tom Lane wrote:

Patrick B Kelly <pbk@patrickbkelly.org> writes:

I may not be explaining myself well or I may fundamentally
misunderstand how copy works.

Well, you're definitely ignoring the character-set-conversion issue.

I was not trying to ignore the character set and encoding issues but
perhaps my assumptions are naive or overly optimistic. I realized that
quotes are not as consistent as the NL characters but I was assuming
that some encodings would escape to ASCII or a similar encoding like
JIS Roman that would simplify recognition of the quote character.
Unicode files make recognizing other punctuation like the quote fairly
straightforward and to the naive observer, the code in CopyReadLine as
it is currently written appears to handle multi-byte encodings such as
SJIS that may present characters below 127 in trailing bytes.

As I said, perhaps I was oversimplifying. Is there a regression test
set of input files that I could review to see all of the supported
encodings?

Patrick B. Kelly
------------------------------------------------------
http://patrickbkelly.org
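[Editorial note: the conversion hazard Tom raises is concrete. In Shift-JIS, for example, the character U+8868 is encoded as the bytes 0x95 0x5C, and the trailing 0x5C is the same byte as ASCII backslash. A sketch, with a hypothetical `naive_find_backslash`, of how a byte-wise scan for an escape character misfires when run before the data is converted out of the client encoding:]

```c
#include <assert.h>
#include <stddef.h>

/* Scan raw bytes for 0x5C ('\\') with no encoding awareness, the way a
 * pre-conversion escape search would.  Returns the offset of the first
 * hit, or -1.  On SJIS input this can report a spurious backslash in
 * the middle of a two-byte character. */
static int naive_find_backslash(const unsigned char *s, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (s[i] == 0x5C)
            return (int) i;
    return -1;
}
```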

#18Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#16)
Re: multiline CSV fields

OK, what solutions do we have for this? Not being able to load dumped
data is a serious bug. I have added this to the open items list:

* fix COPY CSV with \r,\n in data

My feeling is that if we are in a quoted string we just process whatever
characters we find, even passing through an EOL. I realize it might not
mark missing quote errors well but that seems minor compared to not
loading valid data.

---------------------------------------------------------------------------

Andrew Dunstan wrote:

This example should fail on data line 2 or 3 on any platform,
regardless of the platform's line-end convention, although I haven't
tested on Windows.

cheers

andrew

[andrew@aloysius inst]$ bin/psql -e -f csverr.sql ; od -c
/tmp/csverrtest.csv
create table csverrtest (a int, b text, c int);
CREATE TABLE
insert into csverrtest values(1,'a',1);
INSERT 122471 1
insert into csverrtest values(2,'foo\r\nbar',2);
INSERT 122472 1
insert into csverrtest values(3,'baz\nblurfl',3);
INSERT 122473 1
insert into csverrtest values(4,'d',4);
INSERT 122474 1
insert into csverrtest values(5,'e',5);
INSERT 122475 1
copy csverrtest to '/tmp/csverrtest.csv' csv;
COPY
truncate csverrtest;
TRUNCATE TABLE
copy csverrtest from '/tmp/csverrtest.csv' csv;
psql:cvserr.sql:9: ERROR: literal carriage return found in data
HINT: Use "\r" to represent carriage return.
CONTEXT: COPY csverrtest, line 2: "2,"foo"
drop table csverrtest;
DROP TABLE
0000000 1 , a , 1 \n 2 , " f o o \r \n b a
0000020 r " , 2 \n 3 , " b a z \n b l u r
0000040 f l " , 3 \n 4 , d , 4 \n 5 , e ,
0000060 5 \n
0000062
[andrew@aloysius inst]$

Bruce Momjian wrote:

Can I see an example of such a failure line?

---------------------------------------------------------------------------

Andrew Dunstan wrote:

Darcy Buskermolen has drawn my attention to unfortunate behaviour of
COPY CSV with fields containing embedded line end chars if the embedded
sequence isn't the same as those of the file containing the CSV data. In
that case we error out when reading the data in. This means there are
cases where we can produce a CSV data file which we can't read in, which
is not at all pleasant.

Possible approaches to the problem:
. make it a documented limitation
. have a "csv read" mode for backend/commands/copy.c:CopyReadLine() that
relaxes some of the restrictions on inconsistent line endings
. escape embedded line end chars

The last really isn't an option, because the whole point of CSVs is to
play with other programs, and my understanding is that those that
understand multiline fields (e.g. Excel) expect them not to be escaped,
and do not produce them escaped.

So right now I'm tossing up in my head between the first two options. Or
maybe there's another solution I haven't thought of.

Thoughts?

cheers

andrew


-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#19Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#18)
Re: multiline CSV fields

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, what solutions do we have for this? Not being able to load dumped
data is a serious bug.

Which we do not have, because pg_dump doesn't use CSV. I do not think
this is a must-fix, especially not if the proposed fix introduces
inconsistencies elsewhere.

regards, tom lane

#20Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#19)
Re: multiline CSV fields

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, what solutions do we have for this? Not being able to load dumped
data is a serious bug.

Which we do not have, because pg_dump doesn't use CSV. I do not think
this is a must-fix, especially not if the proposed fix introduces
inconsistencies elsewhere.

Sure, pg_dump doesn't use it but COPY should be able to load anything it
output.

Can this be fixed if we ignore the problem with reporting errors?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#21Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#20)
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#20)
#23Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#22)
#24Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#23)
#25Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#24)
#26Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#25)
#27Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#25)
#28Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#26)
#29Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#26)
#30Kris Jurka
books@ejurka.com
In reply to: Andrew Dunstan (#27)
#31Bruce Momjian
bruce@momjian.us
In reply to: Kris Jurka (#30)
#32Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kris Jurka (#30)
#33Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#32)
#34Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#32)
#35Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#34)
#36Kris Jurka
books@ejurka.com
In reply to: Bruce Momjian (#35)
#37Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#35)
#38Noname
Ben.Young@risk.sungard.com
In reply to: Andrew Dunstan (#37)
#39Andrew Dunstan
andrew@dunslane.net
In reply to: Noname (#38)
#40Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#37)
#41Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#37)
#42Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#41)
#43Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#42)
#44Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#27)
#45Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#44)
#46Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#45)
#47Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#44)