LWLock contention: I think I understand the problem

Started by Tom Lane · over 24 years ago · 61 messages · pgsql-hackers
#1Tom Lane
tgl@sss.pgh.pa.us

After some further experimentation, I believe I understand the reason for
the reports we've had of 7.2 producing heavy context-swap activity where
7.1 didn't. Here is an extract from tracing lwlock activity for one
backend in a pgbench run:

2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(300): excl 0 shared 1 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(232): excl 0 shared 0 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(232): excl 0 shared 1 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(300): excl 0 shared 1 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(232): excl 0 shared 0 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(232): excl 0 shared 1 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(300): excl 0 shared 1 head (nil)
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting
2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4
2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter

LWLock 0 is the BufMgrLock, while the locks with numbers like 232 and
300 are context locks for individual buffers. At the beginning of this
trace we see the process awoken after having been granted the
BufMgrLock. It does a small amount of processing (probably a ReadBuffer
operation) and releases the BufMgrLock. At that point, someone else is
already waiting for BufMgrLock, and the line about "release waiter"
means that ownership of BufMgrLock has been transferred to that other
someone. Next, the context lock 300 is acquired and released (there's no
contention for it). Next we need to get the BufMgrLock again (probably
to do a ReleaseBuffer). Since we've already granted the BufMgrLock to
someone else, we are forced to block here. When control comes back,
we do the ReleaseBuffer and then release the BufMgrLock --- again,
immediately granting it to someone else. That guarantees that our next
attempt to acquire BufMgrLock will cause us to block. The cycle repeats
for every attempt to lock BufMgrLock.

In essence, what we're seeing here is a "tag team" behavior: someone is
always waiting on the BufMgrLock, and so each LWLockRelease(BufMgrLock)
transfers lock ownership to someone else; then the next
LWLockAcquire(BufMgrLock) in the same process is guaranteed to block;
and that means we have a new waiter on BufMgrLock, so that the cycle
repeats. Net result: a process context swap for *every* entry to the
buffer manager.

In previous versions, since BufMgrLock was only a spinlock, releasing it
did not cause ownership of the lock to be immediately transferred to
someone else. Therefore, the releaser would be able to re-acquire the
lock if he wanted to do another bufmgr operation before his time quantum
expired. This made for many fewer context swaps.

It would seem, therefore, that lwlock.c's behavior of immediately
granting the lock to released waiters is not such a good idea after all.
Perhaps we should release waiters but NOT grant them the lock; when they
get to run, they have to loop back, try to get the lock, and possibly go
back to sleep if they fail. This apparent waste of cycles is actually
beneficial because it saves context swaps overall.

Comments?

regards, tom lane

#2Thomas Lockhart
lockhart@fourpalms.org
In reply to: Tom Lane (#1)
Re: LWLock contention: I think I understand the problem

...

It would seem, therefore, that lwlock.c's behavior of immediately
granting the lock to released waiters is not such a good idea after all.
Perhaps we should release waiters but NOT grant them the lock; when they
get to run, they have to loop back, try to get the lock, and possibly go
back to sleep if they fail. This apparent waste of cycles is actually
beneficial because it saves context swaps overall.

Hmm. Seems reasonable. In some likely scenarios, it would seem that the
waiters *could* grab the lock when they are next scheduled, since the
current locker would have finished at least one
grab/release/grab/release cycle in the meantime.

How hard will it be to try this out?

- Thomas

#3Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#1)
Re: LWLock contention: I think I understand the problem

It would seem, therefore, that lwlock.c's behavior of immediately
granting the lock to released waiters is not such a good idea after all.
Perhaps we should release waiters but NOT grant them the lock; when they
get to run, they have to loop back, try to get the lock, and possibly go
back to sleep if they fail. This apparent waste of cycles is actually
beneficial because it saves context swaps overall.

I still need to think about this, but the above idea doesn't seem good.
Right now, we wake only one waiting process who gets the lock while
other waiters stay sleeping, right? If we don't give them the lock,
don't we have to wake up all the waiters? If there are many, that
sounds like lots of context switches, no?

I am still thinking.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#4Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#1)
Re: LWLock contention: I think I understand the problem

It would seem, therefore, that lwlock.c's behavior of immediately
granting the lock to released waiters is not such a good idea after all.
Perhaps we should release waiters but NOT grant them the lock; when they
get to run, they have to loop back, try to get the lock, and possibly go
back to sleep if they fail. This apparent waste of cycles is actually
beneficial because it saves context swaps overall.

Another question: Is there a way to release buffer locks without
acquiring the master lock?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#3)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I still need to think about this, but the above idea doesn't seem good.
Right now, we wake only one waiting process who gets the lock while
other waiters stay sleeping, right? If we don't give them the lock,
don't we have to wake up all the waiters?

No. We'll still wake up the same processes as now: either one would-be
exclusive lock holder, or multiple would-be shared lock holders.
But what I'm proposing is that they don't get granted the lock at that
instant; they have to try to get the lock once they actually start to
run.

Once in a while, they'll fail to get the lock, either because the
original releaser reacquired the lock, and then ran out of his time
quantum before releasing it, or because some third process came along
and acquired the lock. In either of these scenarios they'd have to
block again, and we'd have wasted a process dispatch cycle. The
important thing though is that the current arrangement wastes a process
dispatch cycle for every acquisition of a contended-for lock.

What I had not really focused on before, but it's now glaringly obvious,
is that on modern machines one process time quantum (0.01 sec typically)
is enough time for a LOT of computation, in particular an awful lot of
trips through the buffer manager or other modules with shared state.
We want to be sure that a process can repeatedly acquire and release
the shared lock for as long as its time quantum holds out, even if there
are other processes waiting for the lock. Otherwise we'll be swapping
processes too often.

regards, tom lane

#6Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Lockhart (#2)
Re: LWLock contention: I think I understand the problem

Thomas Lockhart <lockhart@fourpalms.org> writes:

How hard will it be to try this out?

It's a pretty minor rearrangement of the logic in lwlock.c, I think.
Working on it now.

regards, tom lane

#7Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#5)
Re: LWLock contention: I think I understand the problem

No. We'll still wake up the same processes as now: either one would-be
exclusive lock holder, or multiple would-be shared lock holders.
But what I'm proposing is that they don't get granted the lock at that
instant; they have to try to get the lock once they actually start to
run.

Once in a while, they'll fail to get the lock, either because the
original releaser reacquired the lock, and then ran out of his time
quantum before releasing it, or because some third process came along
and acquired the lock. In either of these scenarios they'd have to
block again, and we'd have wasted a process dispatch cycle. The
important thing though is that the current arrangement wastes a process
dispatch cycle for every acquisition of a contended-for lock.

What I had not really focused on before, but it's now glaringly obvious,
is that on modern machines one process time quantum (0.01 sec typically)
is enough time for a LOT of computation, in particular an awful lot of
trips through the buffer manager or other modules with shared state.
We want to be sure that a process can repeatedly acquire and release
the shared lock for as long as its time quantum holds out, even if there
are other processes waiting for the lock. Otherwise we'll be swapping
processes too often.

OK, I understand what you are saying now. You are not talking about the
SysV semaphore but a level above that.

What you are saying is that when we release a lock, we are currently
automatically giving it to another process that is asleep and may not be
scheduled to run for some time. We then continue processing, and when
we need that lock again, we can't get it because the sleeper is holding
it. We go to sleep and the sleeper wakes up, gets the lock, and
continues.

What you want to do is to wake up the sleeper but not give them the lock
until they are actually running and can acquire it themselves.

Seems like a no-brainer win to me. Giving the lock to a process that is
not currently running seems quite bad to me. It would be one thing if
we were trying to do some real-time processing, but throughput is the
key for us.

If you code up a patch, I will test it on my SMP machine using pgbench.
Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#8Jeffrey W. Baker
jwbaker@acm.org
In reply to: Tom Lane (#1)
Re: LWLock contention: I think I understand the problem

On Sat, 29 Dec 2001, Tom Lane wrote:

After some further experimentation, I believe I understand the reason for
the reports we've had of 7.2 producing heavy context-swap activity where
7.1 didn't. Here is an extract from tracing lwlock activity for one
backend in a pgbench run:

...

It would seem, therefore, that lwlock.c's behavior of immediately
granting the lock to released waiters is not such a good idea after all.
Perhaps we should release waiters but NOT grant them the lock; when they
get to run, they have to loop back, try to get the lock, and possibly go
back to sleep if they fail. This apparent waste of cycles is actually
beneficial because it saves context swaps overall.

Sounds reasonable enough, but there seems to be a possibility of a process
starving. For example, if A releases the lock, B and C wake up, B gets
the lock. Then B releases the lock, A and C wake, and A gets the lock
back. C gets CPU time but never gets the lock.

BTW I am not on this list.

-jwb

#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#7)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

What you want to do is to wake up the sleeper but not give them the lock
until they are actually running and can acquire it themselves.

Yeah. Essentially this is a partial reversion to the idea of a
spinlock. But it's more efficient than our old implementation with
timed waits between retries, because (a) a process will not be awoken
unless it has a chance at getting the lock, and (b) when a contended-for
lock is freed, a waiting process will be made ready immediately, rather
than waiting for a time tick to elapse. So, if the lock-releasing
process does block before the end of its quantum, the released process
is available to run immediately. Under the old scheme, a process that
had failed to get a spinlock couldn't run until its select wait timed
out, even if the lock were now available. So I think it's still a net
win to have the LWLock mechanism in there, rather than just changing
them back to spinlocks.

If you code up a patch, I will test it on my SMP machine using pgbench.
Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.

Attached is a proposed patch (against the current-CVS version of
lwlock.c). I haven't committed this yet, but it seems to be a win on
a single CPU. Can people try it on multi CPUs?

regards, tom lane

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#4)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Another question: Is there a way to release buffer locks without
acquiring the master lock?

We might want to think about making bufmgr locking more fine-grained
... in a future release. For 7.2 I don't really want to mess around
with the bufmgr logic at this late hour. Too risky.

regards, tom lane

#11Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#10)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Another question: Is there a way to release buffer locks without
acquiring the master lock?

We might want to think about making bufmgr locking more fine-grained
... in a future release. For 7.2 I don't really want to mess around
with the bufmgr logic at this late hour. Too risky.

You want a TODO item on this?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#11)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

We might want to think about making bufmgr locking more fine-grained
... in a future release. For 7.2 I don't really want to mess around
with the bufmgr logic at this late hour. Too risky.

You want a TODO item on this?

Sure. But don't phrase it as just a bufmgr problem. Maybe:

* Make locking of shared data structures more fine-grained

regards, tom lane

#13Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#9)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

What you want to do is to wake up the sleeper but not give them the lock
until they are actually running and can acquire it themselves.

Yeah. Essentially this is a partial reversion to the idea of a
spinlock. But it's more efficient than our old implementation with
timed waits between retries, because (a) a process will not be awoken
unless it has a chance at getting the lock, and (b) when a contended-for
lock is freed, a waiting process will be made ready immediately, rather
than waiting for a time tick to elapse. So, if the lock-releasing
process does block before the end of its quantum, the released process
is available to run immediately. Under the old scheme, a process that
had failed to get a spinlock couldn't run until its select wait timed
out, even if the lock were now available. So I think it's still a net
win to have the LWLock mechanism in there, rather than just changing
them back to spinlocks.

If you code up a patch, I will test it on my SMP machine using pgbench.
Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.

Attached is a proposed patch (against the current-CVS version of
lwlock.c). I haven't committed this yet, but it seems to be a win on
a single CPU. Can people try it on multi CPUs?

OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is
before the patch, the second after. Both average 14tps, so the patch
has no negative effect on my system. Of course, it has no positive
effect either. :-)

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Attachments:

/root/pgbench2_7.2 (text/plain)
/root/pgbench2_7.2_v2 (text/plain)
#14Jeffrey W. Baker
jwbaker@acm.org
In reply to: Bruce Momjian (#13)
Re: LWLock contention: I think I understand the problem

On Sat, 29 Dec 2001, Bruce Momjian wrote:

OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is
before the patch, the second after. Both average 14tps, so the patch
has no negative effect on my system. Of course, it has no positive
effect either. :-)

Actually it looks slightly worse with the patch. What about CPU usage?

-jwb

#15Bruce Momjian
bruce@momjian.us
In reply to: Jeffrey W. Baker (#14)
Re: LWLock contention: I think I understand the problem

On Sat, 29 Dec 2001, Bruce Momjian wrote:

OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is
before the patch, the second after. Both average 14tps, so the patch
has no negative effect on my system. Of course, it has no positive
effect either. :-)

Actually it looks slightly worse with the patch. What about CPU usage?

Yes, slightly, but I have better performance on 2 cpu's than 1, so I
didn't expect to see any major change, partially because the context
switching overhead problem doesn't seem to exist on this OS.

If we find that it helps single-cpu machines, and perhaps helps machines
that had worse performance on SMP than single-cpu, my guess is it would
be a win, in general.

Let me tell you what I did to test it. I ran /contrib/pgbench. I had
the postmaster configured with 1000 buffers, and ran pgbench with a
scale of 50. I then ran it with 1, 10, 25, and 50 clients using 1000
transactions.

The commands were:

$ createdb pgbench
$ pgbench -i -s 50
$ for CLIENT in 1 10 25 50
do
pgbench -c $CLIENT -t 1000 pgbench
done | tee -a pgbench2_7.2

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#16Bruce Momjian
bruce@momjian.us
In reply to: Jeffrey W. Baker (#14)
Re: LWLock contention: I think I understand the problem

On Sat, 29 Dec 2001, Bruce Momjian wrote:

OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is
before the patch, the second after. Both average 14tps, so the patch
has no negative effect on my system. Of course, it has no positive
effect either. :-)

Actually it looks slightly worse with the patch. What about CPU usage?

For 5 clients, CPU's are 96% idle. Load average is around 5. Seems
totally I/O bound.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#17Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#15)
Re: LWLock contention: I think I understand the problem

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is
before the patch, the second after. Both average 14tps, so the patch
has no negative effect on my system. Of course, it has no positive
effect either. :-)

I am also having a hard time measuring any difference using pgbench.
However, pgbench is almost entirely I/O bound on my hardware (CPU is
typically 70-80% idle) so this is not very surprising.

I can confirm that the patch accomplishes the intended goal of reducing
context swaps. Using pgbench with 64 clients, a profile of the old code
showed about 7% of LWLockAcquire calls blocking (invoking
IpcSemaphoreLock). A profile of the new code shows 0.1% of the calls
blocking.

I suspect that we need something less I/O-bound than pgbench to really
tell whether this patch is worthwhile or not. Jeffrey, what are you
seeing in your application?

And btw, what are you using to count context swaps?

regards, tom lane

#18Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#9)
Re: LWLock contention: I think I understand the problem

If you code up a patch, I will test it on my SMP machine using pgbench.
Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.

Attached is a proposed patch (against the current-CVS version of
lwlock.c). I haven't committed this yet, but it seems to be a win on
a single CPU. Can people try it on multi CPUs?

Your patches seem to have slightly enhanced 7.2 performance on AIX 5L
(still slower than 7.1, however).

Attachments:

bench.png (image/png)
#19Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#18)
Re: LWLock contention: I think I understand the problem

Tatsuo Ishii <t-ishii@sra.co.jp> writes:

Your patches seem to have slightly enhanced 7.2 performance on AIX 5L
(still slower than 7.1, however).

It's awfully hard to see what's happening near the left end of that
chart. May I suggest plotting the x-axis on a log scale?

regards, tom lane

#20Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#18)
Re: LWLock contention: I think I understand the problem

I have thought of a further refinement to the patch I produced
yesterday. Assume that there are multiple waiters blocked on (eg)
BufMgrLock. After we release the first one, we want the currently
running process to be able to continue acquiring and releasing the lock
for as long as its time quantum holds out. But in the patch as given,
each acquire/release cycle releases another waiter. This is probably
not good.

Attached is a modification that prevents additional waiters from being
released until the first releasee has a chance to run and acquire the
lock. Would you try this and see if it's better or not in your test
cases? It doesn't seem to help on a single CPU, but maybe on multiple
CPUs it'll make a difference.

To try to make things simple, I've attached the mod in two forms:
as a diff from current CVS, and as a diff from the previous patch.

regards, tom lane

#21Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#20)
#22Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#20)
#23Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#20)
#24Bruce Momjian
bruce@momjian.us
In reply to: Tatsuo Ishii (#22)
#25Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Bruce Momjian (#24)
#26Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Tatsuo Ishii (#25)
#27Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#22)
#28Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#27)
#29Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#27)
#30Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#29)
#31Hannu Krosing
hannu@tm.ee
In reply to: Bruce Momjian (#29)
#32Hannu Krosing
hannu@tm.ee
In reply to: Bruce Momjian (#29)
#33Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hannu Krosing (#32)
#34Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#7)
#35Kenny H Klatt
kklatt@csd.uwm.edu
In reply to: Bruce Momjian (#7)
#36Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#31)
#37Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#30)
#38Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#37)
#39Jeffrey W. Baker
jwbaker@acm.org
In reply to: Bruce Momjian (#37)
#40Bruce Momjian
bruce@momjian.us
In reply to: Jeffrey W. Baker (#39)
#41Fredrik Estreen
estreen@algonet.se
In reply to: Bruce Momjian (#7)
#42Hannu Krosing
hannu@tm.ee
In reply to: Bruce Momjian (#29)
#43Fredrik Estreen
estreen@algonet.se
In reply to: Bruce Momjian (#7)
#44Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#22)
#45Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#44)
#46Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#44)
#47Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#46)
#48Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#9)
#49Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hannu Krosing (#48)
#50Ashley Cambrell
ash@freaky-namuh.com
In reply to: Tom Lane (#9)
#51Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#49)
#52Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#9)
#53Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hannu Krosing (#51)
#54Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hannu Krosing (#52)
#55Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#9)
#56Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hannu Krosing (#55)
#57Gilles Darold
gilles@darold.net
In reply to: Tom Lane (#9)
#58Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Tom Lane (#9)
#59Luis Alberto Amigo Navarro
lamigo@atc.unican.es
In reply to: Tom Lane (#9)
#60Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#56)
#61Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#60)