Bug: Buffer cache is not scan resistant

Started by Luke Lonergan · about 19 years ago · 95 messages · pgsql-hackers
#1 Luke Lonergan
llonergan@greenplum.com

I'm putting this out there before we publish a fix so that we can discuss
how best to fix it.

Doug and Sherry recently found the source of an important performance issue
with the Postgres shared buffer cache.

The issue is summarized like this: the buffer cache in PGSQL is not "scan
resistant" as advertised. A sequential scan of a table larger than cache
will pollute the buffer cache in almost all circumstances.

Here is performance of GPDB 2.301 (Postgres 8.1.6) on a single X4500
(thumper-3) with 4 cores where "bigtable" is a table 2x the size of RAM and
"memtable" is a table that fits into I/O cache:

With our default setting of shared_buffers (16MB):

Operation        memtable    bigtable
---------------------------------------------------
SELECT COUNT(*)  1221 MB/s    973 MB/s
VACUUM           1709 MB/s   1206 MB/s

We had observed that VACUUM would perform better when done right after a
SELECT. In the above example, the faster rate from disk was 1608 MB/s,
compared to the normal rate of 1206 MB/s.

We verified this behavior on Postgres 8.2 as well. The buffer selection
algorithm is choosing buffer pages scattered throughout the buffer cache in
almost all circumstances.

Sherry traced the behavior to the processor repeatedly flushing the L2
cache. Doug found that we weren't using the Postgres buffer cache the way
we expected, instead we were loading the scanned data from disk into the
cache even though there was no possibility of reusing it. In addition to
pushing other, possibly useful pages from the cache, it has the additional
behavior of invalidating the L2 cache for the remainder of the executor path
that uses the data.

To prove that the buffer cache was the source of the problem, we dropped the
shared buffer size to fit into L2 cache (1MB per Opteron core), and this is
what we saw (drop size of shared buffers to 680KB):

Operation        memtable    bigtable
---------------------------------------------------
SELECT COUNT(*)  1320 MB/s   1059 MB/s
VACUUM           3033 MB/s   1597 MB/s

These results do not vary with the order of operations.

Thoughts on the best way to fix the buffer selection algorithm? Ideally,
one page would be used in the buffer cache in circumstances where the table
to be scanned is (significantly?) larger than the size of the buffer cache.

- Luke

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Luke Lonergan (#1)
Re: Bug: Buffer cache is not scan resistant

"Luke Lonergan" <llonergan@greenplum.com> writes:

The issue is summarized like this: the buffer cache in PGSQL is not "scan
resistant" as advertised.

Sure it is. As near as I can tell, your real complaint is that the
bufmgr doesn't attempt to limit its usage footprint to fit in L2 cache;
which is hardly surprising considering it doesn't know the size of L2
cache. That's not a consideration that we've ever taken into account.

I'm also less than convinced that it'd be helpful for a big seqscan:
won't reading a new disk page into memory via DMA cause that memory to
get flushed from the processor cache anyway? I wonder whether your
numbers are explained by some other consideration than you think.

regards, tom lane

#3 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#2)
Re: Bug: Buffer cache is not scan resistant

When we instrument the page selections made within the buffer cache, they are sequential and span the entire address space of the cache.

With respect to whether it's L2: that conclusion is based on the experimental results. It's not the TLB, as we also tested for the 512 TLB entries per L2.

One thing I left out of the previous post: the difference between fast and slow behavior was that in the fast case, the buffer selection alternated between two buffer pages. This happened only when the preceding statement was a SELECT and the current statement was a VACUUM.

- Luke

Msg is shrt cuz m on ma treo


#4 Luke Lonergan
llonergan@greenplum.com
In reply to: Luke Lonergan (#3)
Re: Bug: Buffer cache is not scan resistant

One more thing: the L2 is invalidated when re-written from the kernel IO cache, but the pages addressed in L2 retain their values when 'written through', which allows the new data to be re-used up the executor chain.

- Luke


#5 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Tom Lane (#2)
Re: Bug: Buffer cache is not scan resistant

Tom Lane wrote:

"Luke Lonergan" <llonergan@greenplum.com> writes:

The issue is summarized like this: the buffer cache in PGSQL is not "scan
resistant" as advertised.

Sure it is. As near as I can tell, your real complaint is that the
bufmgr doesn't attempt to limit its usage footprint to fit in L2 cache;
which is hardly surprising considering it doesn't know the size of L2
cache. That's not a consideration that we've ever taken into account.

To add a little to this - forgetting the scan resistant point for the
moment... cranking down shared_buffers to be smaller than the L2 cache
seems to help *any* sequential scan immensely, even on quite modest HW:

e.g: PIII 1.26Ghz 512Kb L2 cache, 2G ram,

SELECT count(*) FROM lineitem (which is about 11GB) performance:

Shared_buffers  Elapsed
--------------  -------
400MB           101 s
128KB            74 s

When I've profiled this activity, I've seen a lot of time spent
searching for/allocating a new buffer for each page being fetched.
Obviously having less of them to search through will help, but having
less than the L2 cache-size worth of 'em seems to help a whole lot!

Cheers

Mark

#6 Gavin Sherry
swm@linuxworld.com.au
In reply to: Mark Kirkwood (#5)
Re: Bug: Buffer cache is not scan resistant

On Mon, 5 Mar 2007, Mark Kirkwood wrote:

To add a little to this - forgetting the scan resistant point for the
moment... cranking down shared_buffers to be smaller than the L2 cache
seems to help *any* sequential scan immensely, even on quite modest HW:

e.g: PIII 1.26Ghz 512Kb L2 cache, 2G ram,

SELECT count(*) FROM lineitem (which is about 11GB) performance:

Shared_buffers  Elapsed
--------------  -------
400MB           101 s
128KB            74 s

When I've profiled this activity, I've seen a lot of time spent
searching for/allocating a new buffer for each page being fetched.
Obviously having less of them to search through will help, but having
less than the L2 cache-size worth of 'em seems to help a whole lot!

Could you demonstrate that point by showing us timings for shared_buffers
sizes from 512K up to, say, 2 MB? The two numbers you give there might
just have to do with managing a large buffer.

Thanks,

Gavin

#7 Luke Lonergan
llonergan@greenplum.com
In reply to: Gavin Sherry (#6)
Re: Bug: Buffer cache is not scan resistant

Gavin, Mark,

Could you demonstrate that point by showing us timings for
shared_buffers sizes from 512K up to, say, 2 MB? The two
numbers you give there might just have to do with managing a
large buffer.

I suggest two experiments that we've already done:
1) increase shared buffers to double the L2 cache size, you should see
that the behavior reverts to the "slow" performance and is constant at
larger sizes

2) instrument the calls to BufferGetPage() (a macro) and note that the
buffer block numbers returned increase sequentially during scans of
tables larger than the buffer size

- Luke

#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gavin Sherry (#6)
Re: Bug: Buffer cache is not scan resistant

Gavin Sherry <swm@alcove.com.au> writes:

Could you demonstrate that point by showing us timings for shared_buffers
sizes from 512K up to, say, 2 MB? The two numbers you give there might
just have to do with managing a large buffer.

Using PG CVS HEAD on 64-bit Intel Xeon (1MB L2 cache), Fedora Core 5,
I don't measure any noticeable difference in seqscan speed for
shared_buffers set to 32MB or 256kB. I note that the code would
not let me choose the latter setting without a large decrease in
max_connections, which might be expected to cause some performance
changes in itself.

Now this may only prove that the disk subsystem on this machine is
too cheap to let the system show any CPU-related issues. I'm seeing
a scan rate of about 43MB/sec for both count(*) and plain ol' "wc",
which is a factor of 4 or so less than Mark's numbers suggest...
but "top" shows CPU usage of less than 5%, so even with a 4x faster
disk I'd not really expect that CPU speed would become interesting.

(This is indeed a milestone, btw, because it wasn't so long ago that
count(*) was nowhere near disk speed.)

regards, tom lane

#9 Grzegorz Jaskiewicz
gj@pointblue.com.pl
In reply to: Tom Lane (#2)
Re: Bug: Buffer cache is not scan resistant

On Mar 5, 2007, at 2:36 AM, Tom Lane wrote:

I'm also less than convinced that it'd be helpful for a big seqscan:
won't reading a new disk page into memory via DMA cause that memory to
get flushed from the processor cache anyway?

Nope. DMA is writing directly into main memory. If the area was in
the L2/L1 cache, it will get invalidated. But if it isn't there, it
is okay.

--
Grzegorz Jaskiewicz
gj@pointblue.com.pl

#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Grzegorz Jaskiewicz (#9)
Re: Bug: Buffer cache is not scan resistant

Grzegorz Jaskiewicz <gj@pointblue.com.pl> writes:

On Mar 5, 2007, at 2:36 AM, Tom Lane wrote:

I'm also less than convinced that it'd be helpful for a big seqscan:
won't reading a new disk page into memory via DMA cause that memory to
get flushed from the processor cache anyway?

Nope. DMA is writing directly into main memory. If the area was in
the L2/L1 cache, it will get invalidated. But if it isn't there, it
is okay.

So either way, it isn't in processor cache after the read. So how can
there be any performance benefit?

regards, tom lane

#11 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#10)
Re: Bug: Buffer cache is not scan resistant

So either way, it isn't in processor cache after the read.
So how can there be any performance benefit?

It's the copy from kernel IO cache to the buffer cache that is L2
sensitive. When the shared buffer cache is polluted, it thrashes the L2
cache. When the number of pages being written to in the kernel->user
space writes fits in L2, then the L2 lines are "written through" (see
the link below on page 264 for the write combining features of the
opteron for example) and the writes to main memory are deferred.

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF

- Luke

#12 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#8)
Re: Bug: Buffer cache is not scan resistant

Hi Tom,

Now this may only prove that the disk subsystem on this
machine is too cheap to let the system show any CPU-related
issues.

Try it with a warm IO cache. As I posted before, we see double the
performance of a VACUUM from a table in IO cache when the shared buffer
cache isn't being polluted. The speed with large buffer cache should be
about 450 MB/s and the speed with a buffer cache smaller than L2 should
be about 800 MB/s.

The real issue here isn't the L2 behavior, though that's important when
trying to reach very high IO speeds, the issue is that we're seeing the
buffer cache pollution in the first place. When we instrument the
blocks selected by the buffer page selection algorithm, we see that they
iterate sequentially, filling the shared buffer cache. That's the
source of the problem here.

Do we have a regression test somewhere for this?

- Luke

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Luke Lonergan (#11)
Re: Bug: Buffer cache is not scan resistant

"Luke Lonergan" <LLonergan@greenplum.com> writes:

So either way, it isn't in processor cache after the read.
So how can there be any performance benefit?

It's the copy from kernel IO cache to the buffer cache that is L2
sensitive. When the shared buffer cache is polluted, it thrashes the L2
cache. When the number of pages being written to in the kernel->user
space writes fits in L2, then the L2 lines are "written through" (see
the link below on page 264 for the write combining features of the
opteron for example) and the writes to main memory are deferred.

That makes absolutely zero sense. The data coming from the disk was
certainly not in processor cache to start with, and I hope you're not
suggesting that it matters whether the *target* page of a memcpy was
already in processor cache. If the latter, it is not our bug to fix.

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF

Even granting that your conclusions are accurate, we are not in the
business of optimizing Postgres for a single CPU architecture.

regards, tom lane

#14 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#13)
Re: Bug: Buffer cache is not scan resistant

Hi Tom,

Even granting that your conclusions are accurate, we are not
in the business of optimizing Postgres for a single CPU architecture.

I think you're missing my/our point:

The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".

My proposal for a fix: ensure that when relations larger (much larger?)
than buffer cache are scanned, they are mapped to a single page in the
shared buffer cache.

- Luke

#15 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Luke Lonergan (#14)
Re: Bug: Buffer cache is not scan resistant

Luke Lonergan wrote:

The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".

My proposal for a fix: ensure that when relations larger (much larger?)
than buffer cache are scanned, they are mapped to a single page in the
shared buffer cache.

It's not that simple. Using the whole buffer cache for a single seqscan
is ok, if there's currently no better use for the buffer cache. Running
a single select will indeed use the whole cache, but if you run any
other smaller queries, the pages they need should stay in cache and the
seqscan will loop through the other buffers.

In fact, the pages that are left in the cache after the seqscan finishes
would be useful for the next seqscan of the same table if we were smart
enough to read those pages first. That'd make a big difference for
seqscanning a table that's, say, 1.5x your RAM size. Hmm, I wonder if
Jeff's sync seqscan patch addresses that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#16 Hannu Krosing
hannu@tm.ee
In reply to: Luke Lonergan (#14)
Re: Bug: Buffer cache is not scan resistant

On Monday, 2007-03-05 at 03:51, Luke Lonergan wrote:

Hi Tom,

Even granting that your conclusions are accurate, we are not
in the business of optimizing Postgres for a single CPU architecture.

I think you're missing my/our point:

The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".

My proposal for a fix: ensure that when relations larger (much larger?)
than buffer cache are scanned, they are mapped to a single page in the
shared buffer cache.

How will this approach play together with the synchronized scan patches?

Or should synchronized scan rely on the system cache only?



--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com

#17 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Luke Lonergan (#14)
Re: Bug: Buffer cache is not scan resistant

"Luke Lonergan" <LLonergan@greenplum.com> writes:

I think you're missing my/our point:

The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".

No, this is not a bug; it is operating as designed. The point of the
current bufmgr algorithm is to replace the page least recently used,
and that's what it's doing.

If you want to lobby for changing the algorithm, then you need to
explain why one test case on one platform justifies de-optimizing
for a lot of other cases. In almost any concurrent-access situation
I think that what you are suggesting would be a dead loss --- for
instance we might as well forget about Jeff Davis' synchronized-scan
work.

In any case, I'm still not convinced that you've identified the problem
correctly, because your explanation makes no sense to me. How can the
processor's L2 cache improve access to data that it hasn't got yet?

regards, tom lane

#18 Florian Weimer
fweimer@bfk.de
In reply to: Tom Lane (#13)
Re: Bug: Buffer cache is not scan resistant

* Tom Lane:

That makes absolutely zero sense. The data coming from the disk was
certainly not in processor cache to start with, and I hope you're not
suggesting that it matters whether the *target* page of a memcpy was
already in processor cache. If the latter, it is not our bug to fix.

Uhm, if it's not in the cache, you typically need to evict some cache
lines to make room for the data, so I'd expect an indirect performance
hit. I could be mistaken, though.

--
Florian Weimer <fweimer@bfk.de>
BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

#19 Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#17)
Re: Bug: Buffer cache is not scan resistant

On Monday, 2007-03-05 at 04:15, Tom Lane wrote:

"Luke Lonergan" <LLonergan@greenplum.com> writes:

I think you're missing my/our point:

The Postgres shared buffer cache algorithm appears to have a bug. When
there is a sequential scan the blocks are filling the entire shared
buffer cache. This should be "fixed".

No, this is not a bug; it is operating as designed.

Maybe he means that there is an oversight (aka "bug") in the design ;)

The point of the
current bufmgr algorithm is to replace the page least recently used,
and that's what it's doing.

If you want to lobby for changing the algorithm, then you need to
explain why one test case on one platform justifies de-optimizing
for a lot of other cases.

If you know beforehand that you will definitely overflow the cache and
not reuse it anytime soon, then it seems quite reasonable to not even
start polluting the cache. Especially if you get a noticeable boost in
performance while doing so.

In almost any concurrent-access situation
I think that what you are suggesting would be a dead loss

Only if the concurrent access pattern is over data mostly fitting in
the buffer cache. If we can avoid polluting the buffer cache with data
we know we will use only once, more useful data will be available.

--- for
instance we might as well forget about Jeff Davis' synchronized-scan
work.

Depends on the ratio of system cache to shared buffer cache. I don't
think Jeff's patch is anywhere near the point where it needs to start
worrying about data swapping between the system cache and shared
buffers, or L2 cache usage.

In any case, I'm still not convinced that you've identified the problem
correctly, because your explanation makes no sense to me. How can the
processor's L2 cache improve access to data that it hasn't got yet?

regards, tom lane



#20 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#17)
Re: Bug: Buffer cache is not scan resistant

The Postgres shared buffer cache algorithm appears to have a bug.
When there is a sequential scan the blocks are filling the entire
shared buffer cache. This should be "fixed".

No, this is not a bug; it is operating as designed. The
point of the current bufmgr algorithm is to replace the page
least recently used, and that's what it's doing.

At least we've established that for certain.

If you want to lobby for changing the algorithm, then you
need to explain why one test case on one platform justifies
de-optimizing for a lot of other cases. In almost any
concurrent-access situation I think that what you are
suggesting would be a dead loss --- for instance we might as
well forget about Jeff Davis' synchronized-scan work.

Instead of forgetting about it, we'd need to change it.

In any case, I'm still not convinced that you've identified
the problem correctly, because your explanation makes no
sense to me. How can the processor's L2 cache improve access
to data that it hasn't got yet?

The evidence seems to clearly indicate reduced memory writing due to an
L2 related effect. The actual data shows a dramatic reduction in main
memory writing when the destination of the written data fits in the L2
cache.

I'll try to fit a hypothesis to explain it. Assume you've got a warm IO
cache in the OS.

The heapscan algorithm now works like this:
0) select a destination user buffer
1) uiomove->kcopy memory from the IO cache to the user buffer
   1A) read from kernel space
   1B) write to user space
2) the user buffer is accessed many times by the executor nodes above
Repeat

There are two situations we are evaluating: one where the addresses of
the user buffer are scattered over a space larger than the size of L2
(caseA) and one where they are confined to the size of L2 (caseB). Note
that we could also consider another situation where the addresses are
scattered over a space smaller than the TLB entries mapped by the L2
cache (512 max) and larger than the size of L2, but we've tried that and
it proved uninteresting.

For both cases step 1A is the same: each block (8KB) write from (1) will
read from IO cache into 128 L2 (64B each) lines, evicting the previous
data there.

In step 1B for caseA the destination for the writes is mostly an address
not currently mapped into L2 cache, so 128 victim L2 lines are found
(LRU), stored into, and writes are flushed to main memory.

In step 1B for caseB, the destination for the writes is located in L2
already. The 128 L2 lines are stored into, and the write to main memory
is delayed under the assumption that these lines are "hot" as they were
already in L2.

I don't know enough to be sure this is the right answer, but it does fit
the experimental data.

- Luke

#21 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Gavin Sherry (#6)
#22 Luke Lonergan
llonergan@greenplum.com
In reply to: Mark Kirkwood (#21)
#23 Bruce Momjian
bruce@momjian.us
In reply to: Luke Lonergan (#20)
#24 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mark Kirkwood (#21)
#25 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#24)
#26 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tom Lane (#24)
#27 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Pavan Deolasee (#26)
#28 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#27)
#29 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#24)
#30 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#27)
#31 Luke Lonergan
llonergan@greenplum.com
In reply to: Luke Lonergan (#29)
#32 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tom Lane (#30)
#33 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#30)
#34 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Pavan Deolasee (#32)
#35 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#30)
#36 Simon Riggs
simon@2ndQuadrant.com
In reply to: Josh Berkus (#33)
#37 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#36)
#38 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#37)
#39 Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#37)
#40 Jeff Davis
pgsql@j-davis.com
In reply to: Luke Lonergan (#14)
#41 Jeff Davis
pgsql@j-davis.com
In reply to: Hannu Krosing (#16)
#42 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#39)
#43 Jeff Davis
pgsql@j-davis.com
In reply to: Heikki Linnakangas (#15)
#44 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#43)
#45 Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#44)
#46 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#45)
#47 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jeff Davis (#45)
#48 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Tom Lane (#30)
#49 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mark Kirkwood (#48)
#50 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Tom Lane (#49)
#51 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mark Kirkwood (#50)
#52 Jeff Davis
pgsql@j-davis.com
In reply to: Heikki Linnakangas (#47)
#53 Florian Pflug
fgp@phlo.org
In reply to: Simon Riggs (#39)
#54 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Tom Lane (#51)
#55 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mark Kirkwood (#54)
#56 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#55)
#57 Luke Lonergan
llonergan@greenplum.com
In reply to: Bruce Momjian (#56)
#58 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#56)
#59 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Luke Lonergan (#57)
#60 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Heikki Linnakangas (#47)
#61 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Josh Berkus (#33)
#62 Luke Lonergan
llonergan@greenplum.com
In reply to: Tom Lane (#59)
#63 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Luke Lonergan (#62)
#64 Sherry Moore
sherry.moore@sun.com
In reply to: Tom Lane (#59)
#65 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jim Nasby (#61)
#66 Simon Riggs
simon@2ndQuadrant.com
In reply to: Florian Pflug (#53)
#67 Jeff Davis
pgsql@j-davis.com
In reply to: Jim Nasby (#60)
#68 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#67)
#69 Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#68)
#70 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jeff Davis (#67)
#71 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Tom Lane (#68)
#72 Simon Riggs
simon@2ndQuadrant.com
In reply to: Sherry Moore (#64)
#73 Jeff Davis
pgsql@j-davis.com
In reply to: Heikki Linnakangas (#71)
#74 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Tom Lane (#65)
#75 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Jeff Davis (#67)
#76 Jeff Davis
pgsql@j-davis.com
In reply to: Jim Nasby (#75)
#77 Jeff Davis
pgsql@j-davis.com
In reply to: Heikki Linnakangas (#70)
#78 Sherry Moore
sherry.moore@sun.com
In reply to: Simon Riggs (#72)
#79 Luke Lonergan
llonergan@greenplum.com
In reply to: Sherry Moore (#78)
#80 Hannu Krosing
hannu@tm.ee
In reply to: Jeff Davis (#77)
#81 Marko Kreen
markokr@gmail.com
In reply to: Hannu Krosing (#80)
#82 Simon Riggs
simon@2ndQuadrant.com
In reply to: Luke Lonergan (#79)
#83 Luke Lonergan
llonergan@greenplum.com
In reply to: Simon Riggs (#82)
#84 ITAGAKI Takahiro
itagaki.takahiro@oss.ntt.co.jp
In reply to: Simon Riggs (#82)
#85 Simon Riggs
simon@2ndQuadrant.com
In reply to: ITAGAKI Takahiro (#84)
#86 Simon Riggs
simon@2ndQuadrant.com
In reply to: Simon Riggs (#85)
#87 Tom Lane
tgl@sss.pgh.pa.us
In reply to: ITAGAKI Takahiro (#84)
#88 Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#87)
#89 ITAGAKI Takahiro
itagaki.takahiro@oss.ntt.co.jp
In reply to: Simon Riggs (#86)
#90 Luke Lonergan
llonergan@greenplum.com
In reply to: Simon Riggs (#86)
#91 Simon Riggs
simon@2ndQuadrant.com
In reply to: Luke Lonergan (#90)
#92 Simon Riggs
simon@2ndQuadrant.com
In reply to: ITAGAKI Takahiro (#89)
#93 Luke Lonergan
llonergan@greenplum.com
In reply to: Simon Riggs (#91)
#94 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#86)
#95 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#86)