bgwriter changes

Started by Neil Conway · over 21 years ago · 22 messages · pgsql-hackers
#1Neil Conway
neilc@samurai.com

In recent discussion[1] with Simon Riggs, there has been some talk of
making some changes to the bgwriter. To summarize the problem, the
bgwriter currently scans the entire T1+T2 buffer lists and returns a
list of all the currently dirty buffers. It then selects a subset of
that list (computed using bgwriter_percent and bgwriter_maxpages) to
flush to disk. Not only does this mean we can end up scanning a
significant portion of shared_buffers for every invocation of the
bgwriter, we also do the scan while holding the BufMgrLock, likely
hurting scalability.

I think a fix for this in some fashion is warranted for 8.0. Possible
solutions:

(1) Special-case bgwriter_percent=100. The only reason we need to return
a list of all the dirty buffers is so that we can choose n% of them to
satisfy bgwriter_percent. That is obviously unnecessary if we have
bgwriter_percent=100. I think this change won't help most users,
*unless* we also change bgwriter_percent=100 in the default configuration.

(2) Remove bgwriter_percent. I have yet to hear anyone argue that
there's an actual need for bgwriter_percent in tuning bgwriter behavior,
and one less GUC var is a good thing, all else being equal. This is
effectively the same as #1 with the default changed, only less flexibility.

(3) Change the meaning of bgwriter_percent, per Simon's proposal. Make
it mean "the percentage of the buffer pool to scan, at most, to look for
dirty buffers". I don't think this is workable, at least not at this
point in the release cycle, because it means we might not smooth out
checkpoint load, one of the primary goals of the bgwriter (in this
proposal bgwriter would only ever consider writing out a small subset of
the total shared buffer cache: the least-recently-used n%, with 2% being
a suggested default). Some variant of this might be worth exploring for
8.1 though.

A patch (implementing #2) is attached -- any benchmark results would be
helpful. Increasing shared_buffers (to 10,000 or more) should make the
problem noticeable.

Opinions on which route is the best, or on some alternative solution? My
inclination is toward #2, but I'm not dead-set on it.

-Neil

[1]: http://archives.postgresql.org/pgsql-hackers/2004-12/msg00386.php

Attachments:

bgwriter_rem_percent-1.patch (text/x-patch, +140 −160)
#2Bruce Momjian
bruce@momjian.us
In reply to: Neil Conway (#1)
Re: bgwriter changes

Neil Conway wrote:

(2) Remove bgwriter_percent. I have yet to hear anyone argue that
there's an actual need for bgwriter_percent in tuning bgwriter behavior,
and one less GUC var is a good thing, all else being equal. This is
effectively the same as #1 with the default changed, only less flexibility.

I prefer #2, and agree with you and Simon that something has to be done
for 8.0.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Neil Conway (#1)
Re: bgwriter changes

Neil Conway <neilc@samurai.com> writes:

...
(2) Remove bgwriter_percent. I have yet to hear anyone argue that
there's an actual need for bgwriter_percent in tuning bgwriter behavior,
...

Of the three offered solutions, I agree that that makes the most sense
(unless Jan steps up with a strong argument why this knob is needed).

However, due consideration should also be given to

(4) Do nothing until 8.1.

At this point in the release cycle I'm not sure we should be making
any significant changes for anything less than a crashing bug.

A patch (implementing #2) is attached -- any benchmark results would be
helpful. Increasing shared_buffers (to 10,000 or more) should make the
problem noticeable.

I'd want to see some pretty impressive benchmark results before we
consider making a change now.

regards, tom lane

#4Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#3)
Re: bgwriter changes

Tom Lane wrote:

However, due consideration should also be given to

(4) Do nothing until 8.1.

At this point in the release cycle I'm not sure we should be making
any significant changes for anything less than a crashing bug.

If that's not the policy, then I don't understand the dev cycle state
labels used.

In the commercial world, my approach would be that if this was
determined to be necessary (about which I am moderately agnostic) then
we would abort the current RC stage, effectively postponing the release.

cheers

andrew

#5Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Andrew Dunstan (#4)
Re: bgwriter changes

(2) Remove bgwriter_percent. I have yet to hear anyone argue that
there's an actual need for bgwriter_percent in tuning
bgwriter behavior,

One argument for it is to avoid writing very hot pages.

(3) Change the meaning of bgwriter_percent, per Simon's proposal. Make
it mean "the percentage of the buffer pool to scan, at most, to look for
dirty buffers". I don't think this is workable, at least not at this

In the long run I think we want to avoid the checkpoint needing to do a lot of
writing, without writing hot pages too often. This can only reasonably be
defined as a max number of pages we want to allow dirty at checkpoint time.
bgwriter_percent comes close to this meaning, although in this sense the value
would need to be high, like 80%.

I think we do want 2 settings. Think of one as a short-term value
(so bgwriter does not write everything in one run) and the other as a long-term
target over multiple runs.

Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Andreas

#6Tom Lane
tgl@sss.pgh.pa.us
In reply to: Zeugswetter Andreas SB SD (#5)
Re: bgwriter changes

"Zeugswetter Andreas DAZ SD" <ZeugswetterA@spardat.at> writes:

Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.
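The difference between the two definitions can be sketched in C (a hypothetical illustration with invented names, not PostgreSQL source):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical illustration; not PostgreSQL source.  dirty[i] stands in
 * for the dirty flag of the i-th buffer on the combined T1/T2 list. */

/* Current definition: bgwriter_percent is n% of all *dirty* pages, so
 * the whole list must be scanned (under the BufMgrLock) before the
 * target is even known. */
int target_from_dirty_percent(const bool *dirty, int nbuffers, int percent)
{
    int ndirty = 0;
    for (int i = 0; i < nbuffers; i++)      /* unavoidable full scan */
        if (dirty[i])
            ndirty++;
    return (ndirty * percent + 99) / 100;   /* round up */
}

/* Redefinition: n% of the *list length*.  The scan bound is known up
 * front, so the scan can stop early. */
int scan_bound_from_list_percent(int nbuffers, int percent)
{
    return (nbuffers * percent) / 100;
}
```

With the first definition, changing percent never shortens the scan; with the second, a small percent bounds the scan (and hence the lock hold time) directly.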

regards, tom lane

#7Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#6)
Re: bgwriter changes

On Tue, 2004-12-14 at 19:40, Tom Lane wrote:

"Zeugswetter Andreas DAZ SD" <ZeugswetterA@spardat.at> writes:

Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

...which was exactly what was proposed for option (3).

--
Best Regards, Simon Riggs

#8Neil Conway
neilc@samurai.com
In reply to: Tom Lane (#3)
Re: bgwriter changes

On Tue, 2004-12-14 at 09:23 -0500, Tom Lane wrote:

At this point in the release cycle I'm not sure we should be making
any significant changes for anything less than a crashing bug.

Yes, that's true, and I am definitely hesitant to make changes during
RC. That said, "adjust bgwriter defaults" has been on the "open items"
list for quite some time -- in some sense #2 is just a variant on that
idea.

I'd want to see some pretty impressive benchmark results before we
consider making a change now.

http://archives.postgresql.org/pgsql-hackers/2004-12/msg00426.php

is with a patch from Simon that implements #3. While that's not exactly
the same as #2, it does seem to suggest that the performance difference
is rather noticeable. If the problem does indeed exacerbate BufMgrLock
contention, it might be more noticeable still on an SMP machine.

I'm going to try and get some more benchmark data; if anyone else wants
to try the patch and contribute results they are welcome to.

-Neil

#9Simon Riggs
simon@2ndQuadrant.com
In reply to: Neil Conway (#1)
Re: bgwriter changes

On Tue, 2004-12-14 at 13:30, Neil Conway wrote:

In recent discussion[1] with Simon Riggs, there has been some talk of
making some changes to the bgwriter. To summarize the problem, the
bgwriter currently scans the entire T1+T2 buffer lists and returns a
list of all the currently dirty buffers. It then selects a subset of
that list (computed using bgwriter_percent and bgwriter_maxpages) to
flush to disk. Not only does this mean we can end up scanning a
significant portion of shared_buffers for every invocation of the
bgwriter, we also do the scan while holding the BufMgrLock, likely
hurting scalability.

Neil's summary is very clear, many thanks.

There have been many suggestions, patches and test results, so I have
attempted to summarise everything here, using Neil's post to give
structure to the other information:

I think a fix for this in some fashion is warranted for 8.0. Possible
solutions:

I add two things to this structure:
i) the name of the patch that implements it (author's initials)
ii) benchmark results published that exercised it

(1) Special-case bgwriter_percent=100. The only reason we need to return
a list of all the dirty buffers is so that we can choose n% of them to
satisfy bgwriter_percent. That is obviously unnecessary if we have
bgwriter_percent=100. I think this change won't help most users,
*unless* we also change bgwriter_percent=100 in the default configuration.

100pct.patch (SR)

Test results to date:
1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
pgbench 1xCPU 1xDisk shared_buffers=10000
showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
performance significantly against 8.0RC1

Discounted now by both Neil and myself, since the same idea has been
more generally implemented as ideas (2) and (3) below.

(2) Remove bgwriter_percent. I have yet to hear anyone argue that
there's an actual need for bgwriter_percent in tuning bgwriter behavior,
and one less GUC var is a good thing, all else being equal. This is
effectively the same as #1 with the default changed, only less flexibility.

There are 2 patches published which do same thing:
- Partially implemented following Neil's suggestion: bg3.patch (SR)
- Fully implemented: bgwriter_rem_percent-1.patch (NC)
Patches have an identical effect on performance.

Test results to date:
1. Neil's testing was "inconclusive" for shared_buffers = 2500 on a
single cpu, single disk system (test used bgwriter_rem_percent-1.patch)
2. Mark Wong's OSDL tests published as test 211
analysis already posted on this thread;
dbt-2 4 CPU, many disk, shared_buffers=60000 (test used bg3.patch)
3% overall benefit, greatly reduced max transaction times
3. Mark Kirkwood's tests
pgbench 2xCPU 2xdisk, shared_buffers=10000 (test used
bgwriter_rem_percent-1.patch)
Showed slight regression against RC1 - must be test variability because
the patch does less work and is very unlikely to cause a regression

(3) Change the meaning of bgwriter_percent, per Simon's proposal. Make
it mean "the percentage of the buffer pool to scan, at most, to look for
dirty buffers". I don't think this is workable, at least not at this
point in the release cycle, because it means we might not smooth out
checkpoint load, one of the primary goals of the bgwriter (in this
proposal bgwriter would only ever consider writing out a small subset of
the total shared buffer cache: the least-recently-used n%, with 2% being
a suggested default). Some variant of this might be worth exploring for
8.1 though.

Implemented as bg2.patch (SR)
Contains a small bug, easily fixed, which would not affect performance

Test results to date:
1. Mark Kirkwood's tests
pgbench 2xCPU 2xdisk, shared_buffers=10000 (test used bg2.patch)
Showed improvement on RC1 and best option out of all three tests
(compared RC1, bg2.patch, bgwriter_rem_percent-1.patch), possibly
similar within bounds of test variability - but interesting enough to
investigate further.

Current situation seems to be:
- all test results indicate performance regressions in RC1 when
shared_buffers >= 10000 and using multi-cpu/multi-disk systems
- option (2) has the most thoroughly confirmable test results and is
thought by all parties to be the simplest and most robust approach.
- some more test results would be useful to compare, to ensure that
applying the patch would be useful in all circumstances.

Approach (3) looks interesting and should be investigated for 8.1, since
it introduces a subtly different algorithm that may have "interesting
flight characteristics" and is more of a risk to the 8.0 release.

Thanks very much to all performance testers. It's important work.

--
Best Regards, Simon Riggs

#10Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Simon Riggs (#9)
Re: bgwriter changes

and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

...which was exactly what was proposed for option (3).

But the benchmark run was with bgwriter_percent 100. I wanted to point out
that I think 100% is too much (it writes hot pages multiple times between checkpoints).
In the benchmark, bgwriter obviously falls behind, the delay is too long. But if you
reduce the delay you will start to see what I mean.

Actually I think what is really needed is a max number of pages we want dirty
during checkpoint. Since that would again require scanning all pages, the next best
definition would IMHO be to stop at a percentage (or a fixed number of pages short) of total T1/T2.
Then you can still calculate a worst-case IO for checkpoint (assuming that all hot pages are dirty).

Andreas

#11Simon Riggs
simon@2ndQuadrant.com
In reply to: Zeugswetter Andreas SB SD (#10)
Re: Re: bgwriter changes

Zeugswetter Andreas DAZ SD <ZeugswetterA@spardat.at> wrote on
15.12.2004, 11:39:44:

and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

...which was exactly what was proposed for option (3).

But the benchmark run was with bgwriter_percent 100.

Yes, it was for run 211, but the patch that was used effectively
disabled bgwriter_percent in favour of looking only at
bgwriter_maxpages.

The patch used was not exactly what was being discussed here. In that
patch, StrategyDirtyBufferList scans until it finds bgwriter_maxpages
dirty pages, then stops. That means a varying number of buffers on the
list are scanned, starting from the LRU.

What is being suggested here was implemented for bg2.patch. The
algorithm in there was for StrategyDirtyBufferList to scan until it had
looked at the dirty/clean status of bgwriter_maxpages buffers. That
means a constant number of buffers on the list are scanned, starting
from the LRU.

The two alternative algorithms are similar, but have these differences:
The former (option (2)) finds a constant number of dirty pages, though
has varying search time. The latter (option (3)) has constant search
time, yet finds a varying number of dirty pages. Both alternatives
avoid scanning the whole of the buffer list, as is the case in 8.0RC1,
allowing the bgwriter to act more frequently at lower cost.
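The two scan strategies can be sketched side by side (a hypothetical sketch, not the actual bg2.patch or bgwriter_rem_percent-1.patch code; the function names are invented):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the two strategies; buffers[] is the list in
 * LRU-to-MRU order, true meaning dirty.  Not PostgreSQL source. */

/* Option (2) style: stop once maxpages dirty buffers are found.  The
 * number of dirty pages returned is constant; the number of buffers
 * examined varies with how dirty the pool is. */
int scan_until_n_dirty(const bool *buffers, int nbuffers,
                       int maxpages, int *examined)
{
    int found = 0, i;
    for (i = 0; i < nbuffers && found < maxpages; i++)
        if (buffers[i])
            found++;
    *examined = i;
    return found;
}

/* Option (3) style: examine a fixed window of buffers from the LRU end.
 * The number of buffers examined is constant; the number of dirty pages
 * found varies. */
int scan_fixed_window(const bool *buffers, int nbuffers, int window)
{
    int found = 0;
    int limit = window < nbuffers ? window : nbuffers;
    for (int i = 0; i < limit; i++)
        if (buffers[i])
            found++;
    return found;
}
```

Either way the loop terminates long before the end of the list in the common case, which is the point of both patches.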

There's some evidence that the second algorithm may be better, but it may
have other characteristics or side-effects that we don't yet know. So
at this stage of the game, I'm happier not to progress option (3) any
further for 8.0, since option (2) is closest to the one that has been
through beta-testing.

Best Regards, Simon Riggs

#12Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Simon Riggs (#11)
Re: bgwriter changes

The two alternative algorithms are similar, but have these
differences:
The former (option (2)) finds a constant number of dirty pages, though
has varying search time.

This has the disadvantage of converging toward 0 dirty pages.
A system that has fewer than maxpages dirty will write every page on
every bgwriter run.

The latter (option (3)) has constant search
time, yet finds a varying number of dirty pages.

This might have the disadvantage of either leaving too much for the
checkpoint or writing too many dirty pages in one run. Is writing a lot
in one run actually a problem though? Or does the bgwriter pause
periodically while writing the pages of one run?
If this is expressed in pages it would naturally need to be more than the
current maxpages (to accommodate the clean pages). The suggested 2% sounded
way too low for me (that leaves 98% to the checkpoint).

Also I think we are doing too frequent checkpoints with bgwriter in
place. Every 15-30 minutes should be sufficient, even for benchmarks.
We need a tuned bgwriter for this though.

Andreas

#13Simon Riggs
simon@2ndQuadrant.com
In reply to: Zeugswetter Andreas SB SD (#12)
Re: RE: Re: bgwriter changes

Zeugswetter Andreas DAZ SD <ZeugswetterA@spardat.at> wrote on
15.12.2004, 15:33:16:

The two alternative algorithms are similar, but have these
differences:
The former (option (2)) finds a constant number of dirty pages, though
has varying search time.

This has the disadvantage of converging against 0 dirty pages.
A system that has less than maxpages dirty will write every page with
every bgwriter run.

Yes, that is my issue with that algorithm.... it causes more contention
when there are fewer dirty pages.

The latter (option (3)) has constant search
time, yet finds a varying number of dirty pages.

This might have the disadvantage of either leaving too much for the
checkpoint or writing too many dirty pages in one run. Is writing a lot
in one run actually a problem though? Or does the bgwriter pause
periodically while writing the pages of one run?
If this is expressed in pages it would naturally need to be more than the
current maxpages (to accommodate the clean pages). The suggested 2% sounded
way too low for me (that leaves 98% to the checkpoint).

This remains to be seen. We have Mark Kirkwood's test results that show
that the algorithm may work better, but no large scale OSDL run as yet.

My view is that the 2% is misleading. The whole buffer list is like a
conveyor belt moving towards the LRU. It is my *conjecture* that
cleaning the LRU would be sufficient to clean the whole list
eventually. Blocks in the buffer list that always stay near the MRU
would be dirtied again quickly even if you did clean them, so if they
don't reach nearly to the LRU then there is less benefit in cleaning
them. (1%, 2% or 5% would need to be a tunable factor; 2% was the
suggested default)

If the bgwriter writes too often it would get in the way of other work,
so there is clearly an optimum setting for any workload.

Also I think we are doing too frequent checkpoints with bgwriter in
place. Every 15-30 minutes should be sufficient, even for benchmarks.
We need a tuned bgwriter for this though.

Well, yes, you're right. ...but the bug limiting us to 255 files
restricts us there in higher-performance situations.

Best Regards, Simon Riggs

#14Jan Wieck
JanWieck@Yahoo.com
In reply to: Tom Lane (#6)
Re: bgwriter changes

On 12/14/2004 2:40 PM, Tom Lane wrote:

"Zeugswetter Andreas DAZ SD" <ZeugswetterA@spardat.at> writes:

Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when either maxpages is reached or bgwriter_percent
pages are scanned?

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

That definition is identical to a fixed maximum number of pages to write
per call. And since that parameter exists too, it would be redundant.

The other way around would make sense. In order to avoid writing the
busiest buffers at all (except for checkpointing), the parameter should
mean "don't scan the last x% of the queue at all".

Still, we need to avoid scanning over all the clean blocks of a large
buffer pool, so there is need for a separate dirty-LRU.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #

#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jan Wieck (#14)
Re: bgwriter changes

Jan Wieck <JanWieck@Yahoo.com> writes:

Still, we need to avoid scanning over all the clean blocks of a large
buffer pool, so there is need for a separate dirty-LRU.

That's not happening, unless you want to undo the cntxDirty stuff,
with unknown implications for performance and deadlock safety. It's
definitely not happening in 8.0 ;-)

regards, tom lane

#16Jan Wieck
JanWieck@Yahoo.com
In reply to: Tom Lane (#15)
Re: bgwriter changes

On 12/15/2004 12:10 PM, Tom Lane wrote:

Jan Wieck <JanWieck@Yahoo.com> writes:

Still, we need to avoid scanning over all the clean blocks of a large
buffer pool, so there is need for a separate dirty-LRU.

That's not happening, unless you want to undo the cntxDirty stuff,
with unknown implications for performance and deadlock safety. It's
definitely not happening in 8.0 ;-)

Sure not.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #

#17Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Simon Riggs (#9)
Re: bgwriter changes

Simon Riggs wrote:

100pct.patch (SR)

Test results to date:
1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
pgbench 1xCPU 1xDisk shared_buffers=10000
showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
performance significantly against 8.0RC1

It occurs to me that cranking up the number of transactions (say
1000->100000) and seeing if said regression persists would be
interesting. This would give the smoothing effect of the bgwriter (plus
the ARC) a better chance to shine.

regards

Mark

#18Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Mark Kirkwood (#17)
Re: bgwriter changes

Only if you redefine the meaning of bgwriter_percent. At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

The other way around would make sense. In order to avoid writing the
busiest buffers at all (except for checkpointing), the parameter should
mean "don't scan the last x% of the queue at all".

Your meaning is 1 minus the above meaning (at least that is what Tom and I meant),
but it is probably easier to understand (== Informix LRU_MIN_DIRTY).

Still, we need to avoid scanning over all the clean blocks of a large
buffer pool, so there is need for a separate dirty-LRU.

Maybe a "may be dirty" bitmap would be easier to do without being deadlock-prone?

Andreas

#19Neil Conway
neilc@samurai.com
In reply to: Zeugswetter Andreas SB SD (#12)
Re: bgwriter changes

Zeugswetter Andreas DAZ SD wrote:

This has the disadvantage of converging toward 0 dirty pages.
A system that has fewer than maxpages dirty will write every page on
every bgwriter run.

Yeah, I'm concerned about the bgwriter being overly aggressive if we
disable bgwriter_percent. If we leave the settings as they are (delay =
200, maxpages = 100, shared_buffers = 1000 by default), we will be
writing all the dirty pages to disk every 2 seconds, which seems far too
much.
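The arithmetic behind that "every 2 seconds" estimate can be made explicit (a worked example with an invented helper, not code from the tree):

```c
#include <assert.h>

/* Worked example of the default-settings arithmetic above: how long the
 * bgwriter needs, at its maximum write rate, to have visited every
 * buffer in the pool.  Not PostgreSQL source; just the back-of-envelope
 * calculation. */
int ms_to_cover_pool(int shared_buffers, int maxpages, int delay_ms)
{
    /* number of bgwriter rounds to write the whole pool, rounded up */
    int rounds = (shared_buffers + maxpages - 1) / maxpages;
    return rounds * delay_ms;
}
```

With the defaults quoted (shared_buffers = 1000, maxpages = 100, delay = 200 ms) this gives 2000 ms, i.e. the two seconds mentioned.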

It might also be good to reduce the delay, in order to more proactively
keep the LRUs clean (e.g. scanning to find N dirty pages once per second
is likely to reach farther away from the LRU than scanning for N/M pages
once per 1/M seconds). On the other hand the more often the bgwriter
scans the buffer pool, the more times the BufMgrLock needs to be
acquired -- and in a system in which pages aren't being dirtied very
rapidly (or the dirtied pages tend to be very hot), each of those scans
is going to take a while to find enough dirty pages using #2. So perhaps
it is best to leave the delay as is for 8.0.

This might have the disadvantage of either leaving too much for the
checkpoint or writing too many dirty pages in one run. Is writing a lot
in one run actually a problem though ? Or does the bgwriter pause
periodically while writing the pages of one run ?

The bgwriter does not pause between writing pages. What would be the
point of doing that? The kernel is going to be caching the write() anyway.

If this is expressed in pages it would naturally need to be more than the
current maxpages (to accommodate the clean pages). The suggested 2% sounded
way too low for me (that leaves 98% to the checkpoint).

I agree this might be a problem, but it doesn't necessarily leave 98% to
be written at checkpoint: if the buffers in the LRU change over time,
the set of pages searched by the bgwriter will also change. I'm not sure
how quickly the pages near the LRU change in a "typical workload";
moreover, I think this would vary between different workloads.

-Neil

#20Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Mark Kirkwood (#17)
Re: bgwriter changes

Mark Kirkwood wrote:

It occurs to me that cranking up the number of transactions (say
1000->100000) and seeing if said regression persists would be
interesting. This would give the smoothing effect of the bgwriter
(plus the ARC) a better chance to shine.

I ran a few of these over the weekend - since it rained here :-) , and
the results are quite interesting:

[2xPIII, 2G, 2xATA RAID 0, FreeBSD 5.3 with the same non default Pg
parameters as before]

clients = 4 transactions = 100000 (/client), each test run twice

Version tps
7.4.6 49
8.0.0.0RC1 50
8.0.0.0RC1 + rem 49
8.0.0.0RC1 + bg2 50

Needless to say, all well within measurement error of each other (the
variability was about 1).

I suspect that my previous tests had too few transactions to trigger
many (or any) checkpoints. With them now occurring in the test, they
look to be the most significant factor (contrast with 70-80 tps for 4
clients with 1000 transactions).

Also with a small number of transactions, the fsync'ed blocks may have
all fitted in the ATA disk caches (2x2M). In hindsight I should have
disabled this! (might run the smaller number of transactions again with
hw.ata.wc=0 and see if this is enlightening)

regards

Mark

#21Simon Riggs
simon@2ndQuadrant.com
In reply to: Mark Kirkwood (#20)
#22Simon Riggs
simon@2ndQuadrant.com
In reply to: Neil Conway (#19)