Make PostgreSQL's on-schedule checkpoint spread buffer writes smoothly across the cycle by tuning IsCheckpointOnSchedule?
PostgreSQL (<=9.4) tries to spread a checkpoint's buffer writes smoothly over
checkpoint_completion_target (of checkpoint_timeout or checkpoint_segments),
but when synchronous_commit=off is used there is a small problem with the
checkpoint_segments target: xlog is written very fast at first (because of the
full-page writes that follow the start of a checkpoint), so the checkpointer
cannot sleep and the buffer writes are not smooth.
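Roughly, the pacing works like this (a minimal standalone sketch, not the real
checkpointer code; names and numbers are illustrative): after each buffer write
the checkpointer asks whether it is on schedule against both a time clock and a
WAL clock, and only sleeps when it is ahead of both.
#include <stdbool.h>
#include <stdio.h>

static double completion_target = 0.9;  /* checkpoint_completion_target */

static bool
is_on_schedule(double progress,         /* fraction of dirty buffers written */
               double elapsed_time,     /* elapsed time / checkpoint_timeout */
               double elapsed_xlogs)    /* WAL used / checkpoint_segments */
{
    progress *= completion_target;

    /* behind on either clock => keep writing, do not sleep */
    if (progress < elapsed_xlogs)
        return false;
    if (progress < elapsed_time)
        return false;

    return true;
}

int
main(void)
{
    /*
     * 10 s into a 300 s cycle with 10% of the dirty buffers written, but a
     * full-page-write burst has already used 25% of checkpoint_segments:
     * the WAL clock says we are behind, so no throttling happens even
     * though plenty of time remains.
     */
    printf("on schedule: %d\n", is_on_schedule(0.10, 10.0 / 300.0, 0.25));
    return 0;
}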
Here is a test:
# stap -DMAXSKIPPED=100000 -v 11111 -e '
global s_var, e_var, stat_var;

/* probe smgr__md__read__start(ForkNumber, BlockNumber, Oid, Oid, Oid, int); */
probe process("/opt/pgsql/bin/postgres").mark("smgr__md__read__start") {
  s_var[pid(),1] = gettimeofday_us()
}

/* probe smgr__md__read__done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int); */
probe process("/opt/pgsql/bin/postgres").mark("smgr__md__read__done") {
  e_var[pid(),1] = gettimeofday_us()
  if ( s_var[pid(),1] > 0 )
    stat_var[pid(),1] <<< e_var[pid(),1] - s_var[pid(),1]
}

/* probe smgr__md__write__start(ForkNumber, BlockNumber, Oid, Oid, Oid, int); */
probe process("/opt/pgsql/bin/postgres").mark("smgr__md__write__start") {
  s_var[pid(),2] = gettimeofday_us()
}

/* probe smgr__md__write__done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int); */
probe process("/opt/pgsql/bin/postgres").mark("smgr__md__write__done") {
  e_var[pid(),2] = gettimeofday_us()
  if ( s_var[pid(),2] > 0 )
    stat_var[pid(),2] <<< e_var[pid(),2] - s_var[pid(),2]
}

probe process("/opt/pgsql/bin/postgres").mark("buffer__sync__start") {
  printf("buffer__sync__start num_buffers: %d, dirty_buffers: %d\n", $NBuffers, $num_to_write)
}

probe process("/opt/pgsql/bin/postgres").mark("checkpoint__start") {
  printf("checkpoint start\n")
}

probe process("/opt/pgsql/bin/postgres").mark("checkpoint__done") {
  printf("checkpoint done\n")
}

probe timer.s(1) {
  foreach ([v1,v2] in stat_var +) {
    if ( @count(stat_var[v1,v2]) > 0 ) {
      printf("r1_or_w2 %d, pid: %d, min: %d, max: %d, avg: %d, sum: %d, count: %d\n", v2, v1, @min(stat_var[v1,v2]), @max(stat_var[v1,v2]), @avg(stat_var[v1,v2]), @sum(stat_var[v1,v2]), @count(stat_var[v1,v2]))
    }
  }
  printf("----------------------------------end-----------------------------\n")
  delete s_var
  delete e_var
  delete stat_var
}'
Create the test table and data:
create table tbl(id int primary key, info text, crt_time timestamp);
insert into tbl select generate_series(1,50000000),now(),now();
Test it with pgbench:
$ vi test.sql
\setrandom id 1 50000000
update tbl set crt_time=now() where id = :id ;
$ pgbench -M prepared -n -r -f ./test.sql -P 1 -c 28 -j 28 -T 100000000
When the on-schedule checkpoint occurs, the tps drops:
progress: 255.0 s, 58152.2 tps, lat 0.462 ms stddev 0.504
progress: 256.0 s, 31382.8 tps, lat 0.844 ms stddev 2.331
progress: 257.0 s, 14615.5 tps, lat 1.863 ms stddev 4.554
progress: 258.0 s, 16258.4 tps, lat 1.652 ms stddev 4.139
progress: 259.0 s, 17814.7 tps, lat 1.526 ms stddev 4.035
progress: 260.0 s, 14573.8 tps, lat 1.825 ms stddev 5.592
progress: 261.0 s, 16736.6 tps, lat 1.600 ms stddev 5.018
progress: 262.0 s, 19060.5 tps, lat 1.448 ms stddev 4.818
progress: 263.0 s, 20553.2 tps, lat 1.290 ms stddev 4.146
progress: 264.0 s, 26223.0 tps, lat 1.042 ms stddev 3.711
progress: 265.0 s, 31953.0 tps, lat 0.836 ms stddev 2.837
progress: 266.0 s, 43396.1 tps, lat 0.627 ms stddev 1.615
progress: 267.0 s, 50487.8 tps, lat 0.533 ms stddev 0.647
progress: 268.0 s, 53537.7 tps, lat 0.502 ms stddev 0.598
progress: 269.0 s, 54259.3 tps, lat 0.496 ms stddev 0.624
progress: 270.0 s, 56139.8 tps, lat 0.479 ms stddev 0.524
The parameters for the on-schedule checkpoint:
checkpoint_segments = 512
checkpoint_timeout = 5min
checkpoint_completion_target = 0.9
stap's output is below. There are 156,467 dirty blocks, and from the per-second
write counts we can see that the buffer writes are not smoothed against the
time target but against the xlog target; smoothed over the time target they
would be about 156467/(5*60*0.9) = 579.5 writes per second.
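As a quick sanity check of that number (a throwaway calculation, assuming the
settings above):
#include <stdio.h>

int
main(void)
{
    int    dirty_buffers = 156467;   /* reported by buffer__sync__start */
    double timeout_s = 5 * 60;       /* checkpoint_timeout = 5min */
    double target = 0.9;             /* checkpoint_completion_target */

    /* the write rate we would expect if pacing followed the time target */
    printf("%.1f writes per second\n", dirty_buffers / (timeout_s * target));
    return 0;
}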
checkpoint start
buffer__sync__start num_buffers: 262144, dirty_buffers: 156467
r1_or_w2 2, pid: 19848, min: 41, max: 1471, avg: 49, sum: 425291, count: 8596
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 153, avg: 49, sum: 450597, count: 9078
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 643, avg: 51, sum: 429193, count: 8397
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 1042, avg: 55, sum: 449091, count: 8097
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 254, avg: 52, sum: 296668, count: 5617
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 39, max: 171, avg: 54, sum: 321027, count: 5851
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 138, avg: 60, sum: 300056, count: 4953
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 42, max: 1217, avg: 65, sum: 312859, count: 4748
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 1371, avg: 56, sum: 353905, count: 6304
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 41, max: 358, avg: 58, sum: 236254, count: 4038
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 34, max: 1239, avg: 63, sum: 296906, count: 4703
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 31, max: 17408, avg: 63, sum: 415234, count: 6534
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 31, max: 5486, avg: 57, sum: 190345, count: 3318
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 29, max: 510, avg: 53, sum: 136221, count: 2563
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 32, max: 733, avg: 52, sum: 108327, count: 2070
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 34, max: 382, avg: 53, sum: 96157, count: 1812
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 43, max: 327, avg: 53, sum: 83641, count: 1571
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 33, max: 102, avg: 54, sum: 79991, count: 1468
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 35, max: 88, avg: 53, sum: 74338, count: 1389
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 32, max: 86, avg: 52, sum: 65710, count: 1243
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 30, max: 347, avg: 52, sum: 66866, count: 1263
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 31, max: 93, avg: 54, sum: 75642, count: 1398
----------------------------------end-----------------------------
r1_or_w2 2, pid: 19848, min: 33, max: 100, avg: 51, sum: 62302, count: 1216
......
I think we can add a condition to IsCheckpointOnSchedule:

if (synchronous_commit != SYNCHRONOUS_COMMIT_OFF)
{
    recptr = GetInsertRecPtr();
    elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) / XLogSegSize) / CheckPointSegments;

    if (progress < elapsed_xlogs)
    {
        ckpt_cached_elapsed = elapsed_xlogs;
        return false;
    }
}
# vi src/backend/postmaster/checkpointer.c

#include "access/xact.h"

/*
 * IsCheckpointOnSchedule -- are we on schedule to finish this checkpoint
 *		in time?
 *
 * Compares the current progress against the time/segments elapsed since last
 * checkpoint, and returns true if the progress we've made this far is greater
 * than the elapsed time/segments.
 */
static bool
IsCheckpointOnSchedule(double progress)
{
    XLogRecPtr  recptr;
    struct timeval now;
    double      elapsed_xlogs,
                elapsed_time;

    Assert(ckpt_active);

    /* Scale progress according to checkpoint_completion_target. */
    progress *= CheckPointCompletionTarget;

    /*
     * Check against the cached value first. Only do the more expensive
     * calculations once we reach the target previously calculated. Since
     * neither time or WAL insert pointer moves backwards, a freshly
     * calculated value can only be greater than or equal to the cached value.
     */
    if (progress < ckpt_cached_elapsed)
        return false;

    /*
     * Check progress against WAL segments written and checkpoint_segments.
     *
     * We compare the current WAL insert location against the location
     * computed before calling CreateCheckPoint. The code in XLogInsert that
     * actually triggers a checkpoint when checkpoint_segments is exceeded
     * compares against RedoRecptr, so this is not completely accurate.
     * However, it's good enough for our purposes, we're only calculating an
     * estimate anyway.
     */
    if (!RecoveryInProgress())
    {
        if (synchronous_commit != SYNCHRONOUS_COMMIT_OFF)
        {
            recptr = GetInsertRecPtr();
            elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) / XLogSegSize) / CheckPointSegments;

            if (progress < elapsed_xlogs)
            {
                ckpt_cached_elapsed = elapsed_xlogs;
                return false;
            }
        }
    }

    /*
     * Check progress against time elapsed and checkpoint_timeout.
     */
    gettimeofday(&now, NULL);
    elapsed_time = ((double) ((pg_time_t) now.tv_sec - ckpt_start_time) +
                    now.tv_usec / 1000000.0) / CheckPointTimeout;

    if (progress < elapsed_time)
    {
        ckpt_cached_elapsed = elapsed_time;
        return false;
    }

    /* It looks like we're on schedule. */
    return true;
}
# gmake && gmake install
$ pg_ctl restart -m fast
Test again:
progress: 291.0 s, 63144.9 tps, lat 0.426 ms stddev 0.383
progress: 292.0 s, 55063.7 tps, lat 0.480 ms stddev 1.433
progress: 293.0 s, 12225.3 tps, lat 2.238 ms stddev 4.460
progress: 294.0 s, 16436.4 tps, lat 1.621 ms stddev 4.043
progress: 295.0 s, 18516.5 tps, lat 1.444 ms stddev 3.286
progress: 296.0 s, 21983.7 tps, lat 1.251 ms stddev 2.941
progress: 297.0 s, 25759.7 tps, lat 1.034 ms stddev 2.356
progress: 298.0 s, 33139.4 tps, lat 0.821 ms stddev 1.676
progress: 299.0 s, 41904.9 tps, lat 0.644 ms stddev 1.134
progress: 300.0 s, 52432.9 tps, lat 0.513 ms stddev 0.470
progress: 301.0 s, 57115.4 tps, lat 0.471 ms stddev 0.325
progress: 302.0 s, 59422.1 tps, lat 0.452 ms stddev 0.297
progress: 303.0 s, 59860.5 tps, lat 0.449 ms stddev 0.309
This time we can see the checkpointer writing buffers smoothly (spread over the time period).
checkpoint start
----------------------------------end-----------------------------
buffer__sync__start num_buffers: 262144, dirty_buffers: 156761
r1_or_w2 2, pid: 22334, min: 51, max: 137, avg: 60, sum: 52016, count: 860
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 108, avg: 58, sum: 35526, count: 604
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 145, avg: 71, sum: 39779, count: 559
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 52, max: 172, avg: 79, sum: 47279, count: 594
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 44, max: 160, avg: 63, sum: 36907, count: 581
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 113, avg: 61, sum: 33895, count: 552
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 116, avg: 61, sum: 38177, count: 617
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 113, avg: 62, sum: 34199, count: 550
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 53, max: 109, avg: 65, sum: 39842, count: 606
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 50, max: 118, avg: 64, sum: 35099, count: 545
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 50, max: 107, avg: 64, sum: 39027, count: 606
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 51, max: 114, avg: 62, sum: 34054, count: 545
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 47, max: 106, avg: 63, sum: 38573, count: 605
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 48, max: 101, avg: 62, sum: 38051, count: 607
----------------------------------end-----------------------------
r1_or_w2 2, pid: 22334, min: 42, max: 103, avg: 61, sum: 33596, count: 545
But there is still a small problem: when PostgreSQL writes xlog fast enough to
reach checkpoint_segments before checkpoint_timeout, the next checkpoint will
start soon after, so checkpoint_segments must be tuned larger when checkpoints
hit a busy period.
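A rough way to see when checkpoints become xlog-driven rather than time-driven
(a sketch with an assumed WAL rate, not a measurement from this test):
#include <stdio.h>

int
main(void)
{
    double seg_size_mb = 16.0;              /* size of one xlog segment */
    double checkpoint_segments = 512;
    double checkpoint_timeout_s = 5 * 60;
    double wal_rate_mb_per_s = 50.0;        /* assumed workload WAL rate */

    double wal_budget_mb = checkpoint_segments * seg_size_mb;  /* 8192 MB */
    double secs_to_fill = wal_budget_mb / wal_rate_mb_per_s;   /* ~164 s */

    if (secs_to_fill < checkpoint_timeout_s)
        printf("segments fill in %.0f s (< timeout %.0f s): checkpoints are "
               "xlog-driven; raise checkpoint_segments\n",
               secs_to_fill, checkpoint_timeout_s);
    else
        printf("checkpoints are time-driven\n");
    return 0;
}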
Regards,
Digoal
--
Charity is a lifelong commitment. I'm Digoal, Just Do It.
On 05/12/2015 03:27 AM, digoal zhou wrote:
PostgreSQL (<=9.4) tries to spread a checkpoint's buffer writes smoothly over
checkpoint_completion_target (of checkpoint_timeout or checkpoint_segments),
but when synchronous_commit=off is used there is a small problem with the
checkpoint_segments target: xlog is written very fast at first (because of the
full-page writes that follow the start of a checkpoint), so the checkpointer
cannot sleep and the buffer writes are not smooth.
...
I think we can add a condition to IsCheckpointOnSchedule:

if (synchronous_commit != SYNCHRONOUS_COMMIT_OFF)
{
    recptr = GetInsertRecPtr();
    elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) / XLogSegSize) / CheckPointSegments;

    if (progress < elapsed_xlogs)
    {
        ckpt_cached_elapsed = elapsed_xlogs;
        return false;
    }
}
This has nothing to do with synchronous_commit, except that setting
synchronous_commit=off makes your test case run faster, and hit the
problem harder.
I think the real problem here is that IsCheckpointOnSchedule assumes
that the rate of WAL generated is constant throughout the checkpoint
cycle, but in reality you generate a lot more WAL immediately after the
checkpoint begins, thanks to full_page_writes. For example, in the
beginning of the cycle, you quickly use up, say, 20% of the WAL space in
the first 10 seconds, and the scheduling thinks it's in a big hurry
to finish the checkpoint because it extrapolates that the rest of the
WAL will be used up in the next 40 seconds. But in reality, the WAL
consumption levels off, and you have many minutes left until
CheckPointSegments.
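To put numbers on that (an illustrative calculation only, using the
20%-in-10-seconds example above):
#include <stdio.h>

int
main(void)
{
    double t = 10.0;              /* seconds since the checkpoint started */
    double elapsed_xlogs = 0.20;  /* fraction of checkpoint_segments already used */
    double completion_target = 0.9;

    /* linear extrapolation: the whole WAL budget looks gone at t / elapsed_xlogs */
    printf("extrapolated time to hit checkpoint_segments: %.0f s\n",
           t / elapsed_xlogs);                              /* 50 s */

    /* to stay "on schedule", progress * target must keep up with elapsed_xlogs */
    printf("buffer fraction demanded after %.0f s: %.0f%%\n",
           t, 100.0 * elapsed_xlogs / completion_target);   /* ~22% */
    return 0;
}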
Can you try the attached patch? It modifies the above calculation to
take the full-page-write effect into account. I used X^1.5 as the
corrective function, which roughly reflects the typical WAL consumption
pattern. You can adjust the exponent, 1.5, to make the correction more
or less aggressive.
- Heikki
Attachments:
compensate-fpw-effect-on-checkpoint-scheduling-1.patch (application/x-patch)
diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 0dce6a8..fb02f56 100644
--- a/src/backend/postmaster/checkpointer.c
+++ b/src/backend/postmaster/checkpointer.c
@@ -763,6 +763,19 @@ IsCheckpointOnSchedule(double progress)
 		recptr = GetInsertRecPtr();
 		elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) / XLogSegSize) / CheckPointSegments;
 
+		/*
+		 * Immediately after a checkpoint, a lot more WAL is generated when
+		 * full_page_write is enabled, because every WAL record has to include
+		 * a full image of the modified page. It levels off as time passes and
+		 * more updates fall on pages that have already been modified since
+		 * the last checkpoint.
+		 *
+		 * To correct for that effect, apply a corrective factor on the
+		 * amount of WAL consumed so far.
+		 */
+		if (fullPageWrites)
+			elapsed_xlogs = pow(elapsed_xlogs, 1.5);
+
 		if (progress < elapsed_xlogs)
 		{
 			ckpt_cached_elapsed = elapsed_xlogs;
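For what it's worth, here is what the x^1.5 correction does to the "WAL
consumed" fraction: the early burst is discounted heavily while values near
1.0 are almost unchanged (standalone illustration, not part of the patch):
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double xs[] = {0.1, 0.2, 0.5, 0.9, 1.0};

    for (int i = 0; i < 5; i++)
        printf("elapsed_xlogs %.2f -> corrected %.3f\n", xs[i], pow(xs[i], 1.5));
    /* 0.10 -> 0.032, 0.20 -> 0.089, 0.50 -> 0.354, 0.90 -> 0.854, 1.00 -> 1.000 */
    return 0;
}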
(please keep the mailing list CC'd, and please don't top-post)
On 05/13/2015 05:00 AM, digoal zhou wrote:
I tested it, but using an exponent is not perfect in every environment.
Why can't we use time only?
As you mentioned yourself earlier, if you only use time but you reach
checkpoint_segments before checkpoint_timeout, you will not complete the
checkpoint until you'd already need to begin the next checkpoint. You
can't completely ignore checkpoint_segments.
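A small illustration of that point, with assumed numbers (not from the tests
in this thread):
#include <stdio.h>

int
main(void)
{
    double checkpoint_timeout_s = 300.0;
    double completion_target = 0.9;
    double segs_exhausted_at_s = 120.0;   /* assumed: WAL budget gone after 2 min */

    /* fraction of dirty buffers a purely time-paced checkpoint has written by then */
    double written = segs_exhausted_at_s / (checkpoint_timeout_s * completion_target);

    printf("only %.0f%% of the dirty buffers written when the next "
           "checkpoint is already due\n", written * 100.0);   /* ~44% */
    return 0;
}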
Comparing the numbers you give below with
compensate-fpw-effect-on-checkpoint-scheduling-1.patch, with the ones
from your first post, it looks like the patch already made the situation
much better. You still have a significant burst in the beginning of the
checkpoint cycle, but it's a lot smaller than without the patch. Before
the patch, the "count" topped at 9078, and below it topped at 2964.
There is a strange "lull" after the burst, I'm not sure what's going on
there, but overall it seems like a big improvement.
Did the patch alleviate the bump in latency that pgbench reports?
I put the "count" numbers from your original post and below into a
spreadsheet, and created some fancy charts. See attached. It shows the
same thing but with pretty pictures. Assuming we want the checkpoint to
be spread as evenly as possible across the cycle, the ideal would be a
straight line from 0 to about 150000 in 270 seconds in the cumulative
chart. You didn't give the full data, but you can extrapolate the lines
to get a rough picture of how close the different versions are to that
ideal.
In summary, the X^1.5 correction seems to work pretty well. It doesn't
completely eliminate the problem, but it makes it a lot better.
I don't want to over-compensate for the full-page-write effect either,
because there are also applications where that effect isn't so big. For
example, an application that performs a lot of updates, but all the
updates are on a small number of pages, so the full-page-write storm
immediately after checkpoint doesn't last long. A worst case for this
patch would be such an application - lots of updates on only a few pages
- with a long checkpoint_timeout but relatively small
checkpoint_segments, so that checkpoints are always driven by
checkpoint_segments. I'd like to see some benchmarking of that worst
case before committing anything like this.
----------------------------------end-----------------------------
checkpoint start
buffer__sync__start num_buffers: 524288, dirty_buffers: 156931
r1_or_w2 2, pid: 29132, min: 44, max: 151, avg: 52, sum: 49387, count: 932
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 95, avg: 49, sum: 41532, count: 837
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 747, avg: 54, sum: 100419, count: 1849
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 372, avg: 52, sum: 110701, count: 2090
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 115, avg: 57, sum: 147510, count: 2575
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 470, avg: 58, sum: 145217, count: 2476
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 120, avg: 54, sum: 161401, count: 2964
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 208, avg: 59, sum: 170280, count: 2847
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 10089, avg: 62, sum: 136106, count: 2181
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 41, max: 487, avg: 56, sum: 88990, count: 1570
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 39, max: 102, avg: 55, sum: 59807, count: 1083
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 557, avg: 56, sum: 117274, count: 2083
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 537, avg: 58, sum: 169867, count: 2882
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 147, avg: 60, sum: 92835, count: 1538
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 93, avg: 55, sum: 14641, count: 264
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 92, avg: 56, sum: 11834, count: 210
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 91, avg: 56, sum: 9151, count: 162
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 92, avg: 57, sum: 8621, count: 151
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 36, max: 90, avg: 57, sum: 7962, count: 139
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 93, avg: 58, sum: 7194, count: 123
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 95, avg: 58, sum: 7143, count: 123
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 89, avg: 57, sum: 6801, count: 118
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 49, max: 100, avg: 58, sum: 6818, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 90, avg: 57, sum: 6982, count: 121
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 88, avg: 55, sum: 6459, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 88, avg: 58, sum: 7022, count: 121
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 47, max: 94, avg: 57, sum: 5952, count: 104
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 49, max: 95, avg: 57, sum: 6871, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 49, max: 85, avg: 58, sum: 6829, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 89, avg: 57, sum: 6851, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 49, max: 100, avg: 57, sum: 6779, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 93, avg: 55, sum: 6502, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 47, max: 98, avg: 58, sum: 6805, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 38, max: 90, avg: 57, sum: 6771, count: 118
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 96, avg: 56, sum: 6593, count: 116
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 101, avg: 57, sum: 6809, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 100, avg: 57, sum: 6171, count: 107
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 105, avg: 57, sum: 6801, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 95, avg: 57, sum: 6792, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 93, avg: 56, sum: 6693, count: 118
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 93, avg: 57, sum: 6878, count: 120
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 93, avg: 56, sum: 6664, count: 117
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 94, avg: 57, sum: 7051, count: 123
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 92, avg: 57, sum: 6957, count: 120
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 94, avg: 57, sum: 6842, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 100, avg: 57, sum: 6865, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 49, max: 102, avg: 58, sum: 6915, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 94, avg: 57, sum: 6187, count: 107
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 50, max: 86, avg: 58, sum: 6957, count: 119
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 97, avg: 55, sum: 33636, count: 609
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 36, max: 90, avg: 55, sum: 34180, count: 620
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 29, max: 92, avg: 53, sum: 36569, count: 680
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 91, avg: 54, sum: 37374, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 86, avg: 54, sum: 33347, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 94, avg: 54, sum: 37603, count: 684
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 93, avg: 55, sum: 33777, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 104, avg: 55, sum: 37566, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 92, avg: 54, sum: 37037, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 42, max: 106, avg: 57, sum: 35181, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 99, avg: 54, sum: 36981, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 88, avg: 53, sum: 33202, count: 622
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 89, avg: 54, sum: 36825, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 88, avg: 53, sum: 33917, count: 635
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 89, avg: 55, sum: 36234, count: 658
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 99, avg: 55, sum: 37719, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 93, avg: 54, sum: 33491, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 96, avg: 54, sum: 37365, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 101, avg: 54, sum: 33481, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 37, max: 93, avg: 54, sum: 37102, count: 685
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 87, avg: 54, sum: 36968, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 84, avg: 54, sum: 33565, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 92, avg: 54, sum: 37271, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 48, max: 96, avg: 55, sum: 34272, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 85, avg: 54, sum: 37378, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 107, avg: 53, sum: 36715, count: 680
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 88, avg: 54, sum: 33620, count: 616
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 29, max: 94, avg: 54, sum: 37093, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 39, max: 110, avg: 53, sum: 33013, count: 612
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 97, avg: 54, sum: 37215, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 37, max: 90, avg: 54, sum: 37240, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 41, max: 95, avg: 54, sum: 33555, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 38, max: 89, avg: 54, sum: 37503, count: 683
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 38, max: 95, avg: 55, sum: 33803, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 89, avg: 56, sum: 38403, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 92, avg: 54, sum: 37354, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 93, avg: 55, sum: 33881, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 34, max: 91, avg: 54, sum: 37047, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 85, avg: 53, sum: 33003, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 92, avg: 53, sum: 36854, count: 683
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 92, avg: 54, sum: 36597, count: 673
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 96, avg: 54, sum: 33689, count: 620
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 92, avg: 54, sum: 37194, count: 684
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 90, avg: 53, sum: 32813, count: 612
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 100, avg: 54, sum: 37485, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 31, max: 97, avg: 54, sum: 33294, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 94, avg: 54, sum: 37320, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 28, max: 92, avg: 54, sum: 37067, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 87, avg: 54, sum: 33766, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 110, avg: 53, sum: 36220, count: 680
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 98, avg: 54, sum: 33442, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 97, avg: 55, sum: 37692, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 95, avg: 54, sum: 37073, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 88, avg: 54, sum: 33676, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 103, avg: 53, sum: 36770, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 96, avg: 54, sum: 33447, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 91, avg: 55, sum: 37643, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 38, max: 90, avg: 54, sum: 37377, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 103, avg: 56, sum: 34531, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 121, avg: 54, sum: 37412, count: 683
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 40, max: 89, avg: 54, sum: 33173, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 31, max: 94, avg: 54, sum: 37385, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 28, max: 106, avg: 55, sum: 38132, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 34, max: 96, avg: 55, sum: 33800, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 37, max: 98, avg: 56, sum: 38305, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 28, max: 104, avg: 55, sum: 33744, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 103, avg: 54, sum: 36923, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 36, max: 89, avg: 55, sum: 37797, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 103, avg: 56, sum: 34902, count: 620
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 88, avg: 55, sum: 38025, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 43, max: 102, avg: 56, sum: 34545, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 34, max: 94, avg: 55, sum: 37756, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 28, max: 93, avg: 54, sum: 33530, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 31, max: 97, avg: 55, sum: 37992, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 99, avg: 55, sum: 37923, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 39, max: 101, avg: 55, sum: 34027, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 28, max: 93, avg: 53, sum: 36078, count: 680
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 41, max: 89, avg: 51, sum: 31563, count: 612
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 31, max: 92, avg: 52, sum: 35596, count: 680
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 102, avg: 55, sum: 37816, count: 685
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 45, max: 102, avg: 55, sum: 33828, count: 613
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 93, avg: 54, sum: 37285, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 90, avg: 55, sum: 34037, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 86, avg: 54, sum: 37584, count: 684
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 32, max: 103, avg: 55, sum: 37946, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 97, avg: 56, sum: 34556, count: 617
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 33, max: 99, avg: 56, sum: 38213, count: 681
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 44, max: 97, avg: 56, sum: 34613, count: 614
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 101, avg: 55, sum: 37925, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 35, max: 93, avg: 55, sum: 35504, count: 639
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 46, max: 90, avg: 55, sum: 36459, count: 655
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 30, max: 97, avg: 54, sum: 37369, count: 682
----------------------------------end-----------------------------
r1_or_w2 2, pid: 29132, min: 31, max: 93, avg: 54, sum: 33161, count: 612
----------------------------------end----------------------------
- Heikki
Attachments:
checkpoint-progress-charts.ods (application/vnd.oasis.opendocument.spreadsheet)