RE: v7.1b4 bad performance
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Friday, February 16, 2001 7:13 PM
To: Schmidt, Peter
Cc: 'Bruce Momjian'; 'Michael Ansley'; 'pgsql-admin@postgresql.org'
Subject: Re: [ADMIN] v7.1b4 bad performance

"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
> I tried -B 1024 and got roughly the same results (~50 tps).

What were you using before?

> However, when I change WAL option commit_delay from the default of 5
> to 0, I get ~200 tps (which is double what I get with 7.0.3). I'm not
> sure I want to do this, do I?

Hmm. There have been several discussions about whether CommitDelay is
a good idea or not. What happens if you vary it --- try 1 microsecond,
and then various multiples of 1000. I suspect you may find that there
is no difference in the range 1..10000, then a step, then no change up
to 20000. In other words, your kernel may be rounding the delay up to
the next multiple of a clock tick, which might be 10 milliseconds.
That would explain a 50-tps limit real well...

BTW, have you tried pgbench with multiple clients (-c) rather than
just one?

			regards, tom lane
I get ~50 tps for any commit_delay value > 0. I've tried many values in the
range 0 - 999, and always get ~50 tps. commit_delay=0 always gets me ~200+
tps.
Yes, I have tried multiple clients, but got stuck on the glaring difference
between versions with a single client. The tests that I ran showed the same
kind of results you got earlier today, i.e. 1 client/1000 transactions = 10
clients/100 transactions.
So, is it OK to use commit_delay=0?
Peter
> I get ~50 tps for any commit_delay value > 0. I've tried many values in the
> range 0 - 999, and always get ~50 tps. commit_delay=0 always gets me ~200+
> tps.
>
> Yes, I have tried multiple clients but got stuck on the glaring difference
> between versions with a single client. The tests that I ran showed the same
> kind of results you got earlier today, i.e. 1 client/1000 transactions = 10
> clients/100 transactions.
>
> So, is it OK to use commit_delay=0?
commit_delay was designed to provide better performance for multi-user
workloads. If you are going to use it with only a single backend, you
certainly should set it to zero. If you will have multiple backends
committing at the same time, we are not sure whether 5 or 0 is the right
value. If a multi-user benchmark shows 0 is faster, we may change the
default.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
So, is it OK to use commit_delay=0?
Certainly. In fact, I think that's about to become the default ;-)
I have now experimented with several different platforms --- HPUX,
FreeBSD, and two considerably different strains of Linux --- and I find
that the minimum delay supported by select(2) is 10 or more milliseconds
on all of them, as much as 20 msec on some popular platforms. Try it
yourself (my test program is attached).
Thus, our past arguments about whether a few microseconds of delay
before commit are a good idea seem moot; we do not have any portable way
of implementing that, and a ten millisecond delay for commit is clearly
Not Good.
regards, tom lane
/* To use: gcc test.c, then
time ./a.out N
N=0 should return almost instantly, if your select(2) does not block as
per spec. N=1 shows the minimum achievable delay, * 1000 --- for
example, if time reports the elapsed time as 10 seconds, then select
has rounded your 1-microsecond delay request up to 10 milliseconds.
Some Unixen seem to throw in an extra ten millisec of delay just for
good measure, eg, on FreeBSD 4.2 N=1 takes 20 sec, N=20000 takes 30.
*/
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <sys/time.h>
#include <sys/types.h>

int
main(int argc, char **argv)
{
	struct timeval delay;
	int i, del;

	if (argc < 2)
	{
		fprintf(stderr, "usage: %s microseconds\n", argv[0]);
		return 1;
	}
	del = atoi(argv[1]);
	for (i = 0; i < 1000; i++)
	{
		delay.tv_sec = 0;
		delay.tv_usec = del;
		(void) select(0, NULL, NULL, NULL, &delay);
	}
	return 0;
}
I wrote:
Thus, our past arguments about whether a few microseconds of delay
before commit are a good idea seem moot; we do not have any portable way
of implementing that, and a ten millisecond delay for commit is clearly
Not Good.
I've now finished running a spectrum of pgbench scenarios, and I find
no case in which commit_delay = 0 is worse than commit_delay > 0.
Now this is just one benchmark on just one platform, but it's pretty
damning...
Platform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).
Minimum select(2) delay is 10 msec on this platform.
POSTMASTER OPTIONS: -i -B 1024 -N 100
$ PGOPTIONS='-c commit_delay=1' pgbench -c 1 -t 1000 bench
tps = 13.304624(including connections establishing)
tps = 13.323967(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench
tps = 16.614691(including connections establishing)
tps = 16.645832(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench
tps = 13.612502(including connections establishing)
tps = 13.712996(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench
tps = 14.674477(including connections establishing)
tps = 14.787715(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench
tps = 10.875912(including connections establishing)
tps = 10.932836(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench
tps = 12.853009(including connections establishing)
tps = 12.934365(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=1' pgbench -c 50 -t 100 bench
tps = 9.476856(including connections establishing)
tps = 9.520800(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 50 -t 100 bench
tps = 9.807925(including connections establishing)
tps = 9.854161(excluding connections establishing)
With -F (no fsync), it's the same story:
POSTMASTER OPTIONS: -i -o -F -B 1024 -N 100
$ PGOPTIONS='-c commit_delay=1' pgbench -c 1 -t 1000 bench
tps = 40.584300(including connections establishing)
tps = 40.708855(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench
tps = 51.585629(including connections establishing)
tps = 51.797280(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench
tps = 35.811729(including connections establishing)
tps = 36.448439(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench
tps = 43.878827(including connections establishing)
tps = 44.856029(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench
tps = 23.490464(including connections establishing)
tps = 23.749558(excluding connections establishing)
$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench
tps = 23.452935(including connections establishing)
tps = 23.716181(excluding connections establishing)
I vote for commit_delay = 0, unless someone can show cases where
positive delay is significantly better than zero delay.
regards, tom lane
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
So, is it OK to use commit_delay=0?
Certainly. In fact, I think that's about to become the default ;-)
I agree with Tom. I did some benchmarking tests using pgbench for a
computer magazine in Japan. I got almost equal or better results for
7.1 than for 7.0.3 with commit_delay=0. See included png file.
--
Tatsuo Ishii
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> I agree with Tom. I did some benchmarking tests using pgbench for a
> computer magazine in Japan. I got almost equal or better results for
> 7.1 than for 7.0.3 with commit_delay=0. See included png file.
Interesting curves. One thing you might like to know is that while
poking around with a profiler this afternoon, I found that the vast
majority of the work done for this benchmark is in the uniqueness
checks driven by the unique indexes. Declare those as plain (non-unique)
indexes and the TPS figures would probably go up noticeably. That
doesn't make the test invalid, but it does suggest that pgbench is
emphasizing one aspect of system performance to the exclusion of
others ...
regards, tom lane
> ... See included png file.
What kind of machine was this run on?
- Thomas
lockhart> > ... See included png file.
lockhart>
lockhart> What kind of machine was this run on?
lockhart>
lockhart> - Thomas
Sorry, I forgot to mention that.
SONY VAIO Z505CR/K (note PC)
Pentium III 750MHz/256MB memory/20GB IDE HDD
Linux (kernel 2.2.17)
configure --enable-multibyte=EUC_JP
postgresql.conf:
fsync = on
max_connections = 128
shared_buffers = 1024
silent_mode = on
commit_delay = 0
postmaster opts for 7.0.3:
-B 1024 -N 128 -S
pgbench settings:
scaling factor = 1
data excludes connection establishing time
the total number of transactions is always 640
(see included scripts I ran for the testing)
------------------------------------------------------
#! /bin/sh
pgbench -i test
for i in 1 2 4 8 16 32 64 128
do
t=`expr 640 / $i`
pgbench -t $t -c $i test
echo "===== sync ======"
sync;sync;sync;sleep 10
echo "===== sync done ======"
done
------------------------------------------------------
--
Tatsuo Ishii
* Tom Lane <tgl@sss.pgh.pa.us> [010216 22:49]:
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
So, is it OK to use commit_delay=0?
Certainly. In fact, I think that's about to become the default ;-)
I have now experimented with several different platforms --- HPUX,
FreeBSD, and two considerably different strains of Linux --- and I find
that the minimum delay supported by select(2) is 10 or more milliseconds
on all of them, as much as 20 msec on some popular platforms. Try it
yourself (my test program is attached).Thus, our past arguments about whether a few microseconds of delay
before commit are a good idea seem moot; we do not have any portable way
of implementing that, and a ten millisecond delay for commit is clearly
Not Good.regards, tom lane
Here is another one. UnixWare 7.1.1 on a P-III 500 with 256 MB RAM:
$ cc -o tgl.test -O tgl.test.c
$ time ./tgl.test 0
real 0m0.01s
user 0m0.01s
sys 0m0.00s
$ time ./tgl.test 1
real 0m10.01s
user 0m0.00s
sys 0m0.01s
$ time ./tgl.test 2
real 0m10.01s
user 0m0.00s
sys 0m0.00s
$ time ./tgl.test 3
real 0m10.11s
user 0m0.00s
sys 0m0.01s
$ uname -a
UnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5
$
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
On Sat, 17 Feb 2001, Tom Lane wrote:
[skip]
TL> Platform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).
TL> Minimum select(2) delay is 10 msec on this platform.
[skip]
TL> I vote for commit_delay = 0, unless someone can show cases where
TL> positive delay is significantly better than zero delay.
BTW, modern versions of the FreeBSD kernel have an HZ kernel option
which determines the timeslice granularity (the HZ value is the number
of timeslice periods per second, with a default of 100, i.e. 10 ms). On
modern CPUs HZ may be increased to at least 1000, and sometimes even to
5000 (unfortunately, I don't have a test platform at hand).
So, maybe you could test the select granularity at the ./configure phase
and then define the default commit_delay accordingly.
Your thoughts?
Sincerely,
D.Marck [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
TL> I vote for commit_delay = 0, unless someone can show cases where
TL> positive delay is significantly better than zero delay.

DM> BTW, for modern versions of FreeBSD kernels, there is an HZ kernel option
DM> which describes maximum timeslice granularity (actually, HZ value is
DM> number of timeslice periods per second, with default of 100 = 10 ms). On
DM> modern CPUs HZ may be increased to at least 1000, and sometimes even to
DM> 5000 (unfortunately, I haven't a test platform at hand).
DM>
DM> So, maybe you can test select granularity at ./configure phase and then
DM> define default commit_delay accordingly.
According to the BSD4.4 book by Karels/McKusick, even though computers
are faster now, increasing the Hz doesn't seem to improve performance.
This is probably because of cache misses from context switches.
On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:
I have just done an experiment with increasing HZ to 1000 on my own machine
(PII 374). Your test program reports 2 ms instead of 20. The downside of
increasing HZ is surely more overhead in the scheduler. Anyway, it's
a bit of data to dig into, I suppose ;-)
Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
default delay (5 us)
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 96.678008(including connections establishing)
tps = 96.982619(excluding connections establishing)
number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 77.538398(including connections establishing)
tps = 79.126914(excluding connections establishing)
number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 68.448429(including connections establishing)
tps = 70.957500(excluding connections establishing)
delay of 0
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 111.939751(including connections establishing)
tps = 112.335089(excluding connections establishing)
number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 84.262936(including connections establishing)
tps = 86.152702(excluding connections establishing)
number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 79.678831(including connections establishing)
tps = 83.106418(excluding connections establishing)
Results are very close... Another thing to dig into.
BTW, postgres parameters were: -B 256 -F -i -S
On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:
DM> I just done the experiment with increasing HZ to 1000 on my own machine
DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's
DM> a bit of data to dig into, I suppose ;-)
DM>
DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
Oh, I forgot to paste the results from the original system (with HZ=100).
Here they are:
delay = 5
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 47.422866(including connections establishing)
tps = 47.493439(excluding connections establishing)
number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 37.930605(including connections establishing)
tps = 38.308613(excluding connections establishing)
number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 35.757531(including connections establishing)
tps = 36.420532(excluding connections establishing)
delay = 0
number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 111.521859(including connections establishing)
tps = 111.904026(excluding connections establishing)
number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 62.808216(including connections establishing)
tps = 63.819590(excluding connections establishing)
number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 64.250431(including connections establishing)
tps = 66.438067(excluding connections establishing)
So, I suppose (very preliminarily, of course ;):
1 - at least for dedicated PostgreSQL servers it _may_ be
reasonable to increase HZ
2 - there are still no advantages to using delay != 0.
Your ideas?
Tom Lane wrote:
> I wrote:
> > Thus, our past arguments about whether a few microseconds of delay
> > before commit are a good idea seem moot; we do not have any portable way
> > of implementing that, and a ten millisecond delay for commit is clearly
> > Not Good.
>
> I've now finished running a spectrum of pgbench scenarios, and I find
> no case in which commit_delay = 0 is worse than commit_delay > 0.
> Now this is just one benchmark on just one platform, but it's pretty
> damning...
In your test cases I always see "where bid = 1" in the "update branches"
statement, i.e.

	update branches set bbalance = bbalance + ... where bid = 1

ISTM there are no overlapping COMMITs in your scenarios, due to
the lock conflicts on that row.
Regards,
Hiroshi Inoue
I did not realize how much WAL improved performance when using fsync.
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
So, is it OK to use commit_delay=0?
Certainly. In fact, I think that's about to become the default ;-)
I agree with Tom. I did some benchmarking tests using pgbench for a
computer magazine in Japan. I got a almost equal or better result for
7.1 than 7.0.3 if commit_delay=0. See included png file.
--
Tatsuo Ishii
[ Attachment, skipping... ]
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> In your test cases I always see "where bid = 1" in the "update branches"
> statement, i.e.
> update branches set bbalance = bbalance + ... where bid = 1
> ISTM there are no overlapping COMMITs in your scenarios, due to
> their lock conflicts.
Hmm. It looks like using a 'scaling factor' larger than 1 is necessary
to spread out the updates of "branches". AFAIR, the people who reported
runs with scaling factors > 1 got pretty much the same results though.
regards, tom lane
Tom Lane wrote:
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > In your test cases I always see "where bid = 1" in the "update branches"
> > statement, i.e.
> > update branches set bbalance = bbalance + ... where bid = 1
> > ISTM there are no overlapping COMMITs in your scenarios, due to
> > their lock conflicts.
>
> Hmm. It looks like using a 'scaling factor' larger than 1 is necessary
> to spread out the updates of "branches". AFAIR, the people who reported
> runs with scaling factors > 1 got pretty much the same results though.
People seem to believe your results are decisive
and would cite them if evidence were required.

All clients of pgbench execute the same sequence
of queries. There could be various conflicts, e.g.
ordinary locks, buffer locks, IO spinlocks ...

I've been doubtful that pgbench is an appropriate
(much less the only appropriate) test case for
evaluating commit_delay.
Regards,
Hiroshi Inoue
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> I've been doubtful that pgbench is an appropriate
> (much less the only appropriate) test case for evaluating commit_delay.
Of course it isn't. Never trust only one benchmark.
I've asked the Great Bridge folks to run their TPC-C benchmark with both
zero and small nonzero commit_delay. It will be a couple of days before
we have the results, however. Can anyone else offer any comparisons
based on other multiuser benchmarks?
regards, tom lane
Tom Lane wrote:
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > I've been doubtful that pgbench is an appropriate
> > (much less the only appropriate) test case for evaluating commit_delay.
>
> Of course it isn't. Never trust only one benchmark.
>
> I've asked the Great Bridge folks to run their TPC-C benchmark with both
> zero and small nonzero commit_delay. It will be a couple of days before
> we have the results, however. Can anyone else offer any comparisons
> based on other multiuser benchmarks?
I changed pgbench so that each connection connects
to a different database, and got the following results.
The results of
pgbench -c 10 -t 100
[CommitDelay=0]
1st)tps = 18.484611(including connections establishing)
tps = 19.827988(excluding connections establishing)
2nd)tps = 18.754826(including connections establishing)
tps = 19.352268(excluding connections establishing)
3rd)tps = 18.771225(including connections establishing)
tps = 19.261843(excluding connections establishing)
[CommitDelay=1]
1st)tps = 20.317649(including connections establishing)
tps = 20.975151(excluding connections establishing)
2nd)tps = 24.208025(including connections establishing)
tps = 24.663665(excluding connections establishing)
3rd)tps = 25.821156(including connections establishing)
tps = 26.842741(excluding connections establishing)
Regards,
Hiroshi Inoue
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> I changed pgbench so that each connection connects
> to a different database, and got the following results.
Hmm, you mean you set up a separate test database for each pgbench
"client", but all under the same postmaster?
> The results of
> pgbench -c 10 -t 100
>
> [CommitDelay=0]
> 1st)tps = 18.484611(including connections establishing)
>     tps = 19.827988(excluding connections establishing)
> 2nd)tps = 18.754826(including connections establishing)
>     tps = 19.352268(excluding connections establishing)
> 3rd)tps = 18.771225(including connections establishing)
>     tps = 19.261843(excluding connections establishing)
>
> [CommitDelay=1]
> 1st)tps = 20.317649(including connections establishing)
>     tps = 20.975151(excluding connections establishing)
> 2nd)tps = 24.208025(including connections establishing)
>     tps = 24.663665(excluding connections establishing)
> 3rd)tps = 25.821156(including connections establishing)
>     tps = 26.842741(excluding connections establishing)
What platform is this on --- in particular, how long a delay
is CommitDelay=1 in reality? What -B did you use?
regards, tom lane