logical replication - still unstable after all these months
If you run a 1-minute pgbench session over a logical replication
connection and repeat that 100x, this is what you get:
At clients 90, 64, 8, scale 25:
-- out_20170525_0944.txt
100 -- pgbench -c 90 -j 8 -T 60 -P 12 -n -- scale 25
93 -- All is well.
7 -- Not good.
-- out_20170525_1426.txt
100 -- pgbench -c 64 -j 8 -T 60 -P 12 -n -- scale 25
82 -- All is well.
18 -- Not good.
-- out_20170525_2049.txt
100 -- pgbench -c 8 -j 8 -T 60 -P 12 -n -- scale 25
90 -- All is well.
10 -- Not good.
At clients 90, 64, 8, scale 5:
-- out_20170526_0126.txt
100 -- pgbench -c 90 -j 8 -T 60 -P 12 -n -- scale 5
98 -- All is well.
2 -- Not good.
-- out_20170526_0352.txt
100 -- pgbench -c 64 -j 8 -T 60 -P 12 -n -- scale 5
97 -- All is well.
3 -- Not good.
-- out_20170526_0621.txt
45 -- pgbench -c 8 -j 8 -T 60 -P 12 -n -- scale 5
41 -- All is well.
3 -- Not good.
(That last run obviously hadn't finished yet.)
I think this is pretty awful, really, for a beta level.
The above installations (master+replica) are with Petr Jelinek's (and
Michael Paquier's) last patches
0001-Fix-signal-handling-in-logical-workers.patch
0002-Make-tablesync-worker-exit-when-apply-dies-while-it-.patch
0003-Receive-invalidation-messages-correctly-in-tablesync.patch
Remove-the-SKIP-REFRESH-syntax-suggar-in-ALTER-SUBSC-v2.patch
Now, it could be that there is somehow something wrong with my
test-setup (as opposed to some bug in logical replication). I can post
my test program, but I'll do that separately (below is the core of all
my tests -- it's basically still that very first test that I started out
with, many months ago...)
I'd like to find out/know more about:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
- Is anyone else testing the same way (more or less continually), finding
only success?
- Which of the Open Items could be responsible for this failure rate? (I
don't see a match.)
- What tests do others do? Could we somehow concentrate results and
method somewhere?
Thanks,
Erik Rijkers
PS
The core of the 'pgbench_derail' test (bash) is simply:
echo "drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
drop table if exists pgbench_tellers;
drop table if exists pgbench_history;" | psql -qXp $port1 \
&& echo "drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
drop table if exists pgbench_tellers;
drop table if exists pgbench_history;" | psql -qXp $port2 \
&& pgbench -p $port1 -qis $scale \
&& echo "alter table pgbench_history add column hid serial primary key;" \
   | psql -q1Xp $port1 \
&& pg_dump -F c -p $port1 \
     --exclude-table-data=pgbench_history \
     --exclude-table-data=pgbench_accounts \
     --exclude-table-data=pgbench_branches \
     --exclude-table-data=pgbench_tellers \
     -t pgbench_history -t pgbench_accounts \
     -t pgbench_branches -t pgbench_tellers \
   | pg_restore -1 -p $port2 -d testdb
appname=derail2
echo "create publication pub1 for all tables;" | psql -p $port1 -aqtAX
echo "create subscription sub1 connection 'port=${port1}
application_name=$appname' publication pub1 with(enabled=false);
alter subscription sub1 enable;" | psql -p $port2 -aqtAX
pgbench -c $clients -j $threads -T $duration -P $pseconds -n   # scale $scale
Now compare md5's of the sorted content of each of the 4 pgbench tables
on primary and replica. They should be the same.
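A minimal sketch of that comparison step (my wording, not Erik's exact code): hash the sorted table dump and compare the digests. The real input would come from psql against each port (shown as a comment); here the helper is exercised with plain stdin so the logic can be seen without a running cluster. `table_md5` is a hypothetical helper name.

```shell
#!/bin/sh
# Hash whatever arrives on stdin, the way the test hashes a sorted
# table dump.  In real use the input would come from psql, e.g.:
#   echo "select * from pgbench_accounts order by aid" \
#     | psql -qtAX -p "$port" | table_md5
table_md5() {
  md5sum | cut -b 1-9
}

# Two identical "table dumps" must hash the same; a differing row must not.
a=$(printf '1|100\n2|200\n' | table_md5)
b=$(printf '1|100\n2|200\n' | table_md5)
c=$(printf '1|100\n2|999\n' | table_md5)
echo "same content, same md5:      $([ "$a" = "$b" ] && echo yes || echo no)"
echo "different content, different: $([ "$a" != "$c" ] && echo yes || echo no)"
```

If all four per-table digests agree between primary and replica, the run counts as 'ok'.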
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 26 May 2017 at 07:10, Erik Rijkers <er@xs4all.nl> wrote:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
What type of failure are you getting?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-05-26 08:58, Simon Riggs wrote:
On 26 May 2017 at 07:10, Erik Rijkers <er@xs4all.nl> wrote:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
What type of failure are you getting?
The failure is that in the result state the replicated tables differ
from the original tables.
For instance,
-- out_20170525_0944.txt
100 -- pgbench -c 90 -j 8 -T 60 -P 12 -n -- scale 25
93 -- All is well.
7 -- Not good.
These numbers mean: the result state of primary and replica is not the
same, in 7 out of 100 runs.
'not the same state' means: at least one of the 4 md5's of the sorted
content of the 4 pgbench tables on the primary is different from those
taken from the replica.
So, 'failure' means: the 4 pgbench tables on primary and replica are not
exactly the same after the (one-minute) pgbench-run has finished, and
logical replication has 'finished'. (Plenty of time is given for the
replica to catch up: the test only calls 'failure' after waiting 20 times
for 15 seconds each, and 20 times finding the same erroneous state
(erroneous because it is not the same as on the primary).)
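That retry logic can be sketched as follows (a sketch with hypothetical names: `same_state()` stands in for the real per-table md5 comparison across the two ports, and the extra tries/pause parameters exist only so the sketch can be demonstrated without waiting 5 minutes):

```shell
#!/bin/sh
# Re-check up to 20 times, 15 seconds apart; declare failure only if
# the replica still differs on the last attempt.
same_state() { [ "$1" = "$2" ]; }   # placeholder for the real md5 comparison

wait_for_catchup() {
  primary=$1; replica=$2; tries=${3:-20}; pause=${4:-15}
  i=1
  while [ "$i" -le "$tries" ]; do
    if same_state "$primary" "$replica"; then
      echo "-- All is well."
      return 0
    fi
    sleep "$pause"
    i=$((i + 1))
  done
  echo "-- Not good."
  return 1
}

wait_for_catchup "abc123" "abc123"               # prints: -- All is well.
wait_for_catchup "abc123" "def456" 1 0 || true   # prints: -- Not good.
```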
I would really like to know if you think that that doesn't amount to
'failure'.
On 26 May 2017 at 08:27, Erik Rijkers <er@xs4all.nl> wrote:
On 2017-05-26 08:58, Simon Riggs wrote:
On 26 May 2017 at 07:10, Erik Rijkers <er@xs4all.nl> wrote:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
What type of failure are you getting?
The failure is that in the result state the replicated tables differ from
the original tables.
An important point to note is that this is time-dependent.
I would really like to know if you think that that doesn't amount to
'failure'.
Yes, your test has failed.
Even one record on one test is a serious problem and needs to be fixed.
If we can find out what the bug is with a repeatable test case we can fix it.
Could you provide more details? Thanks
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-05-26 09:40, Simon Riggs wrote:
If we can find out what the bug is with a repeatable test case we can
fix it.
Could you provide more details? Thanks
I will, just need some time to clean things up a bit.
But what I would like is for someone else to repeat my 100x1-minute
tests, taking as core that snippet I posted in my previous email. I
built bash-stuff around that core (to take md5's, shut-down/start-up the
two instances between runs, write info to log-files, etc). But it would
be good if someone else made that separately because if that then does
not fail, it would prove that my test-harness is at fault (and not
logical replication).
The idea is simple enough:
startup instance1
startup instance2 (on same machine)
primary: init pgbench tables
primary: add primary key to pgbench_history
copy empty tables to replica by dump/restore
primary: start publication
replica: start subscription
primary: run 1-minute pgbench
wait till the 4 md5's of primary pgbench tables
are the same as the 4 md5's of replica pgbench
tables (this will need a time-out).
log 'ok' or 'not ok'
primary: clean up publication
replica: clean up subscription
shutdown primary
shutdown replica
this whole thing 100x
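The steps above can be sketched as a bash driver in dry-run form (my sketch, not Erik's actual pgbench_derail2.sh: ports, data directories, and pub/sub names are placeholders, and `run()` only echoes each command instead of executing it; swap its body for "$@" to drive real instances):

```shell
#!/bin/sh
# Dry-run sketch of one test cycle.  Nothing is executed: run() prints
# the command it stands for.
port1=6972; port2=6973; scale=25; clients=64; duration=60

run() { echo "+ $*"; }   # replace body with: "$@"  to execute for real

one_cycle() {
  run pg_ctl -D data1 -o "-p $port1" -w start
  run pg_ctl -D data2 -o "-p $port2" -w start
  run pgbench -p "$port1" -qis "$scale"
  run psql -p "$port1" -c "alter table pgbench_history add column hid serial primary key;"
  run sh -c "pg_dump -F c -s -p $port1 testdb | pg_restore -1 -p $port2 -d testdb"
  run psql -p "$port1" -c "create publication pub1 for all tables;"
  run psql -p "$port2" -c "create subscription sub1 connection 'port=$port1' publication pub1;"
  run pgbench -p "$port1" -c "$clients" -j 8 -T "$duration" -P 12 -n
  run echo "compare md5s of the 4 sorted tables, with a timeout; log ok / not ok"
  run psql -p "$port2" -c "drop subscription sub1;"
  run psql -p "$port1" -c "drop publication pub1;"
  run pg_ctl -D data2 -w stop
  run pg_ctl -D data1 -w stop
}

one_cycle
```

Wrapping `one_cycle` in a `for x in $(seq 1 100)` loop gives the 100x repetition.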
On 26/05/17 20:09, Erik Rijkers wrote:
On 2017-05-26 09:40, Simon Riggs wrote:
If we can find out what the bug is with a repeatable test case we can
fix it.
Could you provide more details? Thanks
I will, just need some time to clean things up a bit.
But what I would like is for someone else to repeat my 100x1-minute
tests, taking as core that snippet I posted in my previous email. I
built bash-stuff around that core (to take md5's, shut-down/start-up
the two instances between runs, write info to log-files, etc). But it
would be good if someone else made that separately because if that
then does not fail, it would prove that my test-harness is at fault
(and not logical replication).
Will do - what I had been doing was running pgbench, waiting until the
row counts on the replica pgbench_history were the same as the primary,
then summing the %balance and delta fields from the primary and replica
dbs and comparing. So far - all match up ok. However I'd been running
longer time frames (5 minutes), so not the same number of repetitions as
yet.
regards
Mark
On 2017-05-26 10:29, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
On 2017-05-26 09:40, Simon Riggs wrote:
If we can find out what the bug is with a repeatable test case we can
fix it.
Could you provide more details? Thanks
I will, just need some time to clean things up a bit.
But what I would like is for someone else to repeat my 100x1-minute
tests, taking as core that snippet I posted in my previous email. I
built bash-stuff around that core (to take md5's, shut-down/start-up
the two instances between runs, write info to log-files, etc). But it
would be good if someone else made that separately because if that
then does not fail, it would prove that my test-harness is at fault
(and not logical replication).
Will do - what I had been doing was running pgbench, waiting until the
Great!
You'll have to think about whether to go with instances of either
master, or master+those 4 patches. I guess either choice makes sense.
row counts on the replica pgbench_history were the same as the
primary, then summing the %balance and delta fields from the primary
and replica dbs and comparing. So far - all match up ok. However I'd
I did number-summing for a while as well (because it's a lot faster than
taking md5's over the full content).
But the problem with summing is that (I think) in the end you cannot be
really sure that the result is correct (false positives are possible,
although I haven't worked out the odds).
been running longer time frames (5 minutes), so not the same number
of repetitions as yet.
I've run 3600-, 30- and 15-minute runs too, but in this case (these 100x
tests) I wanted especially to test the area around startup/initialisation
of logical replication. Also the increasing quality of logical replication
(once it runs with the correct
thanks,
Erik Rijkers
On 05/26/2017 12:57 PM, Erik Rijkers wrote:
The failure is that in the result state the replicated tables differ
from the original tables.
I am also getting similar behavior
Master =
run pgbench with scaling factor = 1 (./pgbench -i -s 1 postgres)
delete rows from pgbench_history (delete from pgbench_history)
create publication (create publication pub for table pgbench_history)
Slave =
run pgbench with scaling factor = 1 (./pgbench -i -s 1 postgres -p 5000)
delete rows from pgbench_history (delete from pgbench_history)
create subscription (create subscription sub connection 'dbname=postgres
host=localhost user=centos' publication pub;)
create a test.sql file , having an insert statement
[centos@centos-cpula bin]$ cat test.sql
insert into pgbench_history values (1,1,1,1,now(),'anv');
now run pgbench with -T / -c / -j options
First time = ./pgbench -t 5 -c 90 -j 90 -f test.sql postgres
counts on Master/slave are the SAME.
run second time =
./pgbench -T 20 -c 90 -j 90 -f test.sql postgres
check the row count on master/standby
Master=
postgres=# select count(*) from pgbench_history ;
count
--------
536836
(1 row)
Standby =
postgres=# select count(*) from pgbench_history ;
count
---------
1090959
(1 row)
--
regards,tushar
EnterpriseDB https://www.enterprisedb.com/
The Enterprise PostgreSQL Company
Hi,
Hmm, I was under the impression that the changes we proposed in the
snapbuild thread fixed your issues, does this mean they didn't? Or the
modified versions of those that were eventually committed didn't? Or did
issues reappear at some point?
--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2017-05-26 15:59, Petr Jelinek wrote:
Hi,
Hmm, I was under the impression that the changes we proposed in the
snapbuild thread fixed your issues, does this mean they didn't? Or the
modified versions of those that were eventually committed didn't? Or
did
issues reappear at some point?
I do think the snapbuild fixes solved certain problems. I can't say
what causes the present problems (as I have said, I suspect logical
replication, but also my own test-harness: perhaps it leaves some
error-state lying around, although I do try hard to prevent that) -- so
I just don't know.
I wouldn't say that problems (re)appeared at a certain point; my
impression is rather that logical replication has become better and
better. But I kept getting the odd failure, without a clear cause, but
always (eventually) repeatable on other machines. I did the 1-minute
pgbench-derail version exactly because of the earlier problems with
snapbuild: I wanted a test that does a lot of starting and stopping of
publication and subscription.
Erik Rijkers
Erik Rijkers wrote:
I wouldn't say that problems (re)appeared at a certain point; my impression
is rather that logical replication has become better and better. But I kept
getting the odd failure, without a clear cause, but always (eventually)
repeatable on other machines. I did the 1-minute pgbench-derail version
exactly because of the earlier problems with snapbuild: I wanted a test that
does a lot of starting and stopping of publication and subscription.
I think it is pretty unlikely that the logical replication plumbing is
the buggy place. You're just seeing it now because we didn't have any
mechanism as convenient to consume logical decoding output. In other
words, I strongly suspect that the hypothetical bugs are on the logical
decoding side (and snapbuild sounds the most promising candidate) rather
than in logical replication per se.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 26/05/17 16:51, Alvaro Herrera wrote:
Erik Rijkers wrote:
I wouldn't say that problems (re)appeared at a certain point; my impression
is rather that logical replication has become better and better. But I kept
getting the odd failure, without a clear cause, but always (eventually)
repeatable on other machines. I did the 1-minute pgbench-derail version
exactly because of the earlier problems with snapbuild: I wanted a test that
does a lot of starting and stopping of publication and subscription.
I think it is pretty unlikely that the logical replication plumbing is
the buggy place. You're just seeing it now because we didn't have any
mechanism as convenient to consume logical decoding output. In other
words, I strongly suspect that the hypothetical bugs are on the logical
decoding side (and snapbuild sounds the most promising candidate) rather
than in logical replication per se.
Well, that was true for the previous issues Erik found as well (mostly
snapshot builder was problematic). But that does not mean there are no
issues elsewhere. We could do with some more output from the tests (do
they log some intermediary states of those md5 checksums, maybe numbers
of rows etc?), descriptions of the problems, errors from logs, etc. I for
example don't get any issues from a similar test to the one described in
this thread, so without more info it might be hard to reproduce and fix
whatever the underlying issue is.
--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, May 26, 2017 at 5:17 AM, tushar <tushar.ahuja@enterprisedb.com>
wrote:
run second time =
./pgbench -T 20 -c 90 -j 90 -f test.sql postgres
check the row count on master/standby
Master=
postgres=# select count(*) from pgbench_history ;
count
--------
536836
(1 row)
Standby =
postgres=# select count(*) from pgbench_history ;
count
---------
1090959
(1 row)
Hi Tushar,
pgbench starts out by truncating pgbench_history. That truncation does not
get replicated to the subscriber. The later inserts do. So your
subscriber ends up with about twice as many rows.
Cheers,
Jeff
On Fri, May 26, 2017 at 12:27 AM, Erik Rijkers <er@xs4all.nl> wrote:
On 2017-05-26 08:58, Simon Riggs wrote:
On 26 May 2017 at 07:10, Erik Rijkers <er@xs4all.nl> wrote:
- Do you agree this number of failures is far too high?
- Am I the only one finding so many failures?
What type of failure are you getting?
The failure is that in the result state the replicated tables differ from
the original tables.
But what is the actual failure? Which tables differ? Do they have the
same number of rows? Do they only differ in the *balance column or
something else? Are they transactionally consistent?
I have not been able to replicate the problem.
Cheers,
Jeff
On 26/05/17 20:09, Erik Rijkers wrote:
The idea is simple enough:
startup instance1
startup instance2 (on same machine)
primary: init pgbench tables
primary: add primary key to pgbench_history
copy empty tables to replica by dump/restore
primary: start publication
replica: start subscription
primary: run 1-minute pgbench
wait till the 4 md5's of primary pgbench tables
are the same as the 4 md5's of replica pgbench
tables (this will need a time-out).
log 'ok' or 'not ok'
primary: clean up publication
replica: clean up subscription
shutdown primary
shutdown replica
this whole thing 100x
I might have a look at scripting this up (especially if it keeps raining
here)...
Some questions that might help me get it right:
- do you think we need to stop and start the instances every time?
- do we need to init pgbench each time?
- could we just drop the subscription and publication and truncate the
replica tables instead?
- what scale pgbench are you running?
- how many clients for the 1 min pgbench run?
- are you starting the pgbench run while the copy_data jobs for the
subscription are still running?
- how exactly are you calculating those md5's?
Cheers
Mark
On 2017-05-27 01:35, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
this whole thing 100x
Some questions that might help me get it right:
- do you think we need to stop and start the instances every time?
- do we need to init pgbench each time?
- could we just drop the subscription and publication and truncate the
replica tables instead?
I have done all that in earlier versions.
I deliberately added these 'complications' in view of the intractability
of the problem: my fear is that an earlier failure leaves some
half-failed state behind in an instance, which might then cause more
failures. This would undermine the intent of the whole exercise (which
is to count the success/failure rate). So it is important to be as sure
as possible that each cycle starts out as cleanly as possible.
- what scale pgbench are you running?
I use a small script to call the main script; at the moment it does
something like:
-------------------
duration=60
from=1
to=100
for scale in 25 5
do
for clients in 90 64 8
do
date_str=$(date +"%Y%m%d_%H%M")
outfile=out_${date_str}.txt
time for x in `seq $from $to`
do
./pgbench_derail2.sh $scale $clients $duration $date_str
[...]
-------------------
- how many clients for the 1 min pgbench run?
see above
- are you starting the pgbench run while the copy_data jobs for the
subscription are still running?
I assume with copy_data you mean the data sync of the original table
before pgbench starts.
And yes, I think here might be the origin of the problem.
(I think the problem I get is actually easily avoided by putting wait
states here and there between the separate steps. But the testing idea
here is to force the system into error, not to avoid errors.)
- how exactly are you calculating those md5's?
Here is the bash function: cb (I forget what that stands for, I guess
'content bench'). $outf is a log file to which the program writes
output:
---------------------------
function cb()
{
    # display the 4 pgbench tables' accumulated content as md5s
    # a,b,t,h stand for: pgbench_accounts, -branches, -tellers, -history
    num_tables=$( echo "select count(*) from pg_class where relkind = 'r' and relname ~ '^pgbench_'" | psql -qtAX )
    if [[ $num_tables -ne 4 ]]
    then
        echo "pgbench tables not 4 - exit" >> $outf
        exit
    fi
    for port in $port1 $port2
    do
        md5_a=$(echo "select * from pgbench_accounts order by aid" | psql -qtAXp $port | md5sum | cut -b 1-9)
        md5_b=$(echo "select * from pgbench_branches order by bid" | psql -qtAXp $port | md5sum | cut -b 1-9)
        md5_t=$(echo "select * from pgbench_tellers  order by tid" | psql -qtAXp $port | md5sum | cut -b 1-9)
        md5_h=$(echo "select * from pgbench_history  order by hid" | psql -qtAXp $port | md5sum | cut -b 1-9)
        cnt_a=$(echo "select count(*) from pgbench_accounts" | psql -qtAXp $port)
        cnt_b=$(echo "select count(*) from pgbench_branches" | psql -qtAXp $port)
        cnt_t=$(echo "select count(*) from pgbench_tellers"  | psql -qtAXp $port)
        cnt_h=$(echo "select count(*) from pgbench_history"  | psql -qtAXp $port)
        md5_total[$port]=$( echo "${md5_a} ${md5_b} ${md5_t} ${md5_h}" | md5sum )
        printf "$port a,b,t,h: %8d %6d %6d %6d" $cnt_a $cnt_b $cnt_t $cnt_h
        echo -n " $md5_a $md5_b $md5_t $md5_h"
        if   [[ $port -eq $port1 ]]; then echo    " master"
        elif [[ $port -eq $port2 ]]; then echo -n " replica"
        else echo " ERROR "
        fi
    done
    if [[ "${md5_total[$port1]}" == "${md5_total[$port2]}" ]]
    then
        echo " ok"
    else
        echo " NOK"
    fi
}
---------------------------
this enables:
echo "-- getting md5 (cb)"
cb_text1=$(cb)
and testing that string like:
if echo "$cb_text1" | grep -qw 'replica ok';
then
echo "-- All is well."
[...]
Later today I'll try to clean up the whole thing and post it.
On 2017-05-27 01:35, Mark Kirkwood wrote:
On 26/05/17 20:09, Erik Rijkers wrote:
The idea is simple enough:
startup instance1
startup instance2 (on same machine)
primary: init pgbench tables
primary: add primary key to pgbench_history
copy empty tables to replica by dump/restore
primary: start publication
replica: start subscription
primary: run 1-minute pgbench
wait till the 4 md5's of primary pgbench tables
are the same as the 4 md5's of replica pgbench
tables (this will need a time-out).
log 'ok' or 'not ok'
primary: clean up publication
replica: clean up subscription
shutdown primary
shutdown replica
this whole thing 100x
Here is what I have:
instances.sh:
starts up 2 assert enabled sessions
instances_fast.sh:
alternative to instances.sh
starts up 2 assert disabled 'fast' sessions
testset.sh
loop to call pgbench_derail2.sh with varying params
pgbench_derail2.sh
main test program
can be called 'standalone'
./pgbench_derail2.sh $scale $clients $duration $date_str
so for instance this should work:
./pgbench_derail2.sh 25 64 60 20170527_1019
to remove publication and subscription from sessions, add a 5th
parameter 'clean'
./pgbench_derail2.sh 1 1 1 1 'clean'
pubsub.sh
displays replication state. also called by pgbench_derail2.sh
must be in path
result.sh
display results
I keep this in a screen-session as:
watch -n 20 './result.sh 201705'
Peculiar to my setup also:
server version at compile time stamped with date + commit hash
I misuse information_schema.sql_packages at compile time to store
patch information
instances are in $pg_stuff_dir/pg_installations/pgsql.<project name>
So you'll have to comment out a line here and there, and adapt paths,
ports, and things like that.
It's a bit messy, I should have used perl from the beginning...
Good luck :)
Erik Rijkers
Attachments:
  instances.sh
  instances_fast.sh
  testset.sh
  pgbench_derail2.sh
  pubsub.sh
  results.sh
On 2017-05-27 10:30, Erik Rijkers wrote:
On 2017-05-27 01:35, Mark Kirkwood wrote:
Here is what I have:
instances.sh:
testset.sh
pgbench_derail2.sh
pubsub.sh
To be clear:
( Apart from that standalone call like
./pgbench_derail2.sh $scale $clients $duration $date_str
)
I normally run by editing the parameters in testset.sh, then run:
./testset.sh
That then shows a tail -F of the output logfile (to paste into another
screen); yet another screen runs the 'watch -n20 results.sh' line.
The output files are the .txt files.
The logfiles of the instances are (at the end of each test) copied to
directory logfiles/
under a meaningful name that shows the parameters, and with an extension
like '.ok.log' or '.NOK.log'.
I am very curious at your results.
On 27 May 2017 at 09:44, Erik Rijkers <er@xs4all.nl> wrote:
I am very curious at your results.
We take your bug report on good faith, but we still haven't seen
details of the problem or how to recreate it.
Please post some details. Thanks.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On May 27, 2017 6:13:19 AM EDT, Simon Riggs <simon@2ndquadrant.com> wrote:
On 27 May 2017 at 09:44, Erik Rijkers <er@xs4all.nl> wrote:
I am very curious at your results.
We take your bug report on good faith, but we still haven't seen
details of the problem or how to recreate it.
Please post some details. Thanks.
?
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.