WIP Patch: Pgbench Serialization and deadlock errors

Started by Marina Polyakova · almost 9 years ago · 203 messages · pgsql-hackers
#1 Marina Polyakova
m.polyakova@postgrespro.ru

Hello, hackers!

Now in pgbench we can test only transactions with Read Committed
isolation level because client sessions are disconnected forever on
serialization failures. There were some proposals and discussions about
it (see message here [1] and thread here [2]).

I suggest a patch where pgbench client sessions are not disconnected
because of serialization or deadlock failures and these failures are
mentioned in reports. In details:
- transaction with one of these failures continue run normally, but its
result is rolled back;
- if there were these failures during script execution this
"transaction" is marked
appropriately in logs;
- numbers of "transactions" with these failures are printed in progress,
in aggregation logs and in the end with other results (all and for each
script);

Advanced options:
- mostly for testing built-in scripts: you can set the default
transaction isolation level by the appropriate benchmarking option (-I);
- for more detailed reports: to know per-statement serialization and
deadlock failures you can use the appropriate benchmarking option
(--report-failures).

Also: TAP tests for new functionality and changed documentation with new
examples.

Patches are attached. Any suggestions are welcome!

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

[1]: /messages/by-id/4EC65830020000250004323F@gw.wicourts.gov
[2]: /messages/by-id/alpine.DEB.2.02.1305182259550.1473@localhost6.localdomain6

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

- v1-0002-Pgbench-Set-default-transaction-isolation-level.patch (text/x-diff, +141 -2)
- v1-0003-Pgbench-Report-per-statement-serialization-and-de.patch (text/x-diff, +60 -10)
- v1-0004-Pgbench-Fix-documentation.patch (text/x-diff, +152 -29)
- v1-0001-Pgbench-Serialization-and-deadlock-errors.patch (text/x-diff, +308 -30)
#2 Robert Haas
robertmhaas@gmail.com
In reply to: Marina Polyakova (#1)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Wed, Jun 14, 2017 at 4:48 AM, Marina Polyakova
<m.polyakova@postgrespro.ru> wrote:

Now in pgbench we can test only transactions with Read Committed isolation
level because client sessions are disconnected forever on serialization
failures. There were some proposals and discussions about it (see message
here [1] and thread here [2]).

I suggest a patch where pgbench client sessions are not disconnected because
of serialization or deadlock failures and these failures are mentioned in
reports. In details:
- transaction with one of these failures continue run normally, but its
result is rolled back;
- if there were these failures during script execution this "transaction" is
marked
appropriately in logs;
- numbers of "transactions" with these failures are printed in progress, in
aggregation logs and in the end with other results (all and for each
script);

Advanced options:
- mostly for testing built-in scripts: you can set the default transaction
isolation level by the appropriate benchmarking option (-I);
- for more detailed reports: to know per-statement serialization and
deadlock failures you can use the appropriate benchmarking option
(--report-failures).

Also: TAP tests for new functionality and changed documentation with new
examples.

Patches are attached. Any suggestions are welcome!

Sounds like a good idea. Please add to the next CommitFest and review
somebody else's patch in exchange for having your own patch reviewed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Robert Haas (#2)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Sounds like a good idea.

Thank you!

Please add to the next CommitFest

Done: https://commitfest.postgresql.org/14/1170/

and review
somebody else's patch in exchange for having your own patch reviewed.

Of course, I remember about it.

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#4 Andres Freund
andres@anarazel.de
In reply to: Marina Polyakova (#1)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hi,

On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:

Now in pgbench we can test only transactions with Read Committed isolation
level because client sessions are disconnected forever on serialization
failures. There were some proposals and discussions about it (see message
here [1] and thread here [2]).

I suggest a patch where pgbench client sessions are not disconnected because
of serialization or deadlock failures and these failures are mentioned in
reports.

I think that's a good idea and sorely needed.

In details:

- if there were these failures during script execution this "transaction" is
marked
appropriately in logs;
- numbers of "transactions" with these failures are printed in progress, in
aggregation logs and in the end with other results (all and for each
script);

I guess that'll include a 'rolled-back %' or 'retried %' somewhere?

Advanced options:
- mostly for testing built-in scripts: you can set the default transaction
isolation level by the appropriate benchmarking option (-I);

I'm less convinced of the need of that, you can already set arbitrary
connection options with
PGOPTIONS='-c default_transaction_isolation=serializable' pgbench

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

I can't quite parse that sentence, could you restate?

- Andres


#5 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Andres Freund (#4)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:

I suggest a patch where pgbench client sessions are not disconnected because
of serialization or deadlock failures and these failures are mentioned in
reports.

I think that's a good idea and sorely needed.

+1

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure. Most people who use serializable transactions (at
least in my experience) run though a framework that does that
automatically, regardless of what client code initiated the
transaction. These retries are generally hidden from the client
code -- it just looks like the transaction took a bit longer.
Sometimes people will have a limit on the number of retries. I
never used such a limit and never had a problem, because our
implementation of serializable transactions will not throw a
serialization failure error until one of the transactions involved
in causing it has successfully committed -- meaning that the retry
can only hit this again on a *new* set of transactions.

Essentially, the transaction should only count toward the TPS rate
when it eventually completes without a serialization failure.

Marina, did I understand you correctly?
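As a rough sketch (in Python purely for illustration; pgbench itself is C, and none of these names come from the patch), the framework-style behaviour Kevin describes, where retries are hidden from the client and only the final successful run would count toward TPS, could look like:

```python
class SerializationFailure(Exception):
    """Stand-in for SQLSTATE 40001 raised by the server."""

def run_transaction(txn):
    """Run txn(), retrying on serialization failure.

    The retries are hidden from the caller: the call just looks like it
    took a bit longer, and only the final successful execution would be
    counted toward a TPS figure.
    """
    tries = 0
    while True:
        tries += 1
        try:
            return txn(), tries
        except SerializationFailure:
            continue  # retry the whole transaction from the beginning

# A toy transaction that conflicts twice before succeeding.
state = {"attempt": 0}
def txn():
    state["attempt"] += 1
    if state["attempt"] < 3:
        raise SerializationFailure()
    return "committed"

result, tries = run_transaction(txn)
```

The caller sees a single successful "transaction" even though three attempts were made.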

--
Kevin Grittner
VMware vCenter Server
https://www.vmware.com/


#6 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Kevin Grittner (#5)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Kevin Grittner wrote:

On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

I think it's pretty obvious that transactions that failed with
some serializability problem should be retried.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#7 Thomas Munro
thomas.munro@gmail.com
In reply to: Alvaro Herrera (#6)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Fri, Jun 16, 2017 at 9:18 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Kevin Grittner wrote:

On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

I think it's pretty obvious that transactions that failed with
some serializability problem should be retried.

+1 for retry with reporting of retry rates

--
Thomas Munro
http://www.enterprisedb.com


#8 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Alvaro Herrera (#6)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Thu, Jun 15, 2017 at 4:18 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Kevin Grittner wrote:

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

I think it's pretty obvious that transactions that failed with
some serializability problem should be retried.

Agreed all around.

--
Kevin Grittner
VMware vCenter Server
https://www.vmware.com/


#9 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Andres Freund (#4)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hi,

Hello!

I think that's a good idea and sorely needed.

Thanks, I'm very glad to hear it!

- if there were these failures during script execution this
"transaction" is
marked
appropriately in logs;
- numbers of "transactions" with these failures are printed in
progress, in
aggregation logs and in the end with other results (all and for each
script);

I guess that'll include a 'rolled-back %' or 'retried %' somewhere?

Not exactly, see documentation:

+   If transaction has serialization / deadlock failure or them both 
(last thing
+   is possible if used script contains several transactions; see
+   <xref linkend="transactions-and-scripts"
+   endterm="transactions-and-scripts-title"> for more information), its
+   <replaceable>time</> will be reported as <literal>serialization 
failure</> /
+   <literal>deadlock failure</> /
+   <literal>serialization and deadlock failures</> appropriately.
+   Example with serialization, deadlock and both these failures:
+<screen>
+1 128 24968 0 1496759158 426984
+0 129 serialization failure 0 1496759158 427023
+3 129 serialization failure 0 1496759158 432662
+2 128 serialization failure 0 1496759158 432765
+0 130 deadlock failure 0 1496759159 460070
+1 129 serialization failure 0 1496759160 485188
+2 129 serialization and deadlock failures 0 1496759160 485339
+4 130 serialization failure 0 1496759160 485465
+</screen>

From later messages in this thread I understand that the most
interesting case is to retry the failed transaction. Do you think it's
better to write, for example, 'rolled-back after % retries
(serialization failure)' or 'time (retried % times, serialization and
deadlock failures)'?

Advanced options:
- mostly for testing built-in scripts: you can set the default
transaction
isolation level by the appropriate benchmarking option (-I);

I'm less convinced of the need of that, you can already set arbitrary
connection options with
PGOPTIONS='-c default_transaction_isolation=serializable' pgbench

Oh, thanks, I forgot about it =[

P.S. Does this use case (do not retry transaction with serialization
or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of
success...)?

I can't quite parse that sentence, could you restate?

Álvaro Herrera, later in this thread, understood my text correctly:

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

Has my text become clearer with his explanation?

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#10 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Kevin Grittner (#5)
Re: WIP Patch: Pgbench Serialization and deadlock errors

P.S. Does this use case (do not retry transaction with serialization
or
deadlock failure) is most interesting or failed transactions should
be
retried (and how much times if there seems to be no hope of
success...)?

I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure. Most people who use serializable transactions (at
least in my experience) run though a framework that does that
automatically, regardless of what client code initiated the
transaction. These retries are generally hidden from the client
code -- it just looks like the transaction took a bit longer.
Sometimes people will have a limit on the number of retries. I
never used such a limit and never had a problem, because our
implementation of serializable transactions will not throw a
serialization failure error until one of the transactions involved
in causing it has successfully committed -- meaning that the retry
can only hit this again on a *new* set of transactions.

Essentially, the transaction should only count toward the TPS rate
when it eventually completes without a serialization failure.

Marina, did I understand you correctly?

Álvaro Herrera, in the next message of this thread, understood my text
correctly:

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

And thank you very much for your explanation of how and why failed
transactions should be retried! I'll try to implement all of it.

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#11 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#6)
Re: WIP Patch: Pgbench Serialization and deadlock errors

P.S. Does this use case (do not retry transaction with serialization or
deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?

I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded. And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

Yes, I have meant this, thank you!

I think it's pretty obvious that transactions that failed with
some serializability problem should be retried.

Thank you for voting :)

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#12 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Marina Polyakova (#10)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Fri, Jun 16, 2017 at 5:31 AM, Marina Polyakova
<m.polyakova@postgrespro.ru> wrote:

And thank you very much for your explanation how and why transactions with
failures should be retried! I'll try to implement all of it.

To be clear, part of "retrying from the beginning" means that if a
result from one statement is used to determine the content (or
whether to run) a subsequent statement, that first statement must be
run in the new transaction and the results evaluated again to
determine what to use for the later statement. You can't simply
replay the statements that were run during the first try. For
examples, to help get a feel of why that is, see:

https://wiki.postgresql.org/wiki/SSI
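A toy sketch (my own illustration, not from the patch or the wiki page) of why simply replaying the first try's statements is wrong: the second "statement" depends on the first one's result, which can differ on retry.

```python
class SerializationFailure(Exception):
    """Stand-in for SQLSTATE 40001 raised by the server."""

# Toy single-row "database"; another session updates it between tries.
db = {"balance": 50, "conflict_pending": True}

def run_script(db):
    # Statement 1: read a value that determines the next statement.
    balance = db["balance"]
    # Statement 2: its content depends on statement 1's result.
    action = "withdraw" if balance >= 100 else "deposit"
    if db["conflict_pending"]:
        db["conflict_pending"] = False
        db["balance"] = 150           # the concurrent session commits...
        raise SerializationFailure()  # ...and our try is rolled back
    return action

while True:
    try:
        action = run_script(db)  # the retry re-evaluates statement 1
        break
    except SerializationFailure:
        pass

# Replaying the first try's statements would have chosen "deposit";
# re-running them against the new state correctly chooses "withdraw".
```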

--
Kevin Grittner
VMware vCenter Server
https://www.vmware.com/


#13 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Kevin Grittner (#12)
Re: WIP Patch: Pgbench Serialization and deadlock errors

To be clear, part of "retrying from the beginning" means that if a
result from one statement is used to determine the content (or
whether to run) a subsequent statement, that first statement must be
run in the new transaction and the results evaluated again to
determine what to use for the later statement. You can't simply
replay the statements that were run during the first try. For
examples, to help get a feel of why that is, see:

https://wiki.postgresql.org/wiki/SSI

Thank you again! :))

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#14 Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#1)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hello Marina,

A few comments about the submitted patches.

I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...

About the "retry" discussion: I agree that retry is the relevant option
from an application point of view.

ISTM that the retry implementation should be implemented somehow in the
automaton, restarting the same script from the beginning.

As pointed out in the discussion, the same values/commands should be
executed, which suggests that random generated values should be the same
on the retry runs, so that for a simple script the same operations are
attempted. This means that the random generator state must be kept &
reinstated for a client on retries. Currently the random state is in the
thread, which is not convenient for this purpose, so it should be moved in
the client so that it can be saved at transaction start and reinstated on
retries.
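The save-and-reinstate idea can be sketched like this (Python for brevity; pgbench itself is C and uses its own per-thread random state, so this is only an analogy):

```python
import random

rng = random.Random()  # stands in for a per-client random generator

def run_script_once(rng):
    # The script's random values, e.g. the account ids it touches.
    return [rng.randrange(1, 100000) for _ in range(3)]

# Save the generator state at transaction start...
saved_state = rng.getstate()
first_try = run_script_once(rng)

# ...and reinstate it on a retry, so the retried transaction draws
# exactly the same random values and attempts the same operations.
rng.setstate(saved_state)
retry = run_script_once(rng)
```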

The number of retries and maybe failures should be counted, maybe with
some adjustable maximum, as suggested.

About 0001:

In accumStats, just use one level of if; the two levels bring nothing.

In doLog, added columns should be at the end of the format. The number of
columns MUST NOT change when different issues arise, so that it works well
with cut/... unix commands, so inserting a sentence such as "serialization
and deadlock failures" is a bad idea.

threadRun: the point of the progress format is to fit on one not too wide
line on a terminal and to allow some simple automatic processing. Adding a
verbose sentence in the middle of it is not the way to go.

About tests: I do not understand why test 003 includes 2 transactions.
It would seem more logical to have two scripts.

About 0003:

I'm not sure that there should be an new option to report failures, the
information when relevant should be integrated in a clean format into the
existing reports... Maybe the "per command latency" report/option should
be renamed if it becomes more general.

About 0004:

The documentation must not be in a separate patch, but in the same patch
as their corresponding code.

--
Fabien.


#15 Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#14)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hello Marina,

Hello, Fabien!

A few comments about the submitted patches.

Thank you very much for them!

I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...

Could you explain a little more exactly what implications you are
worried about?

About the "retry" discussion: I agree that retry is the relevant
option from an application point of view.

I'm glad to hear it!

ISTM that the retry implementation should be implemented somehow in
the automaton, restarting the same script for the beginning.

If there are several transactions in this script - don't you think that
we should restart only the failed transaction?..

As pointed out in the discussion, the same values/commands should be
executed, which suggests that random generated values should be the
same on the retry runs, so that for a simple script the same
operations are attempted. This means that the random generator state
must be kept & reinstated for a client on retries. Currently the
random state is in the thread, which is not convenient for this
purpose, so it should be moved in the client so that it can be saved
at transaction start and reinstated on retries.

I think about it in the same way =)

The number of retries and maybe failures should be counted, maybe with
some adjustable maximum, as suggested.

If we fix the maximum number of attempts, the maximum number of failures
for one script execution is bounded above by
(number_of_transactions_in_script * maximum_number_of_attempts). Do you
think we should add an option to the program to limit this number
further?

About 0001:

In accumStats, just use one level if, the two levels bring nothing.

Thanks, I agree =[

In doLog, added columns should be at the end of the format.

I have inserted it earlier because these columns are not optional. Do
you think they should be optional?

The number
of column MUST NOT change when different issues arise, so that it
works well with cut/... unix commands, so inserting a sentence such as
"serialization and deadlock failures" is a bad idea.

Thanks, I agree again.

threadRun: the point of the progress format is to fit on one not too
wide line on a terminal and to allow some simple automatic processing.
Adding a verbose sentence in the middle of it is not the way to go.

I was thinking about it.. Thanks, I'll try to make it shorter.

About tests: I do not understand why test 003 includes 2 transactions.
It would seem more logical to have two scripts.

Ok!

About 0003:

I'm not sure that there should be an new option to report failures,
the information when relevant should be integrated in a clean format
into the existing reports... Maybe the "per command latency"
report/option should be renamed if it becomes more general.

I have tried not to change other parts of the program more than
necessary. But if you think that it would be more useful to change the
option, I'll do it.

About 0004:

The documentation must not be in a separate patch, but in the same
patch as their corresponding code.

Ok!

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#16 Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#15)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hello Marina,

I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...

Could you tell a little bit more exactly.. What implications are you worried
about?

The current error handling is either "close connection" or maybe in some
cases even "exit". If this is changed, then the client may continue
execution in some unforeseen state and behave unexpectedly. We'll see.

ISTM that the retry implementation should be implemented somehow in
the automaton, restarting the same script for the beginning.

If there are several transactions in this script - don't you think that we
should restart only the failed transaction?..

On some transaction failures based on their status. My point is that the
retry process must be implemented clearly with a new state in the client
automaton. Exactly when the transition to this new state must be taken is
another issue.

The number of retries and maybe failures should be counted, maybe with
some adjustable maximum, as suggested.

If we fix the maximum number of attempts the maximum number of failures for
one script execution will be bounded above (number_of_transactions_in_script
* maximum_number_of_attempts). Do you think we should make the option in
program to limit this number much more?

Probably not. I think that there should be a configurable maximum of
retries on a transaction, which may be 0 by default if we want to be
upward compatible with the current behavior, or maybe something else.

In doLog, added columns should be at the end of the format.

I have inserted it earlier because these columns are not optional. Do you
think they should be optional?

I think that new non-optional columns should be at the end of the
existing non-optional columns, so that existing scripts which process
the output do not need to be updated.
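Fabien's column-ordering argument in a nutshell: the sample old-format line below is taken from the documentation excerpt earlier in the thread, while the two appended failure-count columns are a hypothetical illustration, not the patch's actual format.

```python
# Existing per-transaction log line: client id, transaction count,
# latency (us), script no, epoch seconds, microseconds.
old_line = "1 128 24968 0 1496759158 426984"
# Hypothetical new columns (serialization and deadlock failure counts)
# appended at the end keep the old field positions valid.
new_line = "1 128 24968 0 1496759158 426984 1 0"

def latency_us(line):
    # An existing consumer that reads field 3 by position,
    # as `cut -d' ' -f3` would.
    return int(line.split()[2])
```

Appending columns leaves such positional consumers unaffected; inserting text in the middle of the line would break them.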

I'm not sure that there should be an new option to report failures,
the information when relevant should be integrated in a clean format
into the existing reports... Maybe the "per command latency"
report/option should be renamed if it becomes more general.

I have tried do not change other parts of program as much as possible. But if
you think that it will be more useful to change the option I'll do it.

I think that the option should change if its naming becomes less relevant,
which is to be determined. AFAICS, ISTM that new measures should be added
to the various existing reports unconditionally (i.e. without a new
option), so maybe no new option would be needed.

--
Fabien.


#17Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#16)
Re: WIP Patch: Pgbench Serialization and deadlock errors

The current error handling is either "close connection" or maybe in
some cases even "exit". If this is changed, then the client may
continue execution in some unforeseen state and behave unexpectedly.
We'll see.

Thanks, now I understand this.

ISTM that the retry implementation should be implemented somehow in
the automaton, restarting the same script from the beginning.

If there are several transactions in this script - don't you think
that we should restart only the failed transaction?..

On some transaction failures based on their status. My point is that
the retry process must be implemented clearly with a new state in the
client automaton. Exactly when the transition to this new state must
be taken is another issue.

About it, I agree with you that it should be done in this way.
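
The retry-as-automaton-state idea can be sketched as a toy model. This is purely illustrative: the state names below mimic pgbench's CSTATE_* naming style but are assumptions, not the patch's actual states, and the "failure" here is simulated rather than a real serialization or deadlock error.

```python
# Toy model of a pgbench-like client automaton with a dedicated retry state.
# CSTATE_* names are hypothetical, styled after pgbench's real automaton.
import random

CSTATE_START_TX, CSTATE_COMMAND, CSTATE_RETRY, CSTATE_END_TX = range(4)

def run_script(commands, max_attempts, fail_prob, rng):
    """Run one script; on a (simulated) serialization/deadlock failure,
    roll back and restart the transaction from the beginning, up to
    max_attempts attempts in total."""
    attempts = 0
    state = CSTATE_START_TX
    while True:
        if state == CSTATE_START_TX:
            attempts += 1
            state = CSTATE_COMMAND
        elif state == CSTATE_COMMAND:
            # Each command may fail with probability fail_prob.
            failed = any(rng.random() < fail_prob for _ in commands)
            state = CSTATE_RETRY if failed else CSTATE_END_TX
        elif state == CSTATE_RETRY:
            # The transaction was rolled back; retry only if attempts remain.
            if attempts < max_attempts:
                state = CSTATE_START_TX
            else:
                return ("failed", attempts)
        elif state == CSTATE_END_TX:
            return ("committed", attempts)

rng = random.Random(42)
print(run_script(["UPDATE a", "UPDATE b"], max_attempts=3,
                 fail_prob=0.0, rng=rng))  # -> ('committed', 1)
```

The point of the separate CSTATE_RETRY state is that the "retry or give up" decision lives in exactly one place in the automaton, while the transition *into* that state (i.e. which errors qualify) remains a separate question, as discussed above.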

The number of retries and maybe failures should be counted, maybe
with
some adjustable maximum, as suggested.

If we fix the maximum number of attempts, the maximum number of
failures for one script execution is bounded above by
(number_of_transactions_in_script * maximum_number_of_attempts). Do
you think we should add an option to the program to limit this number
further?

Probably not. I think that there should be a configurable maximum of
retries on a transaction, which may be 0 by default if we want to be
upward compatible with the current behavior, or maybe something else.

I propose the option --max-attempts-number=NUM, where NUM cannot be less
than 1. I propose it because I think that, for example,
--max-attempts-number=100 is better than --max-retries-number=99. And
maybe it's better to set its default value to 1 too, because retrying of
shell commands can produce new errors..
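
The relationship between the two proposed countings, and the failure bound quoted above, come down to simple arithmetic. The option names are the ones proposed in this thread and are not (yet) real pgbench options:

```python
# Attempts vs. retries, and the upper bound on failures per script run.
# Pure arithmetic; option names (--max-attempts-number, --max-retries-number)
# are proposals from this thread, not existing pgbench flags.

def failures_upper_bound(n_transactions_in_script, max_attempts):
    # Each transaction in the script can fail at most once per attempt.
    return n_transactions_in_script * max_attempts

# --max-attempts-number=N is equivalent to at most N-1 retries:
max_attempts = 100
max_retries = max_attempts - 1
assert max_retries == 99

# With the proposed default of 1 attempt, no retries happen at all,
# which preserves the current pgbench behavior.
assert failures_upper_bound(n_transactions_in_script=3, max_attempts=1) == 3
```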

In doLog, added columns should be at the end of the format.

I have inserted them earlier because these columns are not optional. Do
you think they should be optional?

I think that the new non-optional columns should go after the existing
non-optional columns, so that existing scripts which process the output
do not need to be updated.

Thanks, I agree with you :)
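
A quick sketch of why appending matters for doLog. The leading six fields below follow pgbench's documented per-transaction log format (client_id, transaction_no, latency, script_no, time_epoch, time_us); the trailing "retries" column is an assumption standing in for whatever columns the patch adds.

```python
# Appending a new non-optional column (here a hypothetical "retries" count)
# after pgbench's existing per-transaction log fields, so that parsers
# reading only the leading fields keep working unchanged.

def format_log_line(client_id, transaction_no, latency_us, script_no,
                    time_epoch, time_us, retries):
    base = f"{client_id} {transaction_no} {latency_us} {script_no} " \
           f"{time_epoch} {time_us}"
    return f"{base} {retries}"  # new column goes at the end

line = format_log_line(0, 1, 1467, 0, 1500000000, 123456, retries=2)
fields = line.split()

# An old parser that only reads the first fields is unaffected:
assert (int(fields[0]), int(fields[1])) == (0, 1)
# New readers find the added column at the end:
assert fields[-1] == "2"
```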

I'm not sure that there should be a new option to report failures;
the information, when relevant, should be integrated in a clean format
into the existing reports... Maybe the "per command latency"
report/option should be renamed if it becomes more general.

I have tried not to change other parts of the program as much as possible.
But if you think it would be more useful to change the option, I'll
do it.

I think that the option should change if its naming becomes less
relevant, which is to be determined. AFAICS, ISTM that new measures
should be added to the various existing reports unconditionally (i.e.
without a new option), so maybe no new option would be needed.

Thanks! I didn't think about it in this way..

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#18Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#17)
Re: WIP Patch: Pgbench Serialization and deadlock errors

The number of retries and maybe failures should be counted, maybe with
some adjustable maximum, as suggested.

If we fix the maximum number of attempts, the maximum number of failures
for one script execution is bounded above by
(number_of_transactions_in_script * maximum_number_of_attempts). Do you
think we should add an option to the program to limit this number further?

Probably not. I think that there should be a configurable maximum of
retries on a transaction, which may be 0 by default if we want to be
upward compatible with the current behavior, or maybe something else.

I propose the option --max-attempts-number=NUM, where NUM cannot be less than
1. I propose it because I think that, for example, --max-attempts-number=100
is better than --max-retries-number=99. And maybe it's better to set its
default value to 1 too, because retrying of shell commands can produce new
errors..

Personally, I like counting retries because it also counts the number of
times the transaction actually failed for some reason. But this is a
marginal preference, and one can be switched to the other easily.

--
Fabien.

#19Alexander Korotkov
aekorotkov@gmail.com
In reply to: Andres Freund (#4)
Re: WIP Patch: Pgbench Serialization and deadlock errors

On Thu, Jun 15, 2017 at 10:16 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:

Advanced options:
- mostly for testing built-in scripts: you can set the default transaction
isolation level by the appropriate benchmarking option (-I);

I'm less convinced of the need of that; you can already set arbitrary
connection options with
PGOPTIONS='-c default_transaction_isolation=serializable' pgbench

Right, there is already a way to specify the default isolation level using
environment variables.
However, once we make pgbench work with various isolation levels, users may
want to run pgbench multiple times in a row with different isolation
levels. A command line option would be very convenient in this case.
In addition, the isolation level is a vital parameter for interpreting
benchmark results correctly. Often, graphs with pgbench results are titled
with the pgbench command line. Having the isolation level specified in the
command line would naturally fit into this titling scheme.
Of course, this is solely a usability question, and it's fair enough to live
without such a command line option. But I'm +1 to add this option.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#20Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#18)
Re: WIP Patch: Pgbench Serialization and deadlock errors

Hello everyone!

Here is the second version of my patch for pgbench. Now transactions
with serialization and deadlock failures are rolled back and retried
until they end successfully or their number of attempts reaches the maximum.

In details:
- You can set the maximum number of attempts by the appropriate
benchmarking option (--max-attempts-number). Its default value is 1
partly because retrying of shell commands can produce new errors.
- Statistics of attempts and failures are printed in progress reports, in
transaction / aggregation logs, and at the end with the other results
(overall and for each script). A transaction failure is reported here only
if the last retry of this transaction fails.
- Also, failures and average numbers of transaction attempts are printed
per command with the average latencies if you use the appropriate
benchmarking option (--report-per-command, -r) (it replaces the option
--report-latencies, as I was advised here [1]). Average numbers of
transaction attempts are printed only for commands which start
transactions.
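
The reporting rule above ("a failure is reported only if the last retry fails") can be sketched as follows. The function name and return shape are illustrative, not the patch's API:

```python
# Sketch of the described reporting rule: a transaction counts as a
# reported failure only when its final attempt fails; earlier failed
# attempts are counted as retries. Names here are illustrative only.

def classify(attempt_results, max_attempts):
    """attempt_results: outcomes of successive attempts, e.g. ['fail', 'ok'].
    Returns (committed, retries, reported_failure)."""
    assert 1 <= len(attempt_results) <= max_attempts
    retries = len(attempt_results) - 1
    committed = attempt_results[-1] == "ok"
    return committed, retries, not committed

# Succeeds on the second attempt: one retry, no reported failure.
assert classify(["fail", "ok"], max_attempts=3) == (True, 1, False)
# Exhausts all attempts: reported as a failure, with two retries.
assert classify(["fail", "fail", "fail"], max_attempts=3) == (False, 2, True)
```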

As usual: TAP tests for new functionality and changed documentation with
new examples.

Patch is attached. Any suggestions are welcome!

[1]: /messages/by-id/alpine.DEB.2.20.1707031321370.3419@lancre

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

v2-0001-Pgbench-Retry-transactions-with-serialization-or-.patch (text/x-diff, +1421 −139)
#21Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#20)
#22Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Fabien COELHO (#21)
#23Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#21)
#24Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#22)
#25Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#23)
#26Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#24)
#27Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#25)
#28Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#26)
#29Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#27)
#30Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#28)
#31Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#29)
#32Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#30)
#33Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#32)
#34Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#31)
#35Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#33)
#36Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#34)
#37Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Marina Polyakova (#36)
#38Andres Freund
andres@anarazel.de
In reply to: Marina Polyakova (#37)
#39Alexander Korotkov
aekorotkov@gmail.com
In reply to: Andres Freund (#38)
#40Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Andres Freund (#38)
#41Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#37)
#42Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#41)
#43Teodor Sigaev
teodor@sigaev.ru
In reply to: Marina Polyakova (#1)
#44Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Teodor Sigaev (#43)
#45Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#44)
#46Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#45)
#47Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#46)
#48Teodor Sigaev
teodor@sigaev.ru
In reply to: Marina Polyakova (#47)
#49Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Teodor Sigaev (#48)
#50Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#49)
#51Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Marina Polyakova (#50)
#52Ildus Kurbangaliev
i.kurbangaliev@postgrespro.ru
In reply to: Marina Polyakova (#51)
#53Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Ildus Kurbangaliev (#52)
#54Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#53)
#55Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Fabien COELHO (#54)
#56Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Alvaro Herrera (#55)
#57Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Fabien COELHO (#56)
#58Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#57)
#59Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#58)
#60Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#58)
#61Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#59)
#62Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#60)
#63Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#58)
#64Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#63)
#65Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#64)
#66Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Fabien COELHO (#65)
#67Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#66)
#68Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#65)
#69Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Alvaro Herrera (#66)
#70Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#58)
#71Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#70)
#72Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#71)
#73Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#72)
#74Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Marina Polyakova (#73)
#75Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Marina Polyakova (#58)
#76Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#73)
#77Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#74)
#78Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#75)
#79Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#76)
#80Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#79)
#81Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#80)
#82Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#81)
#83Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#82)
#84Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#81)
#85Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#84)
#86Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#85)
#87Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#86)
#88Arthur Zakirov
a.zakirov@postgrespro.ru
In reply to: Marina Polyakova (#85)
#89Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Arthur Zakirov (#88)
#90Arthur Zakirov
a.zakirov@postgrespro.ru
In reply to: Marina Polyakova (#89)
#91Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Arthur Zakirov (#90)
#92Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#81)
#93Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Fabien COELHO (#92)
#94Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#93)
#95Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#81)
#96Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#95)
#97Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#96)
#98Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#97)
#99Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#98)
#100Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#99)
#101Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Marina Polyakova (#100)
#102Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#101)
#103Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#101)
#104Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#102)
#105Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#103)
#106Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Marina Polyakova (#105)
#107Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#105)
#108Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#107)
#109Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Marina Polyakova (#108)
#110Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Fabien COELHO (#109)
#111Michael Paquier
michael@paquier.xyz
In reply to: Marina Polyakova (#110)
#112Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Marina Polyakova (#101)
#113Marina Polyakova
m.polyakova@postgrespro.ru
In reply to: Alvaro Herrera (#112)
#114Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Marina Polyakova (#113)
#115Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Alvaro Herrera (#114)
#116Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Fabien COELHO (#115)
#117Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Alvaro Herrera (#116)
#118Thomas Munro
thomas.munro@gmail.com
In reply to: Marina Polyakova (#113)
#119Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Thomas Munro (#118)
#120Thomas Munro
thomas.munro@gmail.com
In reply to: Fabien COELHO (#119)
#121Yugo Nagata
nagata@sraoss.co.jp
In reply to: Thomas Munro (#120)
#122Yugo Nagata
nagata@sraoss.co.jp
In reply to: Yugo Nagata (#121)
#123Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#122)
#124Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#123)
#125Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#122)
#126Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#125)
#127Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#126)
#128Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#122)
#129Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#128)
#130Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#129)
#131Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#129)
#132Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#131)
#133Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#132)
#134Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#132)
#135Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#133)
#136Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#135)
#137Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#136)
#138Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#137)
#139Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#138)
#140Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#130)
#141Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#140)
#142Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#139)
#143Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Fabien COELHO (#142)
#144Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#143)
#145Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#144)
#146Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#145)
#147Yugo Nagata
nagata@sraoss.co.jp
In reply to: Yugo Nagata (#146)
#148Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#147)
#149Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#148)
#150Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#149)
#151Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#150)
#152Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Yugo Nagata (#149)
#153Yugo Nagata
nagata@sraoss.co.jp
In reply to: Fabien COELHO (#152)
#154Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#153)
#155Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#154)
#156Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#155)
#157Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#154)
#158Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#156)
#159Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#158)
#160Yugo Nagata
nagata@sraoss.co.jp
In reply to: Yugo Nagata (#157)
#161Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#160)
#162Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#161)
#163Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#162)
#164Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tom Lane (#163)
#165Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#164)
#166Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#165)
#167Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#166)
#168Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#167)
#169Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#168)
#170Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#169)
#171Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#169)
#172Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#170)
#173Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#171)
#174Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#173)
#175Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#174)
#176Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#175)
#177Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#176)
#178Yugo Nagata
nagata@sraoss.co.jp
In reply to: Tatsuo Ishii (#177)
#179Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Yugo Nagata (#178)
#180Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#179)
#181Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tatsuo Ishii (#175)
#182Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#181)
#183Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Alvaro Herrera (#181)
#184Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#182)
#185Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#184)
#186Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#184)
#187Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#186)
#188Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Fabien COELHO (#187)
#189Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Fabien COELHO (#187)
#190Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Alvaro Herrera (#189)
#191Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#189)
#192Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#190)
#193Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#192)
#194Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Fabien COELHO (#193)
#195Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tatsuo Ishii (#194)
#196Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#195)
#197Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#196)
#198Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tom Lane (#196)
#199Tom Lane
tgl@sss.pgh.pa.us
In reply to: Fabien COELHO (#198)
#200Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Fabien COELHO (#198)
#201Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#200)
#202Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Fabien COELHO (#201)
#203Fabien COELHO
coelho@cri.ensmp.fr
In reply to: Tatsuo Ishii (#202)