BUG #6497: Error sent to client, but data written anyway
The following bug has been logged on the website:
Bug reference: 6497
Logged by: Ryan Lowe
Email address: rlowe@pablowe.net
PostgreSQL version: 9.1.3
Operating system: Linux
Description:
There is an edge case where Postgres will return an error to the client, but
commit the operation anyway, thus breaking the contract with the client:
postgres=# begin; insert into s.t (c) values (1);
<postgres process core dumps (note that the forked processes were not
affected)>
postgres=# commit;
<this returns an error to the application>
<postgres process restarts>
The data is now on both the master and slave, but the application
believes that the transaction was rolled back. A well-behaved
application will dutifully retry the transaction because it *should*
have rolled back. However, the data is there so we'll have had the
transaction execute *twice*.
rlowe@pablowe.net writes:
> There is an edge case where Postgres will return an error to the client, but
> commit the operation anyway, thus breaking the contract with the client:
> postgres=# begin; insert into s.t (c) values (1);
> <postgres process core dumps (note that the forked processes were not
> affected)>
> postgres=# commit;
> <this returns an error to the application>
> <postgres process restarts>
> The data is now on both the master and slave, but the application
> believes that the transaction was rolled back. A well-behaved
> application will dutifully retry the transaction because it *should*
> have rolled back. However, the data is there so we'll have had the
> transaction execute *twice*.
Huh? If the backend dumped core before you sent it the commit, the
data will certainly not be committed. It might be physically present on
disk, but it won't be considered valid.
regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> rlowe@pablowe.net writes:
>> There is an edge case where Postgres will return an error to the
>> client, but commit the operation anyway, thus breaking the
>> contract with the client:
>> postgres=# begin; insert into s.t (c) values (1);
>> <postgres process core dumps (note that the forked processes were
>> not affected)>
>> postgres=# commit;
>> <this returns an error to the application>
>> <postgres process restarts>
>> The data is now on both the master and slave, but the application
>> believes that the transaction was rolled back. A well-behaved
>> application will dutifully retry the transaction because it
>> *should* have rolled back. However, the data is there so we'll
>> have had the transaction execute *twice*.
>
> Huh? If the backend dumped core before you sent it the commit,
> the data will certainly not be committed. It might be physically
> present on disk, but it won't be considered valid.
I suppose that there is a window of time between the commit becoming
effective and the return to the application which, from a user
perspective, would be hard to distinguish from what the OP
described. I don't really see how that can be avoided, though,
short of a transaction manager using 2PC. Any time the server
crashes while a COMMIT is pending, one must check to see whether it
"took".
As long as the client application sees the connection as dead in
this situation, I think PostgreSQL is doing everything just as it
should.
-Kevin
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Huh? If the backend dumped core before you sent it the commit,
>> the data will certainly not be committed. It might be physically
>> present on disk, but it won't be considered valid.
>
> I suppose that there is a window of time between the commit becoming
> effective and the return to the application which, from a user
> perspective, would be hard to distinguish from what the OP
> described. I don't really see how that can be avoided, though,
> short of a transaction manager using 2PC. Any time the server
> crashes while a COMMIT is pending, one must check to see whether it
> "took".
Yeah, a post-commit crash leaves you in doubt about whether the commit
"took", but you get into byzantine-generals issues as soon as you try to
design that out. Consider the possibility that the commit went fine but
the network eats the reply packet saying so. Really, any application
that is constructed to reconnect and retry has got to have logic to test
whether the commit occurred, not just assume that it did or didn't.
(2PC is hardly a magic bullet for this, either.)
regards, tom lane
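Tom's point that a reconnect-and-retry client must explicitly test whether the commit occurred can be sketched as follows. This is a minimal illustration in plain Python, not PostgreSQL or libpq code: the database calls are stubbed out as callables, and the marker scheme (writing a client-generated token inside the transaction, then probing for it after reconnecting) and all names are invented for illustration.

```python
import uuid
from enum import Enum
from typing import Callable

class Outcome(Enum):
    COMMITTED = "committed"
    ROLLED_BACK = "rolled back"

def run_with_verification(
    do_txn: Callable[[str], None],        # runs the txn (writing the marker) and commits
    marker_exists: Callable[[str], bool], # after reconnecting, probes for the marker
) -> Outcome:
    """Run a transaction once; if the connection dies around COMMIT,
    reconnect and probe for the marker instead of blindly retrying."""
    marker = str(uuid.uuid4())  # client-generated token written inside the transaction
    try:
        do_txn(marker)
        return Outcome.COMMITTED
    except ConnectionError:
        # The COMMIT reply was lost; the only safe move is to look for the marker.
        return Outcome.COMMITTED if marker_exists(marker) else Outcome.ROLLED_BACK

# Stub of the failure mode Tom describes: the commit succeeded on the
# server, but the network ate the reply packet saying so.
written = set()
def flaky_txn(marker: str) -> None:
    written.add(marker)  # server applied and committed the transaction ...
    raise ConnectionError("connection lost before COMMIT reply")  # ... reply lost

print(run_with_verification(flaky_txn, lambda m: m in written))  # Outcome.COMMITTED
```

In a real client the marker would be a row in a dedicated table written inside the same transaction, so its visibility after reconnecting tells you whether the commit took.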
Thanks for all the responses, but I think I'm being unclear here. Let's
walk through this case step-by-step. I start with a happy instance of
Postgres:
<pre>
% pstree -anpcul postgres
postgres,1944 -D /data/pgsql # Main process
├─postgres,1948 # Logger process
├─postgres,1950 # Writer process
├─postgres,1952 # WAL writer process
├─postgres,1953 # Auto vacuum launcher process
├─postgres,1954 # Archiver process
├─postgres,1955 # Stats collector process
└─postgres,1963 # WAL sender process
</pre>
(formatted for readability)
To mimic the failure scenario I am describing, let's start by creating a
table and opening a transaction:
<pre>
postgres=# CREATE TABLE test.t1 (c1 int);
postgres=# BEGIN; INSERT INTO test.t1 (c1) VALUES (1);
</pre>
At this point (in another shell), let's mimic a crash of the main process:
<pre>
% kill -9 1944 # Note this was postgres -D /data/pgsql from the pstree above
</pre>
Now let's see what is running:
<pre>
% pstree -anpcul postgres
postgres,1948 # logger process
postgres,1950 # writer process
postgres,1955 # stats collector process
</pre>
The processes have all been adopted by init (expected). If we go back to
the original psql session after postgres is in the above state and complete
the transaction, this is what happens:
<pre>
postgres=# COMMIT;
The connection to the server was lost. Attempting reset: Failed.
</pre>
Hooray. We've received an error because the transaction couldn't be
committed, right? Let's verify:
<pre>
% service postgresql-9.1 start
% psql
postgres=# SELECT * FROM test.t1;
c1
------
1
(1 row)
</pre>
Clearly this is not correct behavior. There has been talk in this thread
that the application should simply always try and validate that its
transactions have in fact failed, but that is not a feasible solution (for
many reasons). Thoughts?
-- Ryan Lowe
On Tue, Feb 28, 2012 at 2:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Huh? If the backend dumped core before you sent it the commit,
>>> the data will certainly not be committed. It might be physically
>>> present on disk, but it won't be considered valid.
>>
>> I suppose that there is a window of time between the commit becoming
>> effective and the return to the application which, from a user
>> perspective, would be hard to distinguish from what the OP
>> described. I don't really see how that can be avoided, though,
>> short of a transaction manager using 2PC. Any time the server
>> crashes while a COMMIT is pending, one must check to see whether it
>> "took".
>
> Yeah, a post-commit crash leaves you in doubt about whether the commit
> "took", but you get into byzantine-generals issues as soon as you try to
> design that out. Consider the possibility that the commit went fine but
> the network eats the reply packet saying so. Really, any application
> that is constructed to reconnect and retry has got to have logic to test
> whether the commit occurred, not just assume that it did or didn't.
> (2PC is hardly a magic bullet for this, either.)
>
> regards, tom lane
Ryan Lowe <rlowe@pablowe.net> writes:
> Thanks for all the responses, but I think I'm being unclear here. Let's
> walk through this case step-by-step. I start with a happy instance of
> Postgres:
This example does not have anything to do with the actual behavior of
Postgres, at least not on any system I know about. Killing the
postmaster does not cause child backends to die, and it certainly
doesn't cause them to commit open transactions and then die.
The system would be quite seriously broken if it acted as you describe.
I tested this, just to see if somebody had broken it when I wasn't
looking. The behavior that I see here is:
1. Killing the postmaster doesn't affect open transactions.
2. Killing the specific backend prevents the transaction from committing.
Which is what I expected.
> ... There has been talk in this thread
> that the application should simply always try and validate that its
> transactions have in fact failed, but that is not a feasible solution (for
> many reasons). Thoughts?
You might wish to believe that you can ignore the problem, but you can't.
No matter what Postgres does or doesn't do, external issues such as
network failures can create the problem of a transaction possibly being
committed while the client remains in doubt whether it happened or not.
regards, tom lane
On Feb 29, 2012, at 9:26 AM, Ryan Lowe wrote:
> Thanks for all the responses, but I think I'm being unclear here.
The point Tom was making is that this is indistinguishable from the scenario:
1. Client sends commit.
2. Server successfully commits data.
3. Server starts to write response.
4. Network fails.
5. Client receives abnormal disconnection response from its network connection to the server.
--
-- Christophe Pettus
xof@thebuild.com
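Christophe's five steps imply a simple rule for the client side: once COMMIT has been sent, an abnormal disconnect must be treated as an unknown outcome, never as a rollback. A minimal sketch of that rule (the `Phase` names are invented for illustration):

```python
from enum import Enum, auto

class Phase(Enum):
    IN_TXN = auto()       # statements sent, COMMIT not yet issued
    COMMIT_SENT = auto()  # COMMIT issued, no reply received

def outcome_on_disconnect(phase: Phase) -> str:
    """What a client may safely conclude when its connection drops."""
    if phase is Phase.IN_TXN:
        # No COMMIT was ever sent, so the server cannot have committed.
        return "rolled back"
    # The commit may have succeeded just before the network failed.
    return "unknown: reconnect and verify"

print(outcome_on_disconnect(Phase.COMMIT_SENT))  # unknown: reconnect and verify
```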
Christophe Pettus <xof@thebuild.com> writes:
> On Feb 29, 2012, at 9:26 AM, Ryan Lowe wrote:
>> Thanks for all the responses, but I think I'm being unclear here.
>
> The point Tom was making is that this is indistinguishable from the scenario:
> 1. Client sends commit.
> 2. Server successfully commits data.
> 3. Server starts to write response.
> 4. Network fails.
> 5. Client receives abnormal disconnection response from its network connection to the server.
Well, that's the argument as to why the client code has to be capable of
checking whether the commit happened if it's going to attempt a
reconnect and retry. But there's a quite separate question as to
whether the behavior Ryan is claiming for a pre-commit crash is actually
possible. I don't believe it, and I failed to reproduce his test
scenario.
regards, tom lane
On Feb 29, 2012, at 1:15 PM, Tom Lane wrote:
> But there's a quite separate question as to
> whether the behavior Ryan is claiming for a pre-commit crash is actually
> possible. I don't believe it, and I failed to reproduce his test
> scenario.
Did you check it with killing the postmaster (as Ryan was) as opposed to the backend?
--
-- Christophe Pettus
xof@thebuild.com
Christophe Pettus <xof@thebuild.com> writes:
> On Feb 29, 2012, at 1:15 PM, Tom Lane wrote:
>> But there's a quite separate question as to
>> whether the behavior Ryan is claiming for a pre-commit crash is actually
>> possible. I don't believe it, and I failed to reproduce his test
>> scenario.
>
> Did you check it with killing the postmaster (as Ryan was) as opposed to the backend?
I tried both, and got the expected (though different) results both
ways. See my previous response.
regards, tom lane
> You might wish to believe that you can ignore the problem, but you can't.
> No matter what Postgres does or doesn't do, external issues such as
> network failures can create the problem of a transaction possibly being
> committed while the client remains in doubt whether it happened or not.
I think this is a well-known issue, and the only solution is a
transaction system covering the communication path. Since no such
transaction system exists in PostgreSQL, clients should be very
careful when trying to resend DML, as you already pointed out.
FYI, pgpool-II disconnects the connection to PostgreSQL upon failure,
including network problems, rather than trying to resend the packet to
PostgreSQL, for the reason stated above.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
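Tatsuo's description of pgpool-II's policy, dropping the connection rather than resending, can be sketched in miniature. The class and names below are invented for illustration and are not pgpool-II code: the point is only that a failure during COMMIT closes the connection and surfaces an in-doubt error instead of retrying.

```python
class CommitInDoubtError(Exception):
    """Raised instead of retrying when a COMMIT's fate is unknown."""

class Session:
    """Toy stand-in for a pooled connection (invented, not pgpool-II API)."""
    def __init__(self, send):
        self._send = send   # callable that ships one protocol message
        self.closed = False

    def commit(self) -> None:
        try:
            self._send("COMMIT")
        except ConnectionError as exc:
            # Mirror pgpool-II's policy: drop the connection, never resend,
            # and surface the in-doubt state to the application.
            self.closed = True
            raise CommitInDoubtError(
                "commit outcome unknown; connection dropped") from exc

def dead_link(msg: str) -> None:
    raise ConnectionError("backend went away")

s = Session(dead_link)
try:
    s.commit()
except CommitInDoubtError as e:
    print(e)  # commit outcome unknown; connection dropped
```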