bigserial continuity safety

Started by Pawel Veselov · 4 messages · general
#1 Pawel Veselov
pawel.veselov@gmail.com

Hi.

If I have a table created as:

CREATE TABLE xq_agr (
    id BIGSERIAL PRIMARY KEY,
    node text not null
);

which multiple applications insert into. The applications never
explicitly specify the value for 'id'.
Is it safe to, on a single connection, do:

- open transaction (default transaction isolation)
- Open cursor for select * from xq_agr order by id asc
- do something with current record
- advance the cursor (and repeat something), but stop at some point (id =
LAST_ID), and
- delete from xq_agr where id <= LAST_ID;
- commit

"safe to" means - whether the cursor will not miss any records that were
deleted at the end.
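
In SQL terms, the flow is roughly this (LAST_ID is just a placeholder for
whatever id I stop at):

BEGIN;  -- default isolation, i.e. READ COMMITTED
DECLARE agr_cur CURSOR FOR
    SELECT * FROM xq_agr ORDER BY id ASC;
FETCH NEXT FROM agr_cur;
-- ... process the row, keep fetching, stop at some id = LAST_ID ...
DELETE FROM xq_agr WHERE id <= LAST_ID;
COMMIT;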

I'm suspecting that depending on the commit order, I may have situations
when:
- TX1 insert ID 1
- TX2 insert ID 2
- TX2 commits
- TX3 scans 2
- TX1 commits
- TX3 deletes <= 2
- record ID1 is deleted, but never processed.

Is that sequence of events as listed above possible? If yes, is there a
transaction isolation I can use to avoid that?

Table and sequence definition, as present in the DB:

db=> \d+ xq_agr_id_seq
          Sequence "public.xq_agr_id_seq"
    Column     |  Type   |        Value        | Storage
---------------+---------+---------------------+---------
 sequence_name | name    | xq_agr_id_seq       | plain
 last_value    | bigint  | 139898829           | plain
 start_value   | bigint  | 1                   | plain
 increment_by  | bigint  | 1                   | plain
 max_value     | bigint  | 9223372036854775807 | plain
 min_value     | bigint  | 1                   | plain
 cache_value   | bigint  | 1                   | plain
 log_cnt       | bigint  | 27                  | plain
 is_cycled     | boolean | f                   | plain
 is_called     | boolean | t                   | plain

db=> \d xq_agr
                      Table "public.xq_agr"
 Column |  Type  |                      Modifiers
--------+--------+-----------------------------------------------------
 id     | bigint | not null default nextval('xq_agr_id_seq'::regclass)
 node   | text   | not null
Indexes:
    "xq_agr_pkey" PRIMARY KEY, btree (id)

#2 David G. Johnston
david.g.johnston@gmail.com
In reply to: Pawel Veselov (#1)
Re: bigserial continuity safety

On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov <pawel.veselov@gmail.com>
wrote:

> Hi.
>
> If I have a table created as:
>
> CREATE TABLE xq_agr (
>     id BIGSERIAL PRIMARY KEY,
>     node text not null
> );
>
> which multiple applications insert into. The applications never
> explicitly specify the value for 'id'.
> Is it safe to, on a single connection, do:
>
> - open transaction (default transaction isolation)
> - Open cursor for select * from xq_agr order by id asc
> - do something with current record
> - advance the cursor (and repeat something), but stop at some point
>   (id = LAST_ID), and
> - delete from xq_agr where id <= LAST_ID;
> - commit
>
> "Safe to" means the cursor will not miss any records that end up being
> deleted.
>
> I'm suspecting that depending on the commit order, I may have situations
> when:
> - TX1 insert ID 1
> - TX2 insert ID 2
> - TX2 commits
> - TX3 scans 2
> - TX1 commits
> - TX3 deletes <= 2
> - record ID1 is deleted, but never processed.

Going to ignore the MVCC question for the moment and describe a better
"state transition mechanism" to consider.

pending -> active -> completed

If you ensure you never delete (i.e., transition to completed) something
that isn't active then you can never delete an item in pending.

Limit the locking to the state transitions only.

The downside is the need to deal with "active" items that have been
abandoned by whatever process marked them active.
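
A minimal sketch of the shape I mean (the status column and its values
are illustrative, not something your schema has today):

-- add a state to the queue rows; everything starts out pending
ALTER TABLE xq_agr ADD COLUMN status text NOT NULL DEFAULT 'pending';

-- pending -> active: claim a batch; only these rows get locked
UPDATE xq_agr
   SET status = 'active'
 WHERE id IN (SELECT id FROM xq_agr
               WHERE status = 'pending'
               ORDER BY id
               LIMIT 10
               FOR UPDATE)
RETURNING *;

-- active -> completed (here, a delete): never touches pending rows
DELETE FROM xq_agr
 WHERE id = $1            -- $1 = the row this worker just finished
   AND status = 'active';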

Back to your question: you should probably not use "<=" in your where
clause. However, in READ COMMITTED TX3 cannot see ID1 since the snapshot
it took out was created before TX1 committed. I am not fluent enough to
work through the entire scenario in my head. I'd suggest you actually open
up 3 psql sessions and play with them to see how things really behave.
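
For example, something along these lines (one block per session; ids
assumed to start from 1 on an empty table):

-- Session 1:
BEGIN;
INSERT INTO xq_agr (node) VALUES ('n1');  -- allocates id 1, uncommitted

-- Session 2:
BEGIN;
INSERT INTO xq_agr (node) VALUES ('n2');  -- allocates id 2
COMMIT;

-- Session 3: only id 2 is visible at this point
BEGIN;
SELECT * FROM xq_agr ORDER BY id ASC;

-- Session 1:
COMMIT;                                   -- id 1 only now becomes visible

-- Session 3: removes id 1 as well, even though it was never scanned
DELETE FROM xq_agr WHERE id <= 2;
COMMIT;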

For me, a simple "SELECT FOR UPDATE / UPDATE WHERE" command in a function
solves the problem at small scale with minimal performance degradation.
The transition from "pending" to "active" is effectively serialized and the
transition from "active" to "completed" only occurs when the process has
been performed, and it is not possible to have two clients simultaneously
processing the same work.
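
In rough outline (names and the status column are illustrative; this is
a sketch rather than my actual function):

CREATE OR REPLACE FUNCTION claim_next_job() RETURNS bigint AS $$
DECLARE
    v_id bigint;
BEGIN
    -- competing workers queue up on the row lock taken here
    SELECT id INTO v_id
      FROM xq_agr
     WHERE status = 'pending'
     ORDER BY id
     LIMIT 1
     FOR UPDATE;

    IF v_id IS NOT NULL THEN
        UPDATE xq_agr SET status = 'active' WHERE id = v_id;
    END IF;
    RETURN v_id;  -- NULL means nothing was pending
END;
$$ LANGUAGE plpgsql;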

David J.

#3 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: David G. Johnston (#2)
Re: bigserial continuity safety

On 4/13/15 7:45 PM, David G. Johnston wrote:

> On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov
> <pawel.veselov@gmail.com> wrote:

>> Hi.
>>
>> If I have a table created as:
>>
>> CREATE TABLE xq_agr (
>>     id BIGSERIAL PRIMARY KEY,
>>     node text not null
>> );
>>
>> which multiple applications insert into. The applications never
>> explicitly specify the value for 'id'.
>> Is it safe to, on a single connection, do:
>>
>> - open transaction (default transaction isolation)
>> - Open cursor for select * from xq_agr order by id asc
>> - do something with current record
>> - advance the cursor (and repeat something), but stop at some point
>>   (id = LAST_ID), and
>> - delete from xq_agr where id <= LAST_ID;
>> - commit
>>
>> "Safe to" means the cursor will not miss any records that end up
>> being deleted.
>>
>> I'm suspecting that depending on the commit order, I may have
>> situations when:
>> - TX1 insert ID 1
>> - TX2 insert ID 2
>> - TX2 commits
>> - TX3 scans 2
>> - TX1 commits
>> - TX3 deletes <= 2
>> - record ID1 is deleted, but never processed.

> Going to ignore the MVCC question for the moment and describe a better
> "state transition mechanism" to consider.
>
> pending -> active -> completed
>
> If you ensure you never delete (i.e., transition to completed) something
> that isn't active then you can never delete an item in pending.
>
> Limit the locking to the state transitions only.
>
> The downside is the need to deal with "active" items that have been
> abandoned by whatever process marked them active.

Another option is DELETE RETURNING. Instead of an initial SELECT to find
records to work on, you would do DELETE FROM ... WHERE ... RETURNING * and
deal with those records. I don't know if that's safe with a cursor though;
I believe the DELETE fully materializes before records start coming back.
So you need to handle all the returned rows or abort.
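
Roughly (untested sketch; the batch predicate is up to you, LAST_ID again
being a placeholder):

BEGIN;
DELETE FROM xq_agr
 WHERE id <= LAST_ID
RETURNING *;              -- process these rows inside this transaction
-- if processing fails, ROLLBACK and the rows are back in the table
COMMIT;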

> Back to your question: you should probably not use "<=" in your where
> clause. However, in READ COMMITTED TX3 cannot see ID1 since the
> snapshot it took out was created before TX1 committed. I am not fluent

Actually, that's not necessarily true. It depends on when TX3 actually
takes its snapshot, which is NOT when it runs BEGIN. I believe there are
other problems you'd run into as well. Basically, READ COMMITTED does
nothing to protect you from "phantom reads".

REPEATABLE READ should protect you from phantom reads, but it won't help
you if someone changes the data. If you're going to try and go this
route, SERIALIZABLE is your best bet.
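
That would look like this (sketch; the client has to be prepared to
retry the whole transaction):

BEGIN ISOLATION LEVEL SERIALIZABLE;
-- same scan-then-delete logic as before; if concurrent activity makes
-- the result inconsistent with every serial ordering, a statement or
-- the COMMIT fails with SQLSTATE 40001 (serialization_failure) and the
-- transaction must be retried from the top.
DECLARE agr_cur CURSOR FOR SELECT * FROM xq_agr ORDER BY id ASC;
-- ... FETCH and process ...
DELETE FROM xq_agr WHERE id <= LAST_ID;   -- LAST_ID placeholder
COMMIT;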

> enough to work through the entire scenario in my head. I'd suggest you
> actually open up 3 psql sessions and play with them to see how things
> really behave.

That's really not safe enough. There are just too many different race
conditions you can encounter, and I'd bet that you couldn't even
reproduce some of them from a client.

> For me, a simple "SELECT FOR UPDATE / UPDATE WHERE" command in a
> function solves the problem at small scale with minimal performance
> degradation. The transition from "pending" to "active" is effectively
> serialized and the transition from "active" to "completed" only occurs
> when the process has been performed, and it is not possible to have two
> clients simultaneously processing the same work.

Note that that isn't safe from repeatable reads. What you're describing
is only safe if the WHERE clause on the update is guaranteed to always
find only one row (i.e., in this example, by using xq_agr.id = something).
Anything other than that is asking for trouble.
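
In other words, the safe shape is the single-row form, e.g. (using the
hypothetical status column from David's scheme):

UPDATE xq_agr
   SET status = 'active'
 WHERE id = $1              -- $1 = one specific row
   AND status = 'pending';
-- a row count of 0 means some other worker already claimed it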

BTW, since it looks like you're just looking for a queue, you should
take a look at PgQ (https://wiki.postgresql.org/wiki/PGQ_Tutorial). It
uses something more efficient than bulk deletes to handle its queue,
and you can set it up so it will handle a large number of queued items
rather well (increase the number of tables). The one downside you may
run into is you MUST consume every event in a single batch. There used
to be support for the concept of "retrying" an event, but that may have
been removed. It would be safe for you to put failed events into a
second queue.
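
The basic flow is something like this (from memory, so double-check
against the PgQ docs; :batch_id is whatever pgq.next_batch returned):

-- one-time setup
SELECT pgq.create_queue('xq_agr_queue');
SELECT pgq.register_consumer('xq_agr_queue', 'agr_worker');

-- producer
SELECT pgq.insert_event('xq_agr_queue', 'agr', 'payload');

-- consumer loop
SELECT pgq.next_batch('xq_agr_queue', 'agr_worker');  -- NULL = nothing yet
SELECT * FROM pgq.get_batch_events(:batch_id);        -- consume ALL of them
SELECT pgq.finish_batch(:batch_id);
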
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#4 David G. Johnston
david.g.johnston@gmail.com
In reply to: Jim Nasby (#3)
Re: bigserial continuity safety

On Mon, Apr 13, 2015 at 7:01 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

> On 4/13/15 7:45 PM, David G. Johnston wrote:

>> On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov
>> <pawel.veselov@gmail.com> wrote:

>>> Hi.
>>>
>>> If I have a table created as:
>>>
>>> CREATE TABLE xq_agr (
>>>     id BIGSERIAL PRIMARY KEY,
>>>     node text not null
>>> );
>>>
>>> which multiple applications insert into. The applications never
>>> explicitly specify the value for 'id'.
>>> Is it safe to, on a single connection, do:
>>>
>>> - open transaction (default transaction isolation)
>>> - Open cursor for select * from xq_agr order by id asc
>>> - do something with current record
>>> - advance the cursor (and repeat something), but stop at some point
>>>   (id = LAST_ID), and
>>> - delete from xq_agr where id <= LAST_ID;
>>> - commit
>>>
>>> "Safe to" means the cursor will not miss any records that end up
>>> being deleted.
>>>
>>> I'm suspecting that depending on the commit order, I may have
>>> situations when:
>>> - TX1 insert ID 1
>>> - TX2 insert ID 2
>>> - TX2 commits
>>> - TX3 scans 2
>>> - TX1 commits
>>> - TX3 deletes <= 2
>>> - record ID1 is deleted, but never processed.

>> Going to ignore the MVCC question for the moment and describe a better
>> "state transition mechanism" to consider.
>>
>> pending -> active -> completed
>>
>> If you ensure you never delete (i.e., transition to completed)
>> something that isn't active then you can never delete an item in
>> pending.
>>
>> Limit the locking to the state transitions only.
>>
>> The downside is the need to deal with "active" items that have been
>> abandoned by whatever process marked them active.

> Another option is DELETE RETURNING. Instead of an initial SELECT to
> find records to work on, you would do DELETE FROM ... WHERE ...
> RETURNING * and deal with those records. I don't know if that's safe
> with a cursor though; I believe the DELETE fully materializes before
> records start coming back. So you need to handle all the returned rows
> or abort.

>> Back to your question: you should probably not use "<=" in your where
>> clause. However, in READ COMMITTED TX3 cannot see ID1 since the
>> snapshot it took out was created before TX1 committed. I am not fluent
>
> Actually, that's not necessarily true. It depends on when TX3 actually
> takes its snapshot, which is NOT when it runs BEGIN. I believe there
> are other problems you'd run into as well. Basically, READ COMMITTED
> does nothing to protect you from "phantom reads".

I was basing that off of:

- TX2 commits
- TX3 scans 2
- TX1 commits

The scanning causes the snapshot to be taken and occurs before TX1.
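
That said, the snapshot timing itself is easy to see with two sessions;
under READ COMMITTED nothing is taken at BEGIN, each statement gets its
own snapshot:

-- Session A:
BEGIN;                                    -- no snapshot taken yet

-- Session B (autocommit):
INSERT INTO xq_agr (node) VALUES ('late');

-- Session A: sees the row inserted after its BEGIN, because the
-- snapshot is taken by this statement, not by BEGIN:
SELECT count(*) FROM xq_agr;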

As for the isolation levels, I apparently got the two READ ones reversed
in my head... my bad :(

> REPEATABLE READ should protect you from phantom reads, but it won't
> help you if someone changes the data. If you're going to try and go
> this route, SERIALIZABLE is your best bet.

While this is likely true, if there is no other use of SERIALIZABLE in
the existing codebase then adopting it requires adding transaction retry
to the necessary skills and tools. It's worth considering other
approaches to avoid the cognitive overhead of SERIALIZABLE.

>> enough to work through the entire scenario in my head. I'd suggest
>> you actually open up 3 psql sessions and play with them to see how
>> things really behave.

> That's really not safe enough. There are just too many different race
> conditions you can encounter, and I'd bet that you couldn't even
> reproduce some of them from a client.

I guess there is a difference between knowing something is obviously
wrong because this simple testing failed, and not realizing that you
still have a problem because there was no way to reasonably test the
condition you are hitting.

The question becomes whether to rephrase the solution to make it simpler
and thus less exposed to race conditions and the like (or to fail
gracefully if it is exposed - no deadlocks and hopefully minimal
waiting), or whether to simply prevent them outright (and quickly?) by
failing with a serialization exception.

>> For me, a simple "SELECT FOR UPDATE / UPDATE WHERE" command in a
>> function solves the problem at small scale with minimal performance
>> degradation. The transition from "pending" to "active" is effectively
>> serialized and the transition from "active" to "completed" only
>> occurs when the process has been performed, and it is not possible to
>> have two clients simultaneously processing the same work.
>
> Note that that isn't safe from repeatable reads. What you're
> describing is only safe if the WHERE clause on the update is
> guaranteed to always find only one row (i.e., in this example, by
> using xq_agr.id = something). Anything other than that is asking for
> trouble.

Yes, the SELECT returns the IDs of (one or more) "pending" jobs, with a
LIMIT 1 applied to it, and the first record is immediately updated to
reflect its "active" status. At the scale I operate at this has proven
acceptable. There are a few nuances - workers are not homogeneous and
can only work on a subset of all available tasks, based upon meta-data
associated with each task.
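
So the claiming query ends up shaped roughly like this (task_type is a
hypothetical stand-in for that meta-data):

SELECT id
  FROM xq_agr
 WHERE status = 'pending'
   AND task_type = ANY($1)   -- the task types this worker can handle
 ORDER BY id
 LIMIT 1
 FOR UPDATE;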

David J.