advancing snapshot's xmin

Started by Alvaro Herrera, 22 messages
#1 Alvaro Herrera
alvherre@commandprompt.com

Hi,

I've finished (hopefully) the code to handle a current list of open
snapshots in a transaction. I'm now wondering how to put it to good use
;-) I'm not posting it yet -- first I want to get some feedback on the
previous patch I posted,
http://archives.postgresql.org/pgsql-patches/2008-03/msg00245.php

I think the important change here is switching the semantics of
MyProc->xmin. Currently, it is "the minimum of Xmin and Xid, across all
backends, at the moment the current transaction fetches its serializable
snapshot". The first important bit is that it is computed only once:
when the serializable snapshot is taken.

So ISTM the important change is that we will have to update MyProc->xmin
more frequently than that. I'm thinking of keeping enough local state
so that we can detect at what time the earliest open snapshot is
unregistered; when that happens, we can recalculate MyProc->xmin based
on the snapshots we have and the Xid/Xmin of remote backends (which
could have also been updating their own xmins).
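
In sketch form, the bookkeeping I have in mind is something like the following
(names invented for illustration; this is not the code from my patch, and
whether the ProcArrayLock acquisition at the end is really needed is part of
the question below):

/* Illustrative sketch only, assuming the usual backend headers. */
typedef struct TrackedSnapshot
{
    TransactionId xmin;                 /* xmin of this open snapshot */
    struct TrackedSnapshot *next;
} TrackedSnapshot;

static TrackedSnapshot *openSnapshots = NULL;   /* backend-local list */

/* Called when a snapshot is unregistered. */
static void
ForgetSnapshot(TrackedSnapshot *snap)
{
    bool    was_oldest = (snap->xmin == MyProc->xmin);

    /* ... unlink snap from openSnapshots here ... */

    if (was_oldest)
    {
        /* Find the oldest xmin among the snapshots we still hold */
        TransactionId newxmin = InvalidTransactionId;
        TrackedSnapshot *s;

        for (s = openSnapshots; s != NULL; s = s->next)
        {
            if (!TransactionIdIsValid(newxmin) ||
                TransactionIdPrecedes(s->xmin, newxmin))
                newxmin = s->xmin;
        }

        /* Publish it, so other backends can compute a newer global xmin */
        LWLockAcquire(ProcArrayLock, LW_SHARED);
        MyProc->xmin = newxmin;     /* InvalidTransactionId if none remain */
        LWLockRelease(ProcArrayLock);
    }
}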

There is one hole here: contention on ProcArrayLock. Basically, for
simple transactions we will need to update MyProc after every command.
It has been reported that ProcArrayLock is the most contended lock for
some loads; this would only add to that, and heavily I think. Perhaps
we could restructure the locking here somehow to avoid this problem, but
it is complex enough already that it may not even be possible.

Another idea is to throttle the updating of Xmin so it only happens once
in a while, but it's difficult to find a useful criterion and avoid
falling into the trap that we just neglected to update it before a large
command.

Thoughts?

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#2 Simon Riggs
simon@2ndquadrant.com
In reply to: Alvaro Herrera (#1)
Re: advancing snapshot's xmin

On Tue, 2008-03-25 at 17:26 -0300, Alvaro Herrera wrote:

> I've finished (hopefully) the code to handle a current list of open
> snapshots in a transaction. I'm now wondering how to put it to good use
> ;-) I'm not posting it yet -- first I want to get some feedback on the
> previous patch I posted,
> http://archives.postgresql.org/pgsql-patches/2008-03/msg00245.php

As I said before, it looks fine. In your words, it "just moves code
around", so there's not much to complain about.

> I think the important change here is switching the semantics of
> MyProc->xmin. Currently, it is "the minimum of Xmin and Xid, across all
> backends, at the moment the current transaction fetches its serializable
> snapshot". The first important bit is that it is computed only once:
> when the serializable snapshot is taken.

Yes, I see that as necessary. So the refactoring makes sense, since
we'll be adding lots of stuff in that area and it's good to keep it
isolated.

> So ISTM the important change is that we will have to update MyProc->xmin
> more frequently than that. I'm thinking of keeping enough local state
> so that we can detect at what time the earliest open snapshot is
> unregistered; when that happens, we can recalculate MyProc->xmin based
> on the snapshots we have and the Xid/Xmin of remote backends (which
> could have also been updating their own xmins).
>
> There is one hole here: contention on ProcArrayLock. Basically, for
> simple transactions we will need to update MyProc after every command.
> It has been reported that ProcArrayLock is the most contended lock for
> some loads; this would only add to that, and heavily I think. Perhaps
> we could restructure the locking here somehow to avoid this problem, but
> it is complex enough already that it may not even be possible.

I don't see that this would be a contention problem.

We are already careful to read the xmin just once during
GetSnapshotData(). We advance it while holding only a LW_SHARED lock
during a serializable snapshot, so not sure why we wouldn't advance it
at other times also without contention issues. Why does anyone else know
or care whether we're taking a serializable snapshot or not?

The issue is whether we agree that it is correct to do so. If we're
advancing it in the circumstances you say, then yes I agree it is.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk

#3 Neil Conway
neilc@samurai.com
In reply to: Alvaro Herrera (#1)
Re: advancing snapshot's xmin

On Tue, 2008-03-25 at 17:26 -0300, Alvaro Herrera wrote:

> There is one hole here: contention on ProcArrayLock. Basically, for
> simple transactions we will need to update MyProc after every command.

If we're just updating MyProc->xmin, we only need to acquire
ProcArrayLock in shared mode, right?

> Another idea is to throttle the updating of Xmin so it only happens once
> in a while, but it's difficult to find a useful criterion and avoid
> falling into the trap that we just neglected to update it before a large
> command.

Using LWLockConditionalAcquire() might help also.
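
That is, something like this sketch -- if the lock happens to be busy we just
skip the update and our advertised xmin stays at its older (still safe) value:

/* Sketch: opportunistically advance our advertised xmin. */
static void
MaybeAdvanceMyXmin(TransactionId newxmin)
{
    if (LWLockConditionalAcquire(ProcArrayLock, LW_SHARED))
    {
        if (TransactionIdIsValid(MyProc->xmin) &&
            TransactionIdPrecedes(MyProc->xmin, newxmin))
            MyProc->xmin = newxmin;
        LWLockRelease(ProcArrayLock);
    }
    /* else: someone holds the lock; try again at the next opportunity */
}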

-Neil

#4 Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Neil Conway (#3)
Re: advancing snapshot's xmin

Neil Conway wrote:

> On Tue, 2008-03-25 at 17:26 -0300, Alvaro Herrera wrote:
>
>> There is one hole here: contention on ProcArrayLock. Basically, for
>> simple transactions we will need to update MyProc after every command.
>
> If we're just updating MyProc->xmin, we only need to acquire
> ProcArrayLock in shared mode, right?

In fact, do you need a lock at all? We already assume that
reading/writing a TransactionId is atomic in many places. We acquire
ProcArrayLock at the end of transaction when we clear MyProc->xid, to
ensure that we don't exit the set of running transactions while someone
else is taking a snapshot, but AFAICS that's not necessary when we just
advance MyProc->xmin.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#4)
Re: advancing snapshot's xmin

"Heikki Linnakangas" <heikki@enterprisedb.com> writes:

> Neil Conway wrote:
>
>> If we're just updating MyProc->xmin, we only need to acquire
>> ProcArrayLock in shared mode, right?
>
> In fact, do you need a lock at all?

I think you probably do. GetSnapshotData needs to be confident that the
global xmin it computes is <= the xmin that any other backend might be
about to store into its MyProc->xmin; how can you ensure that if there's
no locking happening?

Now the way I'd been envisioning this would work is that whenever the
number of active snapshots goes to zero, we clear MyProc->xmin, and
that probably could be done without a lock. Then the next time we do
GetSnapshotData, it would compute and store a new MyProc->xmin
(this would be the same activity that we currently think of as "setting
the serializable snapshot"). So you don't need any more locking than
already exists.
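
In sketch form (ActiveSnapshotCount is an invented name for whatever counter
the snapshot manager ends up keeping):

/* Snapshot-manager side of the sketch. */
static int  ActiveSnapshotCount = 0;    /* maintained by register/unregister */

static void
SnapshotReleased(void)
{
    Assert(ActiveSnapshotCount > 0);
    ActiveSnapshotCount--;

    if (ActiveSnapshotCount == 0)
    {
        /*
         * Nothing in this backend needs an MVCC horizon anymore, so stop
         * advertising one.  Storing a TransactionId is assumed atomic,
         * hence no ProcArrayLock here.
         */
        MyProc->xmin = InvalidTransactionId;
    }
}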

regards, tom lane

#6 Dimitri Fontaine
dfontaine@hi-media.com
In reply to: Tom Lane (#5)
Re: advancing snapshot's xmin

On Wednesday 26 March 2008, Tom Lane wrote:

> whenever the number of active snapshots goes to zero

Does this ever happen?
I mean, if the way to avoid locking contention is to rely on a production
system that lets the service "breathe" from time to time, maybe there's
something wrong in the reasoning.

Of course I'm much more ready to accept I don't understand the first bit of it
all than to consider you're off track here, but...
--
dim

If you ask a stupid question, you may feel stupid. If you don’t ask a stupid
question, you remain stupid.
-- Tony Rothman, Ph.D., U. Chicago, Physics

#7 Gregory Stark
stark@enterprisedb.com
In reply to: Tom Lane (#5)
Re: advancing snapshot's xmin

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> "Heikki Linnakangas" <heikki@enterprisedb.com> writes:
>
>> Neil Conway wrote:
>>
>>> If we're just updating MyProc->xmin, we only need to acquire
>>> ProcArrayLock in shared mode, right?
>>
>> In fact, do you need a lock at all?
>
> I think you probably do. GetSnapshotData needs to be confident that the
> global xmin it computes is <= the xmin that any other backend might be
> about to store into its MyProc->xmin; how can you ensure that if there's
> no locking happening?

Surely xmin would only ever advance? How can removing snapshots cause xmin to
retreat at all, let alone behind the global xmin GetSnapshotData calculated?

> Now the way I'd been envisioning this would work is that whenever the
> number of active snapshots goes to zero, we clear MyProc->xmin, and
> that probably could be done without a lock. Then the next time we do
> GetSnapshotData, it would compute and store a new MyProc->xmin
> (this would be the same activity that we currently think of as "setting
> the serializable snapshot"). So you don't need any more locking than
> already exists.

It's the same locking in theory from the point of view of where in the code
the locking happens. But I don't think it's the same locking in practice from
the point of view of how much wall-clock time passes between locks.

Consider a data loading job which has millions of INSERT statements in a file.
Currently if you put them all in a transaction it takes a single snapshot and
runs them all with the same snapshot.

If you reset xmin whenever you have no live snapshots then that job would be
doing that between every INSERT statement.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Dimitri Fontaine (#6)
Re: advancing snapshot's xmin

Dimitri Fontaine <dfontaine@hi-media.com> writes:

> On Wednesday 26 March 2008, Tom Lane wrote:
>
>> whenever the number of active snapshots goes to zero
>
> Does this ever happen?

Certainly: between any two commands of a non-serializable transaction.

In a serializable transaction the whole thing is a dead issue
anyway, since the original snapshot has to be kept.

There are corner cases involving open cursors where a snapshot
might persist longer, and then the optimization wouldn't apply.

The formulation that Alvaro gave would sometimes be able to
move xmin forward when the simple no-snaps-left rule wouldn't,
such as create cursor A, create cursor B (with a newer snap),
close cursor A. However I really doubt that scenarios like
this occur often enough to be worth having a much more expensive
snapshot-management mechanism.

regards, tom lane

#9 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gregory Stark (#7)
Re: advancing snapshot's xmin

Gregory Stark <stark@enterprisedb.com> writes:

> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>
>> I think you probably do. GetSnapshotData needs to be confident that the
>> global xmin it computes is <= the xmin that any other backend might be
>> about to store into its MyProc->xmin; how can you ensure that if there's
>> no locking happening?
>
> Surely xmin would only ever advance?

You couldn't guarantee that without any lock. The risk case is where
someone else is in progress of setting his own xmin, but is running so
slowly that he's included an XID that isn't there anymore. So someone
else coming in and doing a computation of global xmin will compute a
higher value than what the slow guy is about to publish.

I agree that it would be safe for a backend to increase its
already-published xmin to some higher value without a lock. But I don't
see the point. The place where you'd actually be computing the new
value is in GetSnapshotData, and that can't run without a lock for the
above-mentioned reason.

> It's the same locking in theory from the point of view of where in the code
> the locking happens. But I don't think it's the same locking in practice from
> the point of view of how much wall-clock time passes between locks.
>
> Consider a data loading job which has millions of INSERT statements in a file.
> Currently if you put them all in a transaction it takes a single snapshot and
> runs them all with the same snapshot.
>
> If you reset xmin whenever you have no live snapshots then that job would be
> doing that between every INSERT statement.

These statements are 100% nonsense.

regards, tom lane

#10 Gregory Stark
stark@enterprisedb.com
In reply to: Tom Lane (#9)
Re: advancing snapshot's xmin

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

>> Consider a data loading job which has millions of INSERT statements in a file.
>> Currently if you put them all in a transaction it takes a single snapshot and
>> runs them all with the same snapshot.
>>
>> If you reset xmin whenever you have no live snapshots then that job would be
>> doing that between every INSERT statement.
>
> These statements are 100% nonsense.

Uhm, yeah, I somehow didn't write what I was thinking. I didn't mean to say we
would be taking a new snapshot for each INSERT but that we would be resetting
xmin for each INSERT. Whereas currently we only set xmin once when we set the
serializable snapshot.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gregory Stark (#10)
Re: advancing snapshot's xmin

Gregory Stark <stark@enterprisedb.com> writes:

> Uhm, yeah, I somehow didn't write what I was thinking. I didn't mean to say we
> would be taking a new snapshot for each INSERT but that we would be resetting
> xmin for each INSERT. Whereas currently we only set xmin once when we set the
> serializable snapshot.

Right, but setting xmin within GetSnapshotData is essentially free.
What I'm envisioning is that we lose the notion of "this is a
serializable snapshot" that that function currently has, and just
give it the rule "if MyProc->xmin is currently zero, then set it".
Then the only additional mechanism needed is for the snapshot
manager to detect when all snapshots are gone and zero out
MyProc->xmin --- that would happen sometime during command shutdown,
and per current discussion it shouldn't need a lock.
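
Concretely, the only change inside GetSnapshotData would be something like
this (sketch; ProcArrayLock is already held at that point, and xmin is the
value just computed for the snapshot):

    /* instead of doing the assignment only for the serializable case: */
    if (!TransactionIdIsValid(MyProc->xmin))
        MyProc->xmin = TransactionXmin = xmin;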

regards, tom lane

#12 Dimitri Fontaine
dfontaine@hi-media.com
In reply to: Tom Lane (#8)
Re: advancing snapshot's xmin

On Wednesday 26 March 2008, Tom Lane wrote:

> Dimitri Fontaine <dfontaine@hi-media.com> writes:
>
>> On Wednesday 26 March 2008, Tom Lane wrote:
>>
>>> whenever the number of active snapshots goes to zero
>>
>> Does this ever happen?
>
> Certainly: between any two commands of a non-serializable transaction.

Oh, it's a transaction-scope snapshot, whereas I was thinking of cluster-global
snapshots. Thanks a lot for explaining, and sorry for disturbing! :)

--
dim

#13 Gregory Stark
stark@enterprisedb.com
In reply to: Tom Lane (#11)
Re: advancing snapshot's xmin

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Gregory Stark <stark@enterprisedb.com> writes:
>
>> Uhm, yeah, I somehow didn't write what I was thinking. I didn't mean to say we
>> would be taking a new snapshot for each INSERT but that we would be resetting
>> xmin for each INSERT. Whereas currently we only set xmin once when we set the
>> serializable snapshot.
>
> Right, but setting xmin within GetSnapshotData is essentially free.
> What I'm envisioning is that we lose the notion of "this is a
> serializable snapshot" that that function currently has, and just
> give it the rule "if MyProc->xmin is currently zero, then set it".
> Then the only additional mechanism needed is for the snapshot
> manager to detect when all snapshots are gone and zero out
> MyProc->xmin --- that would happen sometime during command shutdown,
> and per current discussion it shouldn't need a lock.

It would be nice if there was some way to notice that no other transactions
have committed since last we calculated a snapshot and just reuse that
snapshot.

I would say ideally before we throw out our xmin but I suspect the point of
synchronization needed to notice this condition would be tantamount to that
same lock anyways.
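
i.e. I imagine the check would look something like this sketch (using
latestCompletedXid as the "has any transaction completed" indicator), which
still ends up taking the very lock we were hoping to avoid:

/* Sketch: could we reuse the previous snapshot? */
static TransactionId lastSeenCompletedXid = InvalidTransactionId;

static bool
SnapshotStillFresh(void)
{
    bool    fresh;

    LWLockAcquire(ProcArrayLock, LW_SHARED);    /* the same lock again */
    fresh = TransactionIdEquals(ShmemVariableCache->latestCompletedXid,
                                lastSeenCompletedXid);
    LWLockRelease(ProcArrayLock);

    return fresh;
}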

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!

#14 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Tom Lane (#11)
Re: advancing snapshot's xmin

Tom Lane wrote:

> What I'm envisioning is that we lose the notion of "this is a
> serializable snapshot" that that function currently has, and just
> give it the rule "if MyProc->xmin is currently zero, then set it".
> Then the only additional mechanism needed is for the snapshot
> manager to detect when all snapshots are gone and zero out
> MyProc->xmin --- that would happen sometime during command shutdown,
> and per current discussion it shouldn't need a lock.

This is all easily done -- it's just a couple of extra lines.

However I am now having a definitional problem. Perhaps it is so
obvious to everyone else that nobody bothered mentioning it. I know I
wasn't aware until I tried a simple test and found that the Xmin wasn't
advancing as I was expecting.

The problem is that we always consider every transaction's PGPROC->xid
in calculating MyProc->xmin. So if you have a long running transaction,
it doesn't matter how far ahead its snapshots are -- the value returned
by GetOldestXmin will always be at most the old transaction's Xid. Even
if that transaction cannot see the old rows because all of its snapshots
are way in the future.

As far as I can see, for the purposes of VACUUM we can remove any tuple
that was deleted after the old transaction's Xid but before that
transaction's Xmin (i.e. all of its live snapshots). This means we get
to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
in GetSnapshotData. It would not surprise me, however, to find out that
I am overlooking something and this is incorrect.

Am I blind?

It is quite possible that for the other purposes that we're using Xmins
for, this is not so. If that's the case, I would argue that we would
need to introduce a separate TransactionId to keep track of, which would
retain the current semantics of Xmin, and let VACUUM use what I am
proposing. I haven't examined those other uses though.

Thoughts?

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#15 Simon Riggs
simon@2ndquadrant.com
In reply to: Alvaro Herrera (#14)
Re: advancing snapshot's xmin

On Fri, 2008-03-28 at 10:35 -0300, Alvaro Herrera wrote:

> The problem is that we always consider every transaction's PGPROC->xid
> in calculating MyProc->xmin. So if you have a long running
> transaction, it doesn't matter how far ahead its snapshots are -- the
> value returned by GetOldestXmin will always be at most the old
> transaction's Xid. Even if that transaction cannot see the old rows
> because all of its snapshots are way in the future.

It may not have a TransactionId yet.

So we should have the capability to prevent long running read-only
transactions from causing a build up of dead row versions. But long
running write transactions would still be a problem.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk

#16 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Simon Riggs (#15)
Re: advancing snapshot's xmin

Simon Riggs wrote:

> On Fri, 2008-03-28 at 10:35 -0300, Alvaro Herrera wrote:
>
>> The problem is that we always consider every transaction's PGPROC->xid
>> in calculating MyProc->xmin. So if you have a long running
>> transaction, it doesn't matter how far ahead its snapshots are -- the
>> value returned by GetOldestXmin will always be at most the old
>> transaction's Xid. Even if that transaction cannot see the old rows
>> because all of its snapshots are way in the future.
>
> It may not have a TransactionId yet.

How is this a problem? If it ever gets one, it will be in the future.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

#17 Simon Riggs
simon@2ndquadrant.com
In reply to: Alvaro Herrera (#16)
Re: advancing snapshot's xmin

On Fri, 2008-03-28 at 11:26 -0300, Alvaro Herrera wrote:

> Simon Riggs wrote:
>
>> On Fri, 2008-03-28 at 10:35 -0300, Alvaro Herrera wrote:
>>
>>> The problem is that we always consider every transaction's PGPROC->xid
>>> in calculating MyProc->xmin. So if you have a long running
>>> transaction, it doesn't matter how far ahead its snapshots are -- the
>>> value returned by GetOldestXmin will always be at most the old
>>> transaction's Xid. Even if that transaction cannot see the old rows
>>> because all of its snapshots are way in the future.
>>
>> It may not have a TransactionId yet.
>
> How is this a problem? If it ever gets one, it will be in the future.

Yeh, that was my point. So the problem you mention mostly goes away.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#14)
Re: advancing snapshot's xmin

Alvaro Herrera <alvherre@commandprompt.com> writes:

> As far as I can see, for the purposes of VACUUM we can remove any tuple
> that was deleted after the old transaction's Xid but before that
> transaction's Xmin (i.e. all of its live snapshots). This means we get
> to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
> in GetSnapshotData. It would not surprise me, however, to find out that
> I am overlooking something and this is incorrect.

This seems entirely off-base to me. In particular, if a transaction
has an XID then its XMIN will never be greater than that, so I don't
even see how you figure the case will arise.

regards, tom lane

#19 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Tom Lane (#18)
Re: advancing snapshot's xmin

Tom Lane wrote:

> Alvaro Herrera <alvherre@commandprompt.com> writes:
>
>> As far as I can see, for the purposes of VACUUM we can remove any tuple
>> that was deleted after the old transaction's Xid but before that
>> transaction's Xmin (i.e. all of its live snapshots). This means we get
>> to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
>> in GetSnapshotData. It would not surprise me, however, to find out that
>> I am overlooking something and this is incorrect.
>
> This seems entirely off-base to me. In particular, if a transaction
> has an XID then its XMIN will never be greater than that, so I don't
> even see how you figure the case will arise.

My point exactly -- can we let the Xmin go past its Xid? You imply we
can't, but why?

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#20 Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Alvaro Herrera (#19)
Re: advancing snapshot's xmin

Alvaro Herrera wrote:

> Tom Lane wrote:
>
>> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>
>>> As far as I can see, for the purposes of VACUUM we can remove any tuple
>>> that was deleted after the old transaction's Xid but before that
>>> transaction's Xmin (i.e. all of its live snapshots). This means we get
>>> to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
>>> in GetSnapshotData. It would not surprise me, however, to find out that
>>> I am overlooking something and this is incorrect.
>>
>> This seems entirely off-base to me. In particular, if a transaction
>> has an XID then its XMIN will never be greater than that, so I don't
>> even see how you figure the case will arise.
>
> My point exactly -- can we let the Xmin go past its Xid? You imply we
> can't, but why?

Everything < xmin is considered to be not running anymore. Other
transactions would consider the still-alive transaction as aborted, and
start setting hint bits etc.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#21 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Heikki Linnakangas (#20)
Re: advancing snapshot's xmin

Heikki Linnakangas wrote:

> Alvaro Herrera wrote:
>
>> Tom Lane wrote:
>>
>>> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>>
>>>> As far as I can see, for the purposes of VACUUM we can remove any tuple
>>>> that was deleted after the old transaction's Xid but before that
>>>> transaction's Xmin (i.e. all of its live snapshots). This means we get
>>>> to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
>>>> in GetSnapshotData. It would not surprise me, however, to find out that
>>>> I am overlooking something and this is incorrect.
>>>
>>> This seems entirely off-base to me. In particular, if a transaction
>>> has an XID then its XMIN will never be greater than that, so I don't
>>> even see how you figure the case will arise.
>>
>> My point exactly -- can we let the Xmin go past its Xid? You imply we
>> can't, but why?
>
> Everything < xmin is considered to be not running anymore. Other
> transactions would consider the still-alive transaction as aborted, and
> start setting hint bits etc.

Okay. So let's say we invent another TransactionId counter -- we keep
Xmin for the current purposes, and the other counter keeps track of
snapshots ignoring Xid. This new counter could be used by VACUUM to
trim dead tuples.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#22 Heikki Linnakangas
heikki@enterprisedb.com
In reply to: Alvaro Herrera (#21)
Re: advancing snapshot's xmin

Alvaro Herrera wrote:

> Heikki Linnakangas wrote:
>
>> Alvaro Herrera wrote:
>>
>>> Tom Lane wrote:
>>>
>>>> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>>>
>>>>> As far as I can see, for the purposes of VACUUM we can remove any tuple
>>>>> that was deleted after the old transaction's Xid but before that
>>>>> transaction's Xmin (i.e. all of its live snapshots). This means we get
>>>>> to ignore Xid in GetOldestXmin and in the TransactionXmin calculations
>>>>> in GetSnapshotData. It would not surprise me, however, to find out that
>>>>> I am overlooking something and this is incorrect.
>>>>
>>>> This seems entirely off-base to me. In particular, if a transaction
>>>> has an XID then its XMIN will never be greater than that, so I don't
>>>> even see how you figure the case will arise.
>>>
>>> My point exactly -- can we let the Xmin go past its Xid? You imply we
>>> can't, but why?
>>
>> Everything < xmin is considered to be not running anymore. Other
>> transactions would consider the still-alive transaction as aborted, and
>> start setting hint bits etc.
>
> Okay. So let's say we invent another TransactionId counter -- we keep
> Xmin for the current purposes, and the other counter keeps track of
> snapshots ignoring Xid. This new counter could be used by VACUUM to
> trim dead tuples.

Hmm. So if we call that counter VacuumXmin for now, you could remove
deleted rows with xmax < VacuumXmin, as long as that xmax is not in the
set of running transactions? I guess that would work.

In general: VACUUM can remove any tuple that's not visible to any
snapshot in the system. We don't want to keep all snapshots in shared
memory, so we use some conservative approximation of that.
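
As a sketch, the removal test for an already-deleted tuple would then be
roughly this (ignoring hint bits, subtransactions and multixacts, which the
real HeapTupleSatisfiesVacuum logic has to deal with; VacuumXmin is the
hypothetical counter from above):

static bool
DeadTupleIsRemovable(HeapTupleHeader tuple, TransactionId VacuumXmin)
{
    TransactionId xmax = HeapTupleHeaderGetXmax(tuple);

    if (!TransactionIdIsValid(xmax))
        return false;           /* not deleted at all */

    /* the deleting XID must be older than every snapshot still alive ... */
    if (!TransactionIdPrecedes(xmax, VacuumXmin))
        return false;

    /* ... must not belong to a transaction that is still running ... */
    if (TransactionIdIsInProgress(xmax))
        return false;

    /* ... and must actually have committed */
    if (!TransactionIdDidCommit(xmax))
        return false;

    return true;
}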

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com