ANALYZE locks pg_listener in EXCLUSIVE mode for a long time?
We are seeing occasional long lockouts from our DB. When I do a 'select *
from pg_locks', I find that everybody is waiting for pg_listener, and that
the lock on pg_listener is currently held by a long-running ANALYZE VERBOSE.
I saw the following in the change logs (not sure if it's relevant):
should have gotten this notify. But to do that, we'd have to wait
to see if he commits or not, or make UNLISTEN hold exclusive lock
on pg_listener until commit. Either of these answers is
deadlock-prone, not to mention horrible for interactive
performance. Do it this way for now. (What happened to that
project to do LISTEN/NOTIFY in memory with no table, anyway?)
Does this mean that ANALYZE will take an exclusive lock on pg_listener
until the ANALYZE finishes? Or is there some other cause?
Any help or suggestions would be appreciated...
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 03 5330 3172 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp.mit.edu:11371 |/
Philip Warner <pjw@rhyme.com.au> writes:
Does this mean that ANALYZE will take an exclusive lock on pg_listener
until the ANALYZE finishes? Or is there some other cause?
ANALYZE does not take an exclusive lock on anything. However, the
async.c functions want AccessExclusiveLock on pg_listener, so they
quite possibly would get blocked by ANALYZE's not-so-exclusive lock.
Possibly we could reduce the strength of the lock taken by the async.c
functions ... I haven't thought hard about it. The long-term answer is
certainly a wholesale rewrite of the listen/notify mechanism.
regards, tom lane
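A minimal diagnostic sketch for the situation Tom describes, assuming the
pg_locks columns shown later in this thread (relation, pid, mode, granted);
it resolves the relation name of every ungranted lock request so that a
pile-up on pg_listener is immediately visible:

    -- Which relations are ungranted lock requests queued on, and in what mode?
    SELECT c.relname, l.pid, l.mode
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE NOT l.granted;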
Tom Lane wrote:
Philip Warner <pjw@rhyme.com.au> writes:
Does this mean that ANALYZE will take an exclusive lock on pg_listener
until the ANALYZE finishes? Or is there some other cause?
ANALYZE does not take an exclusive lock on anything. However, the
async.c functions want AccessExclusiveLock on pg_listener, so they
quite possibly would get blocked by ANALYZE's not-so-exclusive lock.
Possibly we could reduce the strength of the lock taken by the async.c
functions ... I haven't thought hard about it. The long-term answer is
certainly a wholesale rewrite of the listen/notify mechanism.
Gavin was working on it a while ago but I am not sure how far he got.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
At 12:45 PM 3/05/2004, Tom Lane wrote:
Possibly we could reduce the strength of the lock taken by the async.c
functions
If possible, this seems like a great option. We currently have a large
database with several hundred users who get locked out for as much as half
an hour while ANALYZE runs.
The data in the database is extremely dynamic, so the ANALYZE needs to be
run regularly; we could run it less often, but that would just mean the
problem happens once per week instead of once per day.
Would ACCESS SHARE be OK?
Philip Warner <pjw@rhyme.com.au> writes:
If possible, this seems like a great option. We currently have a large
database with several hundred users who get locked out for as much as half
an hour while ANALYZE runs.
If it takes half an hour to ANALYZE pg_listener, I think that ANALYZE is
not your real problem :-(. You need a much more aggressive vacuuming
policy on that table. Maybe a cron job issuing "vacuum pg_listener"
once a minute would do? And get the size of the table knocked down to
something less stratospheric to begin with --- perhaps stop all the
listeners while you TRUNCATE the table.
The existing listen/notify infrastructure isn't really designed for
notification rates exceeding a few events per minute ...
regards, tom lane
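A sketch of the maintenance Tom suggests above. Whether TRUNCATE of this
catalog is permitted varies by release and certainly requires superuser, so
treat this as illustrative rather than a recipe:

    -- Run frequently (e.g. once a minute from cron via psql):
    VACUUM VERBOSE pg_listener;
    -- One-off reset, only after every LISTENing client has disconnected:
    TRUNCATE TABLE pg_listener;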
Philip Warner <pjw@rhyme.com.au> writes:
Would ACCESS SHARE be OK?
Certainly not, since the point of the locks in async.c is that only one
backend should execute those routines at a time. ExclusiveLock might
work okay ... but I still haven't thought hard about it ...
regards, tom lane
At 01:46 PM 3/05/2004, Tom Lane wrote:
If it takes half an hour to ANALYZE pg_listener, I think that ANALYZE is
not your real problem :-(. You need a much more aggressive vacuuming
policy on that table. Maybe a cron job issuing "vacuum pg_listener"
once a minute would do? And get the size of the table knocked down to
something less stratospheric to begin with --- perhaps stop all the
listeners while you TRUNCATE the table.
It's a general ANALYZE command for the entire DB. It's about 6GB in size,
and is vacuumed as frequently as possible; there is certainly unreclaimed
space, but it does not substantially outweigh used space. My *guess* is
that the largest table is being ANALYZEd at the time (it uses most of the
6GB), and for some reason pg_listener is being locked in ACCESS SHARE the
entire time.
Just vacuuming pg_listener produces:
vacuum verbose pg_listener;
INFO: vacuuming "pg_catalog.pg_listener"
INFO: "pg_listener": found 0 removable, 0 nonremovable row versions in 0 pages
VACUUM
On Sun, 2 May 2004, Bruce Momjian wrote:
Tom Lane wrote:
Philip Warner <pjw@rhyme.com.au> writes:
Does this mean that ANALYZE will take an exclusive lock on pg_listener
until the ANALYZE finishes? Or is there some other cause?
ANALYZE does not take an exclusive lock on anything. However, the
async.c functions want AccessExclusiveLock on pg_listener, so they
quite possibly would get blocked by ANALYZE's not-so-exclusive lock.
Possibly we could reduce the strength of the lock taken by the async.c
functions ... I haven't thought hard about it. The long-term answer is
certainly a wholesale rewrite of the listen/notify mechanism.
Gavin was working on it a while ago but I am not sure how far he got.
Its basically written.
It is implemented using shared memory. I got stuck when I considered the
situation where we run out of shared memory. Some emails in the archive
suggested we just fire all listeners but I didn't like that.
What I was considering was that when someone issues a NOTIFY, we reserve a
slot for the NOTIFY (plus a message, which is why I originally looked at
the problem) in shared memory. At the end of the transaction, we update a
flag to say that the transaction succeeded, or we remove it if we've
aborted.
Does anyone else have any thoughts about it?
Gavin
Philip Warner <pjw@rhyme.com.au> writes:
... for some reason pg_listener is being locked in ACCESS SHARE the
entire time.
Just vacuuming pg_listener produces:
vacuum verbose pg_listener;
INFO: vacuuming "pg_catalog.pg_listener"
INFO: "pg_listener": found 0 removable, 0 nonremovable row versions in 0 pages
VACUUM
[blinks...] There's something pretty strange about that. Are you using
LISTEN/NOTIFY at all?
regards, tom lane
At 02:21 PM 3/05/2004, Tom Lane wrote:
[blinks...] There's something pretty strange about that. Are you using
LISTEN/NOTIFY at all?
Nope.
Philip Warner <pjw@rhyme.com.au> writes:
At 02:21 PM 3/05/2004, Tom Lane wrote:
[blinks...] There's something pretty strange about that. Are you using
LISTEN/NOTIFY at all?
Nope.
In that case there's no reason for anything to be taking any particular
locks on pg_listener; and it's simply not possible for ANALYZE to spend
half an hour on a zero-page table if it's not blocked by a lock. Could
you dig a little deeper and see where the problem really is?
regards, tom lane
At 02:33 PM 3/05/2004, Tom Lane wrote:
Could
you dig a little deeper and see where the problem really is?
I thought I had 8-(.
The result of a 'select * from pg_locks where not granted' was a bunch of
locks on the pg_listener relation, and no others. Only one process had a
lock on that relation, and it was an ANALYZE command. If there is something
else to look for next time it happens, please let me know.
Philip Warner <pjw@rhyme.com.au> writes:
At 02:33 PM 3/05/2004, Tom Lane wrote:
Could you dig a little deeper and see where the problem really is?
I thought I had 8-(.
The result of a 'select * from pg_locks where not granted' was a bunch of
locks on the pg_listener relation, and no others. Only one process had a
lock on that relation, and it was an ANALYZE command.
I don't believe any of this. In the first place, if you're not using
LISTEN/NOTIFY then there's no reason for any backend to try to take out
an AccessExclusive lock on pg_listener (which is the only kind that would
be blocked by ANALYZE's measly AccessShareLock). In the second place,
an ANALYZE on a zero-page relation cannot conceivably take half an hour,
unless it's in turn being blocked by something else.
Please dig deeper.
regards, tom lane
At 02:54 PM 3/05/2004, Tom Lane wrote:
I don't believe any of this.
mail=# select * from pg_locks where not granted;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-------+---------------------+---------
16414 | 17149 | | 7847 | AccessExclusiveLock | f
16414 | 17149 | | 51015 | AccessExclusiveLock | f
16414 | 17149 | | 51149 | AccessExclusiveLock | f
16414 | 17149 | | 51024 | AccessExclusiveLock | f
16414 | 17149 | | 48838 | AccessExclusiveLock | f
16414 | 17149 | | 57670 | AccessExclusiveLock | f
16414 | 17149 | | 54535 | AccessExclusiveLock | f
16414 | 17149 | | 57756 | AccessExclusiveLock | f
16414 | 17149 | | 25894 | AccessExclusiveLock | f
16414 | 17149 | | 55813 | AccessExclusiveLock | f
16414 | 17149 | | 18181 | AccessExclusiveLock | f
16414 | 17149 | | 57846 | AccessExclusiveLock | f
16414 | 17149 | | 44827 | AccessExclusiveLock | f
16414 | 17149 | | 52352 | AccessExclusiveLock | f
16414 | 17149 | | 51239 | AccessExclusiveLock | f
16414 | 17149 | | 14610 | AccessExclusiveLock | f
16414 | 17149 | | 88549 | AccessExclusiveLock | f
16414 | 17149 | | 36939 | AccessExclusiveLock | f
16414 | 17149 | | 57120 | AccessExclusiveLock | f
16414 | 17149 | | 89107 | AccessExclusiveLock | f
16414 | 17149 | | 48915 | AccessExclusiveLock | f
16414 | 17149 | | 74102 | AccessExclusiveLock | f
16414 | 17149 | | 51029 | AccessExclusiveLock | f
16414 | 17149 | | 51100 | AccessExclusiveLock | f
16414 | 17149 | | 57865 | AccessExclusiveLock | f
16414 | 17149 | | 45777 | AccessExclusiveLock | f
16414 | 17149 | | 22288 | AccessExclusiveLock | f
16414 | 17149 | | 52006 | AccessExclusiveLock | f
16414 | 17149 | | 44829 | AccessExclusiveLock | f
16414 | 17149 | | 49073 | AccessExclusiveLock | f
16414 | 17149 | | 52827 | AccessExclusiveLock | f
16414 | 17149 | | 21003 | AccessExclusiveLock | f
16414 | 17149 | | 50204 | AccessExclusiveLock | f
mail=# select * from pg_class where oid=16414;
-[ RECORD 1 ]--+------------
relname        | pg_listener
relnamespace   | 11
reltype        | 16415
relowner       | 1
relam          | 0
relfilenode    | 16414
relpages       | 0
reltuples      | 0
reltoastrelid  | 0
reltoastidxid  | 0
relhasindex    | f
relisshared    | f
relkind        | r
relnatts       | 3
relchecks      | 0
reltriggers    | 0
relukeys       | 0
relfkeys       | 0
relrefs        | 0
relhasoids     | f
relhaspkey     | f
relhasrules    | f
relhassubclass | f
relacl         | {=r/pgsql}
(1 row)
mail=# select * from pg_locks where granted and relation=16414;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-------+-----------------+---------
16414 | 17149 | | 56399 | AccessShareLock | t
(1 row)
mail=# \q
[~] root@beast>ps ax | grep 56399;
56399 ?? D 0:03.41 postmaster: pgsql mail 127.0.0.1 ANALYZE (postgres)
[~] root@beast>kill -TERM 56399
[~] root@beast>ps ax | grep 56399;
[~] root@beast>psql -p5443 -U pgsql mail
Welcome to psql 7.3.3, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
mail=# select * from pg_locks where granted and relation=16414;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-----+------+---------
(0 rows)
mail=# select * from pg_locks where not granted;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-------+-----------+---------
| | 65738548 | 72279 | ShareLock | f
| | 65738548 | 40479 | ShareLock | f
| | 65738548 | 46830 | ShareLock | f
| | 65738478 | 20762 | ShareLock | f
(4 rows)
mail=# select * from pg_locks where not granted;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-----+------+---------
(0 rows)
At 02:54 PM 3/05/2004, Tom Lane wrote:
Please dig deeper.
I will log everything I check next time; unfortunately, when it happens,
the priority is on unlocking everything so I have a limited time to play.
So far, killing the ANALYZE has fixed the problem each time.
At 02:54 PM 3/05/2004, Tom Lane wrote:
Please dig deeper.
I may have found the problem; all the hung processes show 'async_notify
waiting' in ps, and the ANALYZE eventually dies with 'tuple concurrently
updated'.
The routine 'ProcessIncomingNotify' in async.c does indeed try to lock
pg_listener (even if we're not using NOTIFY/LISTEN). Not sure why the
ANALYZE is locking the relation, though...but it is locked in AccessShareLock.
I can send a log of my investigations if necessary.
Further to this, ProcessIncomingNotify seems to hold the lock on the
listener relation until its current transaction exits. If the ANALYZE was
not the source of the error, but was just another victim, does that mean it
might hold the lock for a very long time if the ANALYZE is lengthy?
At 02:54 PM 3/05/2004, Tom Lane wrote:
Please dig deeper.
I may have found the problem; all the hung processes show 'async_notify
waiting' in ps, and the ANALYZE eventually dies with 'tuple concurrently
updated'.
The routine 'ProcessIncomingNotify' in async.c does indeed try to lock
pg_listener (even if we're not using NOTIFY/LISTEN). Not sure why the
ANALYZE is locking the relation, though...but it is locked in AccessShareLock.
I can send a log of my investigations if necessary.
At 06:21 PM 3/05/2004, Philip Warner wrote:
'tuple concurrently updated'
The database logs show the same error in each case where a long delay has
occurred. And before anyone suggests it, we already have processes in place
to prevent two ANALYZEs running at the same time.
At 06:21 PM 3/05/2004, Philip Warner wrote:
'tuple concurrently updated'
I lied. The database logs do NOT show the same error in every case where a
long delay has occurred. It happens sometimes; recent process logs do show
the 'async_notify waiting' status, however.
I'll try not to send any more emails until someone responds ;-)
Philip Warner <pjw@rhyme.com.au> writes:
I may have found the problem; all the hung processes show 'async_notify
waiting' in ps, and the ANALYZE eventually dies with 'tuple concurrently
updated'.
The routine 'ProcessIncomingNotify' in async.c does indeed try to lock
pg_listener (even if we're not using NOTIFY/LISTEN). Not sure why the
ANALYZE is locking the relation, though...but it is locked in AccessShareLock.
Hm. What seems likely to have happened is that the sinval message queue
got full. We currently deal with this by sending SIGUSR2 to all
backends, which forces them through a NOTIFY-check cycle; a byproduct of
the transaction start is to read pending sinval messages. (This is
somebody's ugly quick hack from years ago; we really oughta find a less
expensive way of doing it.)
That would have left all the idle backends trying to get exclusive lock
on pg_listener, and if the ANALYZE subsequently reached pg_listener, its
share lock would queue up behind those requests.
What is not clear yet is why *all* of them are blocked. Seems something
else must have some kind of lock already on pg_listener; but who?
Can you get a dump of the pg_locks view while this is happening?
And before anyone suggests it, we already have processes in place
to prevent two ANALYZEs running at the same time.
How confident are you in those "processes"? I don't know of any other
mechanism for 'tuple concurrently updated' failures in ANALYZE than
concurrent analyze runs ...
regards, tom lane
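A sketch of the kind of dump requested above, pulling in what each backend
is doing. The pg_stat_activity column names are the 7.x ones (procpid and
current_query, the latter only populated when stats_command_string is
enabled); later releases renamed them:

    -- Every lock held or requested on pg_listener, plus the backend's current query.
    SELECT l.pid, l.mode, l.granted, a.usename, a.current_query
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
      LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE c.relname = 'pg_listener'
     ORDER BY l.granted DESC, l.pid;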
At 07:33 PM 3/05/2004, Philip Warner wrote:
I'll try not to send any more emails until someone responds ;-)
I also noticed this in SIInsertDataEntry in sinvaladt.c:
/*
* Try to prevent table overflow. When the table is 70% full send a
* WAKEN_CHILDREN request to the postmaster. The postmaster will send
* a SIGUSR2 signal (ordinarily a NOTIFY signal) to all the backends.
* This will force idle backends to execute a transaction to look
* through pg_listener for NOTIFY messages, and as a byproduct of the
* transaction start they will read SI entries.
*
* This should never happen if all the backends are actively executing
* queries, but if a backend is sitting idle then it won't be starting
* transactions and so won't be reading SI entries.
*
* dz - 27 Jan 1998
*/
Would a long-running ANALYZE (or other activity on a busy database) cause
the SI message queue to reach the 70% threshold while the ANALYZE is
running?
At 11:04 PM 3/05/2004, Tom Lane wrote:
Hm. What seems likely to have happened is that the sinval message queue
got full.
I agree (our emails crossed).
That would have left all the idle backends trying to get exclusive lock
on pg_listener, and if the ANALYZE subsequently reached pg_listener, its
share lock would queue up behind those requests.
What I see is that the ANALYZE job already has it in ACCESS SHARE mode,
and keeps the lock until it dies with the 'concurrent update' error.
What is not clear yet is why *all* of them are blocked. Seems something
else must have some kind of lock already on pg_listener; but who?
ANALYZE.
Can you get a dump of the pg_locks view while this is happening?
Attached.
How confident are you in those "processes"? I don't know of any other
mechanism for 'tuple concurrently updated' failures in ANALYZE than
concurrent analyze runs ...
Fairly. In this particular instance the error was probably caused by a
manually run VACUUM (part of me stressing it to encourage the error).
Contrary to my other email, we haven't had the 'tuple concurrently updated'
error since March (until today, with me messing around).
What I do have is minute-by-minute dumps of pg_locks and ps for the day. At
each hang there were many processes in 'async_notify waiting' and an
ANALYZE job had the lock in shared mode.
I do not have minute-by-minute logs for more than today, but there were 3
hangs today, and only one with the concurrent update error.
It would be interesting if we could find a piece of backend code that did a
'select * from pg_listener', and hence locked it in ACCESS SHARE.
At the moment, it looks like either the ANALYZE is triggering an error that
causes its backend to read pg_listener, or it is dying while analyzing
pg_listener. The latter seems unlikely since it hangs frequently, and
pg_listener is empty.
Does ANALYZE roll back if it dies? Could this account for the delay?
On Mon, May 03, 2004 at 02:14:18PM +1000, Gavin Sherry wrote:
It is implemented using shared memory. I got stuck when I considered the
situation where we run out of shared memory. Some emails in the archive
suggested we just fire all listeners but I didn't like that.
Can this be kept in backend local memory and then sent to the other
backends at transaction commit? If you run out of local memory you can
just spill to disk. (With shared memory this seems pretty hard to do.)
I'm not sure how one would "send to the other backends." Maybe write
another file on disk, one for each remote backend? Surely this can be
done somehow. I've heard that on linux-2.6 they are implementing "POSIX
message queues" (not sure what those are anyway); maybe we can do that
on platforms that support it, for performance.
--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"In a specialized industrial society, it would be a disaster
to have kids running around loose." (Paul Graham)
Philip Warner <pjw@rhyme.com.au> writes:
At 11:04 PM 3/05/2004, Tom Lane wrote:
How confident are you in those "processes"? I don't know of any other
mechanism for 'tuple concurrently updated' failures in ANALYZE than
concurrent analyze runs ...
Fairly. In this particular instance the error was probably caused by a
manually run VACUUM (part of me stressing it to encourage the error).
Yeah, I see a process 14295 in your dump that seems to be trying to
ANALYZE (at least, it's got write lock on pg_statistic...). 8631 is the
incumbent ANALYZE, and it's got locks all over the place :-(
I think what we have here is an evil side-effect of the change a couple
versions back to allow standalone ANALYZE to run as a single
transaction. A database-wide ANALYZE will therefore accumulate
AccessShareLock on each table it touches, and it won't release these
locks until commit. So the scenario goes like this:
1. Somewhere relatively early in its run, ANALYZE processes
pg_statistic. So it's now holding AccessShareLock on pg_statistic.
2. As the ANALYZE proceeds, it issues sinval messages due to the updates
it's making in pg_statistic. This is normal. There will be (at least)
one such message per column analyzed, and it sounds like your database
has many columns. Plus of course other catalog updates could be
occurring in other backends.
3. There is at least one other backend sitting idle and therefore not
reading sinval messages. So eventually the sinval queue gets 70% full
and the SIGUSR2 escape-hatch is triggered.
4. The idle backends (and eventually non-idle ones, too, whenever they
next reach the idle loop) try to do the NOTIFY thing, and get blocked
trying to acquire AccessExclusiveLock on pg_listener. They will now be
stuck until the ANALYZE completes.
As a quick-hack fix, I think it would do to reduce the locks taken on
pg_listener in async.c from AccessExclusiveLock to ExclusiveLock. This
would serve the purpose of serializing async.c processing without
creating a conflict against ANALYZE's AccessShareLock.
Some other things we ought to think about for the future:
* Is it really a good idea for database-wide ANALYZE to run as a single
transaction? Holding all those locks is a recipe for deadlocks, even
if they're as inoffensive as AccessShareLocks normally are.
* Can we use something less heavyweight than ProcessIncomingNotify
to deal with the sinval-clearing problem? Not only is that routine
expensive, but it is a serialization bottleneck, which is exactly what
we *don't* want in something that all the backends are getting told to
do at the same time. I think the original motivation for that hack was
because we didn't have a spare signal number available to dedicate to
sinval response, but SIGUSR1 has been free for a couple releases now.
I'm very tempted to commandeer it for sinval.
regards, tom lane
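The heart of steps 1-4 above is simply that AccessShareLock, held until
transaction end, conflicts with AccessExclusiveLock. A small sketch that
reproduces the conflict on an ordinary table (the table name is made up;
pg_listener behaves the same way):

    -- Session 1: take and hold the lock mode a database-wide ANALYZE would hold.
    BEGIN;
    LOCK TABLE some_table IN ACCESS SHARE MODE;
    -- ... transaction left open, as during a long ANALYZE ...

    -- Session 2: the lock mode async.c's NOTIFY processing asks for.
    LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;   -- blocks here

    -- Session 1, later:
    COMMIT;   -- session 2's request is granted only now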
I'm not sure how one would "send to the other backends."
Maybe write another file on disk, one for each remote
backend? Surely this can be done somehow. I've heard that
on linux-2.6 they are implementing "POSIX message queues"
(not sure what those are anyway); maybe we can do that on
platforms that support it, for performance.
Dunno if this is relevant, but if you want to go with message queues,
there are also System V message queues. Since PostgreSQL already uses SysV
semaphores and shared memory, this would perhaps be portable to more of the
systems that pg supports today (though you'd still need some kind of
abstraction layer, since e.g. win32 does not have it).
(man msgsnd, msgrcv, msgctl, msgget)
//Magnus
On Mon, 3 May 2004, Alvaro Herrera wrote:
On Mon, May 03, 2004 at 02:14:18PM +1000, Gavin Sherry wrote:
It is implemented using shared memory. I got stuck when I considered the
situation where we run out of shared memory. Some emails in the archive
suggested we just fire all listeners but I didn't like that.
Can this be kept in backend local memory and then sent to the other
backends at transaction commit? If you run out of local memory you can
just spill to disk. (With shared memory this seems pretty hard to do.)
I'm not sure how one would "send to the other backends." Maybe write
another file on disk, one for each remote backend? Surely this can be
done somehow. I've heard that on linux-2.6 they are implementing "POSIX
message queues" (not sure what those are anyway); maybe we can do that
on platforms that support it, for performance.
What happens in the (unlikely) event that we never find space in shared
memory? That's the problem that I am currently trying to solve.
We currently just fire all the triggers but is that a great idea?
Particularly if we support the passing of a message with a notify.
Gavin
* Is it really a good idea for database-wide ANALYZE to run as a single
transaction? Holding all those locks is a recipe for deadlocks, even
if they're as inoffensive as AccessShareLocks normally are.
Wasn't one idea behind that change also to keep the planner from creating a
plan from mixed old and new statistics? I guess that could later be
accomplished with "begin work; analyze; commit work;" (with subtransactions)
though.
Andreas
"Zeugswetter Andreas SB SD" <ZeugswetterA@spardat.at> writes:
* Is it really a good idea for database-wide ANALYZE to run as a single
transaction? Holding all those locks is a recipe for deadlocks, even
if they're as inoffensive as AccessShareLocks normally are.
Wasn't one idea behind that change also to keep the planner from creating a
plan from mixed old and new statistics?
I don't recall that that was part of the discussion. IIRC all we were
after was to let someone invoke ANALYZE from inside a BEGIN/COMMIT block.
A possible compromise is to hack ANALYZE so that if it is invoked when
*not* within a BEGIN block, it runs a separate transaction for each
table. This seems pretty crufty but might satisfy all the requirements.
I guess that could later be accomplished with
"begin work; analyze; commit work;" (with subtransactions) though.
AFAICS, locks taken by a (committed) subtransaction can't be released
until top-level commit anyhow. Otherwise they fail to perform one of
the essential functions of locking in an MVCC environment: to delay
another process until the changes you've made are visible to him.
regards, tom lane
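A quick way to watch the lock retention being discussed, using a made-up
table name and the pg_locks columns shown earlier; the lock ANALYZE takes on
the table (AccessShareLock in the releases discussed here) stays visible
until the COMMIT:

    BEGIN;
    ANALYZE my_table;                        -- hypothetical table
    SELECT c.relname, l.mode, l.granted      -- ANALYZE's lock on my_table
      FROM pg_locks l                        -- is still listed at this point
      JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname = 'my_table';
    COMMIT;                                  -- ... and is released only here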
I wrote:
2. As the ANALYZE proceeds, it issues sinval messages due to the updates
it's making in pg_statistic. This is normal.
Small correction: actually, backends only send sinval messages at
commit, so the ANALYZE will just be accumulating pending messages in its
private memory. Your observed symptom therefore can only occur if other
transactions running parallel to the ANALYZE perform sufficient catalog
updating activity to fill the sinval message queue. And there must also
be at least one long-term-idle backend, so that the queue doesn't get
drained.
I had been wondering why we'd not identified this problem before, but
that combination of factors is probably unusual enough to explain why
not.
regards, tom lane
At 01:30 AM 4/05/2004, Tom Lane wrote:
can only occur if other
transactions running parallel to the ANALYZE perform sufficient catalog
updating activity to fill the sinval message queue. And there must also
be at least one long-term-idle backend, so that the queue doesn't get
drained.
Sounds quite likely; it usually seems to occur at 'shoulder' load times: lots
of updates still happening (several each second) and a server process pool
that is larger than necessary to handle the load.
I'll replace all:
heap_openr(ListenerRelationName, AccessExclusiveLock);
with
heap_openr(ListenerRelationName, ExclusiveLock);
and see how it goes.
Thanks for the help.