Deadlock with pg_dump?

Started by Chris Campbell about 19 years ago (44 messages)
#1 Chris Campbell
chris@bignerdranch.com
1 attachment(s)

We're getting deadlock error messages in the production database logs
during times of inactivity, where the only other thing using the
database (we think) is the every-15-minutes pg_dump process. There
are still database connections up-and-running from unused Hibernate
Java processes, but they Shouldn't Be doing anything (and shouldn't
be holding locks, etc).

The deadlock error message looks like this:

ERROR: deadlock detected
DETAIL: Process 1120 waits for ShareLock on transaction 5847116;
blocked by process 1171.
Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation
37637 of database 37574; blocked by process 1120.

Relation 37636 is the users table (schema attached).

Process 1120 was running an UPDATE query and changing a single row in
the users table. The users table does have foreign keys to 4 other
tables. Is it possible that those foreign key constraints acquire
locks in a different order than pg_dump (as it's SELECTing from the
tables), and it's hitting at *just* the right time to cause a deadlock?
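
(For context on the foreign-key angle: the 8.1 RI triggers take shared
row locks on the referenced rows, using a check query roughly like the
sketch below -- the table name here is made up -- so two transactions
whose updates and FK checks touch the same referenced rows in different
orders can deadlock each other even with pg_dump out of the picture.)

-- hypothetical referenced table, just to make the sketch concrete
CREATE TABLE parent_table (id int PRIMARY KEY);
INSERT INTO parent_table VALUES (42);

-- roughly the row lock an 8.1 RI check takes when a referencing row's
-- FK value becomes 42
SELECT 1 FROM ONLY parent_table x WHERE x.id = 42 FOR SHARE OF x;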

I've tried to reproduce it on a test machine by running pgbench
(after adding foreign keys to the pgbench tables) and pg_dump in
tight loops in two concurrent shell scripts, but no deadlock.

Any ideas on how to track this down?

Under what conditions does a process acquire a ShareLock on another
transaction?

Thanks!

- Chris

Attachments:

schema.txt (text/plain; x-unix-mode=0777)
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Chris Campbell (#1)
Re: Deadlock with pg_dump?

Chris Campbell <chris@bignerdranch.com> writes:

ERROR: deadlock detected
DETAIL: Process 1120 waits for ShareLock on transaction 5847116;
blocked by process 1171.
Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation
37637 of database 37574; blocked by process 1120.

Relation 37636 is the users table (schema attached).

Process 1120 was running an UPDATE query and changing a single row in
the users table.

And what was 1171 doing? I really doubt that either of these could have
been pg_dump.

Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either. I'm betting these
are just flat out conflicting updates of the same row(s).

regards, tom lane

#3 Chris Campbell
chris@bignerdranch.com
In reply to: Tom Lane (#2)
Re: Deadlock with pg_dump?

On Oct 26, 2006, at 17:21, Tom Lane wrote:

And what was 1171 doing? I really doubt that either of these could
have
been pg_dump.

I know that process 1120 is a Java client (Hibernate) running an
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread
throughout the day. That's not definitive evidence, of course, but
it's certainly curious.

Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either. I'm betting these
are just flat out conflicting updates of the same row(s).

Yeah, 8.1.3. Sorry about the omission.

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

Thanks!

- Chris

#4 Jim C. Nasby
jim@nasby.net
In reply to: Chris Campbell (#3)
Re: Deadlock with pg_dump?

On Thu, Oct 26, 2006 at 06:11:59PM -0400, Chris Campbell wrote:

On Oct 26, 2006, at 17:21, Tom Lane wrote:

And what was 1171 doing? I really doubt that either of these could
have
been pg_dump.

I know that process 1120 is a Java client (Hibernate) running an
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread
throughout the day. That's not definitive evidence, of course, but
it's certainly curious.

I seem to remember something funny about hibernate and locking, though I
can't recall any details right now... but searching the archives might
provide insight.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Chris Campbell (#3)
Re: Deadlock with pg_dump?

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.
Still, if you're seeing multiple occurrences per day, that would
probably let you build up a good picture of all the involved operations
over a couple of days.
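
(A minimal way to try that, assuming a superuser session; to catch the
deadlocking backends cluster-wide you would set it in postgresql.conf
and reload instead:)

SHOW log_min_error_statement;
SET log_min_error_statement = error;  -- affects this session only;
                                      -- postgresql.conf + reload covers
                                      -- all backends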

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

Unless your applications are issuing actual LOCK TABLE commands, it's
really hard to see how pg_dump could be involved. It doesn't take
anything stronger than AccessShareLock, and shouldn't be holding any
tuple-level locks at all.
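
(A rough way to sanity-check that while a dump is running is to list
the backends holding anything stronger than AccessShareLock; every
backend, pg_dump's included, is also expected to hold an ExclusiveLock
on its own transaction ID, which is harmless:)

-- backends holding locks stronger than AccessShareLock; a pg_dump
-- backend should only appear here for the lock on its own transaction ID
SELECT pid, locktype, mode, relation::regclass AS rel, granted
FROM pg_locks
WHERE mode <> 'AccessShareLock'
ORDER BY pid;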

regards, tom lane

#6 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Tom Lane (#5)
Re: Deadlock with pg_dump?

Tom Lane wrote:

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

I think it defaults to panic because it's the way to disable the feature,
which was the easiest sell when the feature was introduced. I'm all for
lowering it to error.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

#7 Chris Campbell
chris@bignerdranch.com
In reply to: Tom Lane (#5)
Re: Deadlock with pg_dump?

On Oct 26, 2006, at 18:45, Tom Lane wrote:

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Would it be possible (in 8.3, say) to log the conflicting backend's
current statement (from pg_stat_activity, perhaps)? I guess the
conflicting backend would currently be waiting for a lock, so its
current query (before releasing the lock) is the one we want.
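
(While a wait is still in progress you can get most of that from SQL
today -- a rough self-join of pg_locks onto the granted holder of the
same lock, joined to pg_stat_activity for the blocker's query. This
assumes 8.1's procpid/current_query column names and
stats_command_string = on, and it only helps for waits caught in the
act, since a detected deadlock resolves itself immediately:)

-- for each ungranted lock request, show who holds the same lock and
-- what that blocker is currently running
SELECT waiter.pid            AS waiting_pid,
       holder.pid            AS blocking_pid,
       holder.mode           AS held_mode,
       blocker.current_query AS blocking_query
FROM pg_locks waiter
JOIN pg_locks holder
  ON  holder.granted
  AND NOT waiter.granted
  AND holder.pid <> waiter.pid
  AND NOT (holder.locktype      IS DISTINCT FROM waiter.locktype)
  AND NOT (holder.database      IS DISTINCT FROM waiter.database)
  AND NOT (holder.relation      IS DISTINCT FROM waiter.relation)
  AND NOT (holder.page          IS DISTINCT FROM waiter.page)
  AND NOT (holder.tuple         IS DISTINCT FROM waiter.tuple)
  AND NOT (holder.transactionid IS DISTINCT FROM waiter.transactionid)
  AND NOT (holder.classid       IS DISTINCT FROM waiter.classid)
  AND NOT (holder.objid         IS DISTINCT FROM waiter.objid)
  AND NOT (holder.objsubid      IS DISTINCT FROM waiter.objsubid)
JOIN pg_stat_activity blocker ON blocker.procpid = holder.pid;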

Thanks!

- Chris

#8 Albe Laurenz
all@adv.magwien.gv.at
In reply to: Chris Campbell (#7)
Re: Deadlock with pg_dump?

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

Yours,
Laurenz Albe

#9 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Albe Laurenz (#8)
Re: Deadlock with pg_dump?

On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

And it would hurt applications where query failure is taken as a
valid path (such as inserting first and updating if that fails)...

Cheers,
Csaba.

#10 Andrew Dunstan
andrew@dunslane.net
In reply to: Csaba Nagy (#9)
Re: Deadlock with pg_dump?

Csaba Nagy wrote:

On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

And it would hurt applications where query failure is taken as a
valid path (such as inserting first and updating if that fails)...

Both of these are arguments in favor of being able to alter the level,
which would still exist. They are not good arguments for not having error
as the default level.

cheers

andrew

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Albe Laurenz (#8)
Re: Deadlock with pg_dump?

"Albe Laurenz" <all@adv.magwien.gv.at> writes:

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

Certainly there are people who will turn it off, but that's why it's
configurable. I've had to answer "how do I find out what's causing
error message FOO" often enough that I'm starting to think logging error
statements is a more useful default than not logging 'em ...

regards, tom lane

#12 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#5)
Re: Deadlock with pg_dump?

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

For production systems where we expect fewer ERRORs, each one is
important, so this would be a good default.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Chris Campbell (#3)
Re: Deadlock with pg_dump?

Chris Campbell <chris@bignerdranch.com> writes:

On Oct 26, 2006, at 17:21, Tom Lane wrote:

And what was 1171 doing? I really doubt that either of these could
have been pg_dump.

I know that process 1120 is a Java client (Hibernate) running an
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread
throughout the day. That's not definitive evidence, of course, but
it's certainly curious.

FWIW, we recently found a deadlock risk that's likely to be triggered in
situations involving multiple updates and foreign keys. I still don't
see any connection to pg_dump, but otherwise it seems like it might be
related to your problem. Try this patch:
http://archives.postgresql.org/pgsql-committers/2006-11/msg00121.php
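
(For anyone trying to reproduce something in this family, a generic
two-session sketch of how row updates plus foreign-key share locks can
deadlock on 8.1 -- hypothetical tables, and not necessarily the precise
case that commit fixes:)

-- hypothetical setup
CREATE TABLE parent (id int PRIMARY KEY, note text);
CREATE TABLE child  (id int PRIMARY KEY, parent_id int REFERENCES parent);
INSERT INTO parent VALUES (1, '');
INSERT INTO parent VALUES (2, '');
INSERT INTO child VALUES (1, 2);
INSERT INTO child VALUES (2, 1);

-- session A
BEGIN;
UPDATE child SET parent_id = 1 WHERE id = 1;  -- RI check share-locks parent 1

-- session B
BEGIN;
UPDATE child SET parent_id = 2 WHERE id = 2;  -- RI check share-locks parent 2

-- session A
UPDATE parent SET note = 'a' WHERE id = 2;    -- blocks on B's share lock

-- session B
UPDATE parent SET note = 'b' WHERE id = 1;    -- blocks on A's share lock;
                                              -- deadlock detected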

regards, tom lane

#14 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#12)
Re: Deadlock with pg_dump?

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#15 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#11)
Re: Deadlock with pg_dump?

Tom Lane wrote:

"Albe Laurenz" <all@adv.magwien.gv.at> writes:

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

Certainly there are people who will turn it off, but that's why it's
configurable. I've had to answer "how do I find out what's causing
error message FOO" often enough that I'm starting to think logging error
statements is a more useful default than not logging 'em ...

Are we ready to set 'log_min_error_statement = error' by default for
8.3?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#15)
Re: Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

Are we ready to set 'log_min_error_statement = error' by default for
8.3?

We already did that in 8.2.

regards, tom lane

#17 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#16)
Re: Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Are we ready to set 'log_min_error_statement = error' by default for
8.3?

We already did that in 8.2.

Oh, interesting. Oops again.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#18 Simon Riggs
simon@2ndquadrant.com
In reply to: Bruce Momjian (#14)
Re: Deadlock with pg_dump?

On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits.

When I wrote that it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.
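
(Until something like that exists, a crude approximation is to poll for
ungranted lock requests and see how long the waiting statement has been
running -- statement age is only an upper bound on the wait, and this
again assumes 8.1's procpid/current_query names:)

-- backends currently waiting on a lock, longest-running statement first
SELECT w.pid,
       w.locktype,
       w.mode,
       now() - a.query_start AS statement_age,
       a.current_query
FROM pg_locks w
JOIN pg_stat_activity a ON a.procpid = w.pid
WHERE NOT w.granted
ORDER BY statement_age DESC;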

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#19 Simon Riggs
simon@2ndquadrant.com
In reply to: Simon Riggs (#18)
1 attachment(s)
Re: [HACKERS] Deadlock with pg_dump?

On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:

On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits.

When I wrote that it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

Patch for discussion; it includes doc entries at the top of the patch,
so it's fairly clear how it works.

Output is an INFO message, so that it triggers log_min_error_statement
when the message is generated and we can see the SQL statement that is
waiting. This allows a message to be generated before the statement
completes, which is important because in some cases it may never
complete, so simply logging a list of pids won't always tell you what
the waiting SQL was.

Other approaches are possible...

Comments?

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

Attachments:

log_lock_waits.v1.patch (text/x-patch; charset=UTF-8)
Index: doc/src/sgml/config.sgml
===================================================================
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.108
diff -c -r1.108 config.sgml
*** doc/src/sgml/config.sgml	1 Feb 2007 00:28:16 -0000	1.108
--- doc/src/sgml/config.sgml	6 Feb 2007 12:31:49 -0000
***************
*** 2936,2941 ****
--- 2936,2965 ----
        </listitem>
       </varlistentry>
  
+      <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits">
+       <term><varname>log_lock_waits</varname> (<type>boolean</type>)</term>
+       <indexterm>
+        <primary><varname>log_lock_waits</> configuration parameter</primary>
+       </indexterm>
+       <listitem>
+        <para>
+ 		Controls whether log messages are produced when a statement is forced
+ 		to wait when trying to acquire locks on database objects. The threshold
+ 		time is the value of the <xref linkend="guc-deadlock-timeout"> parameter.
+ 		The log messages generated are intended for use during specific 
+ 		investigations into application performance issues and subsequent tuning.
+ 		It is designed for use in conjunction with <varname>log_min_error_statement</>.
+ 		Log messages indicating long lock waits might indicate problems with
+ 		applications accessing the database or possibly disconnection issues. 
+ 		If no such problems exist, it might indicate that <varname>deadlock_timeout</>
+ 		could be set higher. Log messages might also indicate that certain
+ 		deadlocks have been avoided. In those cases, decreasing the value of 
+ 		<varname>deadlock_timeout</> might resolve lock wait situations faster,
+ 		thereby reducing contention. By default, this form of logging is <literal>off</>.
+        </para>
+       </listitem>
+      </varlistentry>
+ 
       </variablelist>
      </sect2>
     </sect1>
Index: src/backend/storage/lmgr/deadlock.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v
retrieving revision 1.44
diff -c -r1.44 deadlock.c
*** src/backend/storage/lmgr/deadlock.c	5 Jan 2007 22:19:38 -0000	1.44
--- src/backend/storage/lmgr/deadlock.c	6 Feb 2007 12:31:55 -0000
***************
*** 187,199 ****
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! bool
  DeadLockCheck(PGPROC *proc)
  {
  	int			i,
--- 187,200 ----
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables,
!  * so standard error logging/reporting code is handled by caller.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! DeadlockState
  DeadLockCheck(PGPROC *proc)
  {
  	int			i,
***************
*** 204,209 ****
--- 205,215 ----
  	nPossibleConstraints = 0;
  	nWaitOrders = 0;
  
+ #ifdef LOCK_DEBUG
+ 	if (Debug_deadlocks)
+ 		DumpAllLocks();
+ #endif
+ 
  	/* Search for deadlocks and possible fixes */
  	if (DeadLockCheckRecurse(proc))
  	{
***************
*** 217,223 ****
  		if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
  			elog(FATAL, "deadlock seems to have disappeared");
  
! 		return true;			/* cannot find a non-deadlocked state */
  	}
  
  	/* Apply any needed rearrangements of wait queues */
--- 223,229 ----
  		if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
  			elog(FATAL, "deadlock seems to have disappeared");
  
! 		return DS_HARD_DEADLOCK;	/* cannot find a non-deadlocked state */
  	}
  
  	/* Apply any needed rearrangements of wait queues */
***************
*** 249,255 ****
  		/* See if any waiters for the lock can be woken up now */
  		ProcLockWakeup(GetLocksMethodTable(lock), lock);
  	}
! 	return false;
  }
  
  /*
--- 255,265 ----
  		/* See if any waiters for the lock can be woken up now */
  		ProcLockWakeup(GetLocksMethodTable(lock), lock);
  	}
! 
! 	if (nWaitOrders > 0)
! 		return DS_SOFT_DEADLOCK;
! 	else
! 		return DS_DEADLOCK_NOT_FOUND;
  }
  
  /*
***************
*** 896,902 ****
  }
  
  /*
!  * Report a detected deadlock, with available details.
   */
  void
  DeadLockReport(void)
--- 906,912 ----
  }
  
  /*
!  * Report a detected DS_HARD_DEADLOCK, with available details.
   */
  void
  DeadLockReport(void)
Index: src/backend/storage/lmgr/proc.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v
retrieving revision 1.183
diff -c -r1.183 proc.c
*** src/backend/storage/lmgr/proc.c	16 Jan 2007 13:28:56 -0000	1.183
--- src/backend/storage/lmgr/proc.c	6 Feb 2007 12:31:55 -0000
***************
*** 48,53 ****
--- 48,54 ----
  /* GUC variables */
  int			DeadlockTimeout = 1000;
  int			StatementTimeout = 0;
+ bool		log_lock_waits = false;
  
  /* Pointer to this process's PGPROC struct, if any */
  PGPROC	   *MyProc = NULL;
***************
*** 979,984 ****
--- 980,986 ----
  CheckDeadLock(void)
  {
  	int			i;
+ 	DeadlockState deadlock_state = DS_DEADLOCK_NOT_FOUND;
  
  	/*
  	 * Acquire exclusive lock on the entire shared lock data structures. Must
***************
*** 1004,1063 ****
  	 * This is quicker than checking our semaphore's state, since no kernel
  	 * call is needed, and it is safe because we hold the lock partition lock.
  	 */
! 	if (MyProc->links.prev == INVALID_OFFSET ||
! 		MyProc->links.next == INVALID_OFFSET)
! 		goto check_done;
! 
! #ifdef LOCK_DEBUG
! 	if (Debug_deadlocks)
! 		DumpAllLocks();
! #endif
! 
! 	if (!DeadLockCheck(MyProc))
  	{
! 		/* No deadlock, so keep waiting */
! 		goto check_done;
! 	}
! 
! 	/*
! 	 * Oops.  We have a deadlock.
! 	 *
! 	 * Get this process out of wait state.	(Note: we could do this more
! 	 * efficiently by relying on lockAwaited, but use this coding to preserve
! 	 * the flexibility to kill some other transaction than the one detecting
! 	 * the deadlock.)
! 	 *
! 	 * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
! 	 * ProcSleep will report an error after we return from the signal handler.
! 	 */
! 	Assert(MyProc->waitLock != NULL);
! 	RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));
  
! 	/*
! 	 * Unlock my semaphore so that the interrupted ProcSleep() call can
! 	 * finish.
! 	 */
! 	PGSemaphoreUnlock(&MyProc->sem);
  
! 	/*
! 	 * We're done here.  Transaction abort caused by the error that ProcSleep
! 	 * will raise will cause any other locks we hold to be released, thus
! 	 * allowing other processes to wake up; we don't need to do that here.
! 	 * NOTE: an exception is that releasing locks we hold doesn't consider the
! 	 * possibility of waiters that were blocked behind us on the lock we just
! 	 * failed to get, and might now be wakable because we're not in front of
! 	 * them anymore.  However, RemoveFromWaitQueue took care of waking up any
! 	 * such processes.
! 	 */
  
  	/*
  	 * Release locks acquired at head of routine.  Order is not critical, so
  	 * do it back-to-front to avoid waking another CheckDeadLock instance
  	 * before it can get all the locks.
  	 */
- check_done:
  	for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
  		LWLockRelease(FirstLockMgrLock + i);
  }
  
  
--- 1006,1082 ----
  	 * This is quicker than checking our semaphore's state, since no kernel
  	 * call is needed, and it is safe because we hold the lock partition lock.
  	 */
! 	if (MyProc->links.prev != INVALID_OFFSET &&
! 		MyProc->links.next != INVALID_OFFSET)
! 		deadlock_state = DeadLockCheck(MyProc);
! 	
! 	if (deadlock_state == DS_HARD_DEADLOCK)
  	{
! 		/*
! 		 * Oops.  We have a deadlock.
! 		 *
! 		 * Get this process out of wait state.	(Note: we could do this more
! 		 * efficiently by relying on lockAwaited, but use this coding to preserve
! 		 * the flexibility to kill some other transaction than the one detecting
! 		 * the deadlock.)
! 		 *
! 		 * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
! 		 * ProcSleep will report an error after we return from the signal handler.
! 		 */
! 		Assert(MyProc->waitLock != NULL);
! 		RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));
  
! 		/*
! 		 * Unlock my semaphore so that the interrupted ProcSleep() call can
! 		 * finish.
! 		 */
! 		PGSemaphoreUnlock(&MyProc->sem);
  
! 		/*
! 		 * We're done here.  Transaction abort caused by the error that ProcSleep
! 		 * will raise will cause any other locks we hold to be released, thus
! 		 * allowing other processes to wake up; we don't need to do that here.
! 		 * NOTE: an exception is that releasing locks we hold doesn't consider the
! 		 * possibility of waiters that were blocked behind us on the lock we just
! 		 * failed to get, and might now be wakable because we're not in front of
! 		 * them anymore.  However, RemoveFromWaitQueue took care of waking up any
! 		 * such processes.
! 		 */
! 	}
  
  	/*
  	 * Release locks acquired at head of routine.  Order is not critical, so
  	 * do it back-to-front to avoid waking another CheckDeadLock instance
  	 * before it can get all the locks.
  	 */
  	for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
  		LWLockRelease(FirstLockMgrLock + i);
+ 
+ 	/*
+ 	 * Issue any log messages requested.
+ 	 *
+ 	 * Deadlock ERROR messages are issued as part of transaction abort, so 
+ 	 * these messages should not raise error states intentionally.
+ 	 */
+ 	if (log_lock_waits)
+ 	{
+ 		switch (deadlock_state)
+ 		{
+ 			case DS_SOFT_DEADLOCK:
+ 				ereport(INFO,
+ 					(errmsg("deadlock avoided by rearrangement of lock order")));
+ 				break;
+ 			case DS_DEADLOCK_NOT_FOUND:
+ 				ereport(INFO,
+ 					(errmsg("statement waiting for lock for at least %d ms",
+ 								DeadlockTimeout)));
+ 				break;
+ 			case DS_HARD_DEADLOCK:
+ 				break;	/* ERROR message handled during abort */
+ 			default:
+ 				break;
+ 		}
+ 	}
  }
  
  
Index: src/backend/utils/misc/guc.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/misc/guc.c,v
retrieving revision 1.372
diff -c -r1.372 guc.c
*** src/backend/utils/misc/guc.c	1 Feb 2007 19:10:28 -0000	1.372
--- src/backend/utils/misc/guc.c	6 Feb 2007 12:32:01 -0000
***************
*** 807,812 ****
--- 807,820 ----
  #endif
  
  	{
+ 		{"log_lock_waits", PGC_SIGHUP, LOGGING_WHAT,
+ 			gettext_noop("Logs long lock wait events."),
+ 			NULL
+ 		},
+ 		&log_lock_waits,
+ 		false, NULL, NULL
+ 	},
+ 	{
  		{"log_hostname", PGC_SIGHUP, LOGGING_WHAT,
  			gettext_noop("Logs the host name in the connection logs."),
  			gettext_noop("By default, connection logs only show the IP address "
Index: src/backend/utils/misc/postgresql.conf.sample
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v
retrieving revision 1.207
diff -c -r1.207 postgresql.conf.sample
*** src/backend/utils/misc/postgresql.conf.sample	25 Jan 2007 15:05:15 -0000	1.207
--- src/backend/utils/misc/postgresql.conf.sample	6 Feb 2007 12:32:01 -0000
***************
*** 335,341 ****
  					# e.g. '<%u%%%d> '
  #log_statement = 'none'			# none, ddl, mod, all
  #log_hostname = off
! 
  #log_temp_files = -1			# Log temporary files equal or larger
  					# than the specified number of kilobytes.
  					# -1 disables;  0 logs all temp files
--- 335,341 ----
  					# e.g. '<%u%%%d> '
  #log_statement = 'none'			# none, ddl, mod, all
  #log_hostname = off
! #log_lock_waits = off			# Log lock waits longer than deadlock_timeout
  #log_temp_files = -1			# Log temporary files equal or larger
  					# than the specified number of kilobytes.
  					# -1 disables;  0 logs all temp files
Index: src/include/storage/lock.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/include/storage/lock.h,v
retrieving revision 1.103
diff -c -r1.103 lock.h
*** src/include/storage/lock.h	5 Jan 2007 22:19:58 -0000	1.103
--- src/include/storage/lock.h	6 Feb 2007 12:32:07 -0000
***************
*** 394,399 ****
--- 394,406 ----
  	LOCKACQUIRE_ALREADY_HELD	/* incremented count for lock already held */
  } LockAcquireResult;
  
+ /* Deadlock states identified by DeadlockCheck() */
+ typedef enum
+ {
+ 	DS_DEADLOCK_NOT_FOUND,		/* no deadlock found within database server */
+ 	DS_SOFT_DEADLOCK,			/* deadlock, but lock queues rearrangeable */
+ 	DS_HARD_DEADLOCK			/* deadlock, no way out but ERROR */
+ } DeadlockState;
  
  /*
   * The lockmgr's shared hash tables are partitioned to reduce contention.
***************
*** 442,448 ****
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
  						void *recdata, uint32 len);
  
! extern bool DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
  					   LOCKMODE lockmode,
--- 449,455 ----
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
  						void *recdata, uint32 len);
  
! extern DeadlockState DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
  					   LOCKMODE lockmode,
Index: src/include/storage/proc.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/include/storage/proc.h,v
retrieving revision 1.93
diff -c -r1.93 proc.h
*** src/include/storage/proc.h	16 Jan 2007 13:28:57 -0000	1.93
--- src/include/storage/proc.h	6 Feb 2007 12:32:07 -0000
***************
*** 127,132 ****
--- 127,133 ----
  /* configurable options */
  extern int	DeadlockTimeout;
  extern int	StatementTimeout;
+ extern bool	log_lock_waits;
  
  extern volatile bool cancel_from_timeout;
  
#20 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#19)
Re: [HACKERS] Deadlock with pg_dump?

I am a little concerned about a log_* setting that is INFO. I understand
why you used INFO (for log_min_error_messages), but INFO is inconsistent
with the log* prefix, and by default INFO doesn't appear in the log
file.

So, by default, the INFO is going to go to the user terminal, and not to
the logfile.

Ideas?

---------------------------------------------------------------------------

Simon Riggs wrote:

On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:

On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits.

When I wrote that it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

Patch for discussion; it includes doc entries at the top of the patch,
so it's fairly clear how it works.

Output is an INFO message, so that it triggers log_min_error_statement
when the message is generated and we can see the SQL statement that is
waiting. This allows a message to be generated before the statement
completes, which is important because in some cases it may never
complete, so simply logging a list of pids won't always tell you what
the waiting SQL was.

Other approaches are possible...

Comments?

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

[ Attachment, skipping... ]

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#21 Simon Riggs
simon@2ndquadrant.com
In reply to: Bruce Momjian (#20)
Re: [HACKERS] Deadlock with pg_dump?

On Mon, 2007-02-26 at 13:34 -0500, Bruce Momjian wrote:

I am a little concerned about a log_* setting that is INFO. I understand
why you used INFO (for log_min_error_messages), but INFO is inconsistent
with the log* prefix, and by default INFO doesn't appear in the log
file.

Yeh, LOG would be most appropriate, but that's not possible.

log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
INFO, NOTICE and WARNING for non-error states.

Possibly DEBUG1?

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#22 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#21)
Re: [HACKERS] Deadlock with pg_dump?

"Simon Riggs" <simon@2ndquadrant.com> writes:

Yeh, LOG would be most appropriate, but that's not possible.

You have not given any good reason for that.

log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
INFO, NOTICE and WARNING for non-error states.

I don't think you understand quite how the log message priority works...

regards, tom lane

#23 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#22)
Re: [HACKERS] Deadlock with pg_dump?

On Mon, 2007-02-26 at 14:11 -0500, Tom Lane wrote:

"Simon Riggs" <simon@2ndquadrant.com> writes:

Yeh, LOG would be most appropriate, but that's not possible.

You have not given any good reason for that.

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

The reason for behaving like this is so that a message is generated
while the statement is still waiting, rather than at the end. As I
mentioned in the submission, you may not like that behaviour; I'm in two
minds myself, but I'm trying to get to the stage of having useful
information come out of the server when we have long lock waits.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#24 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#23)
Re: [HACKERS] Deadlock with pg_dump?

"Simon Riggs" <simon@2ndquadrant.com> writes:

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

regards, tom lane

#25 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#24)
Re: [HACKERS] Deadlock with pg_dump?

On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:

"Simon Riggs" <simon@2ndquadrant.com> writes:

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

Please look at the definition of log_min_error_statement, so you
understand where I'm coming from.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#26 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#25)
Re: [HACKERS] Deadlock with pg_dump?

"Simon Riggs" <simon@2ndquadrant.com> writes:

On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:

"Simon Riggs" <simon@2ndquadrant.com> writes:

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

Please look at the definition of log_min_error_statement, so you
understand where I'm coming from.

I *have* read the definition of log_min_error_statement. (The SGML docs
are wrong btw, as a quick look at the code shows that LOG is an accepted
value.)

The real issue here is that send_message_to_server_log just does

if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

to determine whether to log the statement, whereas arguably it should be
using a test like is_log_level_output --- that is, the priority ordering
for log_min_error_statement should be like log_min_messages not like
client_min_messages. We've discussed that before in another thread, but
it looks like nothing's been done yet. In any case, if you're unhappy
with the code's choice of whether to emit the STATEMENT part of a log
message, some changes here are what's indicated, not bizarre choices of
elevel for individual messages.

regards, tom lane

#27 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#26)
Re: [HACKERS] Deadlock with pg_dump?

On Mon, 2007-02-26 at 14:52 -0500, Tom Lane wrote:

"Simon Riggs" <simon@2ndquadrant.com> writes:

On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:

"Simon Riggs" <simon@2ndquadrant.com> writes:

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

Please look at the definition of log_min_error_statement, so you
understand where I'm coming from.

I *have* read the definition of log_min_error_statement. (The SGML docs
are wrong btw, as a quick look at the code shows that LOG is an accepted
value.)

OK, I should have looked past the manual.

The real issue here is that send_message_to_server_log just does

if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

to determine whether to log the statement, whereas arguably it should be
using a test like is_log_level_output --- that is, the priority ordering
for log_min_error_statement should be like log_min_messages not like
client_min_messages. We've discussed that before in another thread, but
it looks like nothing's been done yet.

Hopefully not with me? Don't remember that.

In any case, if you're unhappy
with the code's choice of whether to emit the STATEMENT part of a log
message, some changes here are what's indicated, not bizarre choices of
elevel for individual messages.

Well, I would have chosen LOG if I thought it was available.

I'll do some more to the patch.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#28 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#21)
Re: [HACKERS] Deadlock with pg_dump?

Simon Riggs wrote:

On Mon, 2007-02-26 at 13:34 -0500, Bruce Momjian wrote:

I am a little concerned about a log_* setting that is INFO. I understand
why you used INFO (for log_min_error_messages), but INFO is inconsistent
with the log* prefix, and by default INFO doesn't appear in the log
file.

Yeh, LOG would be most appropriate, but that's not possible.

log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
INFO, NOTICE and WARNING for non-error states.

Possibly DEBUG1?

This highlights a problem we have often had with LOG output where we
also want the query.

I think there are two possible approaches. First, we could add a new
bitmap value like LOG_STATEMENT to ereport when we want the statement
with the log line:

ereport (LOG | LOG_STATEMENT, ...)

(or a new LOG_WITH_STATEMENT log level) and a new GUC like
"log_include_statement" that would control the output of statements for
certain GUC parameters, and we document which GUC values it controls.

A simpler idea would be to unconditionally include the query in the
errdetail() of the actual LOG ereport.

This is not the first GUC that has needed this. We had this issue with
"log_temp_files", which we just added, and the only suggested solution
was to use log_statement = 'all'. Either of these ideas above would be
useful for this as well.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#29 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#28)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

This is not the first GUC that has needed this.

Exactly. I think that we simply made a mistake in the initial
implementation of log_min_error_statement: we failed to think about
whether it should use client or server priority ordering, and the
easy-to-code behavior was the wrong one.

regards, tom lane

#30 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#29)
1 attachment(s)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

This is not the first GUC that has needed this.

Exactly. I think that we simply made a mistake in the initial
implementation of log_min_error_statement: we failed to think about
whether it should use client or server priority ordering, and the
easy-to-code behavior was the wrong one.

I have coded up the following patch which places LOG just above ERROR in
log_min_error_statement.

It also adds a new log level, LOG_NO_STATEMENT, which is for internal
use only, by ereport() calls that should not print the statement.
This is particularly useful so log_statement and
log_min_duration_statement do not print the statement twice.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

Attachments:

/pgpatches/log_min (text/x-diff)
Index: doc/src/sgml/config.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.112
diff -c -c -r1.112 config.sgml
*** doc/src/sgml/config.sgml	16 Feb 2007 16:37:29 -0000	1.112
--- doc/src/sgml/config.sgml	1 Mar 2007 01:56:33 -0000
***************
*** 2507,2518 ****
          <literal>DEBUG2</literal>, <literal>DEBUG1</literal>,
          <literal>INFO</literal>, <literal>NOTICE</literal>,
          <literal>WARNING</literal>, <literal>ERROR</literal>,
!         <literal>FATAL</literal>, and <literal>PANIC</literal>.
!         The default is <literal>ERROR</literal>, which means statements
!         causing errors, fatal errors, or panics will be logged.
!         To effectively turn off logging of failing statements,
!         set this parameter to <literal>PANIC</literal>.
!         Only superusers can change this setting.
         </para>
        </listitem>
       </varlistentry>
--- 2507,2518 ----
          <literal>DEBUG2</literal>, <literal>DEBUG1</literal>,
          <literal>INFO</literal>, <literal>NOTICE</literal>,
          <literal>WARNING</literal>, <literal>ERROR</literal>,
!         <literal>LOG</literal>, <literal>FATAL</literal>, and
!         <literal>PANIC</literal>.  The default is <literal>ERROR</literal>,
!         which means statements causing errors, log messages, fatal errors,
!         or panics will be logged.  To turn off logging of statements for
!         all levels, set this parameter to <literal>PANIC</literal>.  Only
!         superusers can change this setting.
         </para>
        </listitem>
       </varlistentry>
Index: src/backend/tcop/fastpath.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/tcop/fastpath.c,v
retrieving revision 1.95
diff -c -c -r1.95 fastpath.c
*** src/backend/tcop/fastpath.c	5 Jan 2007 22:19:39 -0000	1.95
--- src/backend/tcop/fastpath.c	1 Mar 2007 01:56:34 -0000
***************
*** 328,334 ****
  	/* Log as soon as we have the function OID and name */
  	if (log_statement == LOGSTMT_ALL)
  	{
! 		ereport(LOG,
  				(errmsg("fastpath function call: \"%s\" (OID %u)",
  						fip->fname, fid)));
  		was_logged = true;
--- 328,334 ----
  	/* Log as soon as we have the function OID and name */
  	if (log_statement == LOGSTMT_ALL)
  	{
! 		ereport(LOG_NO_STATEMENT,
  				(errmsg("fastpath function call: \"%s\" (OID %u)",
  						fip->fname, fid)));
  		was_logged = true;
Index: src/backend/tcop/postgres.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/tcop/postgres.c,v
retrieving revision 1.525
diff -c -c -r1.525 postgres.c
*** src/backend/tcop/postgres.c	20 Feb 2007 17:32:16 -0000	1.525
--- src/backend/tcop/postgres.c	1 Mar 2007 01:56:35 -0000
***************
*** 821,827 ****
  	/* Log immediately if dictated by log_statement */
  	if (check_log_statement(parsetree_list))
  	{
! 		ereport(LOG,
  				(errmsg("statement: %s", query_string),
  				 errdetail_execute(parsetree_list)));
  		was_logged = true;
--- 821,827 ----
  	/* Log immediately if dictated by log_statement */
  	if (check_log_statement(parsetree_list))
  	{
! 		ereport(LOG_NO_STATEMENT,
  				(errmsg("statement: %s", query_string),
  				 errdetail_execute(parsetree_list)));
  		was_logged = true;
***************
*** 1023,1029 ****
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG,
  					(errmsg("duration: %s ms  statement: %s",
  							msec_str, query_string),
  					 errdetail_execute(parsetree_list)));
--- 1023,1029 ----
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG_NO_STATEMENT,
  					(errmsg("duration: %s ms  statement: %s",
  							msec_str, query_string),
  					 errdetail_execute(parsetree_list)));
***************
*** 1284,1290 ****
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG,
  					(errmsg("duration: %s ms  parse %s: %s",
  							msec_str,
  							*stmt_name ? stmt_name : "<unnamed>",
--- 1284,1290 ----
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG_NO_STATEMENT,
  					(errmsg("duration: %s ms  parse %s: %s",
  							msec_str,
  							*stmt_name ? stmt_name : "<unnamed>",
***************
*** 1638,1644 ****
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG,
  					(errmsg("duration: %s ms  bind %s%s%s: %s",
  							msec_str,
  							*stmt_name ? stmt_name : "<unnamed>",
--- 1638,1644 ----
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG_NO_STATEMENT,
  					(errmsg("duration: %s ms  bind %s%s%s: %s",
  							msec_str,
  							*stmt_name ? stmt_name : "<unnamed>",
***************
*** 1768,1774 ****
  	/* Log immediately if dictated by log_statement */
  	if (check_log_statement(portal->stmts))
  	{
! 		ereport(LOG,
  				(errmsg("%s %s%s%s%s%s",
  						execute_is_fetch ?
  						_("execute fetch from") :
--- 1768,1774 ----
  	/* Log immediately if dictated by log_statement */
  	if (check_log_statement(portal->stmts))
  	{
! 		ereport(LOG_NO_STATEMENT,
  				(errmsg("%s %s%s%s%s%s",
  						execute_is_fetch ?
  						_("execute fetch from") :
***************
*** 1849,1855 ****
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG,
  					(errmsg("duration: %s ms  %s %s%s%s%s%s",
  							msec_str,
  							execute_is_fetch ?
--- 1849,1855 ----
  					(errmsg("duration: %s ms", msec_str)));
  			break;
  		case 2:
! 			ereport(LOG_NO_STATEMENT,
  					(errmsg("duration: %s ms  %s %s%s%s%s%s",
  							msec_str,
  							execute_is_fetch ?
Index: src/backend/utils/error/elog.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/error/elog.c,v
retrieving revision 1.182
diff -c -c -r1.182 elog.c
*** src/backend/utils/error/elog.c	11 Feb 2007 11:59:26 -0000	1.182
--- src/backend/utils/error/elog.c	1 Mar 2007 01:56:35 -0000
***************
*** 1629,1635 ****
  	/*
  	 * If the user wants the query that generated this error logged, do it.
  	 */
! 	if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)
  	{
  		log_line_prefix(&buf);
  		appendStringInfoString(&buf, _("STATEMENT:  "));
--- 1629,1636 ----
  	/*
  	 * If the user wants the query that generated this error logged, do it.
  	 */
! 	if (is_log_level_output(edata->elevel, log_min_error_statement) &&
! 		edata->elevel != LOG_NO_STATEMENT && debug_query_string != NULL)
  	{
  		log_line_prefix(&buf);
  		appendStringInfoString(&buf, _("STATEMENT:  "));
***************
*** 1653,1658 ****
--- 1654,1660 ----
  				syslog_level = LOG_DEBUG;
  				break;
  			case LOG:
+ 			case LOG_NO_STATEMENT:
  			case COMMERROR:
  			case INFO:
  				syslog_level = LOG_INFO;
***************
*** 1968,1973 ****
--- 1970,1976 ----
  			prefix = _("DEBUG");
  			break;
  		case LOG:
+ 		case LOG_NO_STATEMENT:
  		case COMMERROR:
  			prefix = _("LOG");
  			break;
***************
*** 2062,2068 ****
  	 *	Complicated because LOG is sorted out-of-order here, between
  	 *	ERROR and FATAL.
  	 */
! 	if (elevel == LOG || elevel == COMMERROR)
  	{
  		if (log_min_level == LOG || log_min_level <= ERROR)
  			return true;
--- 2065,2071 ----
  	 *	Complicated because LOG is sorted out-of-order here, between
  	 *	ERROR and FATAL.
  	 */
! 	if (elevel == LOG || elevel == LOG_NO_STATEMENT || elevel == COMMERROR)
  	{
  		if (log_min_level == LOG || log_min_level <= ERROR)
  			return true;
Index: src/backend/utils/misc/postgresql.conf.sample
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v
retrieving revision 1.209
diff -c -c -r1.209 postgresql.conf.sample
*** src/backend/utils/misc/postgresql.conf.sample	16 Feb 2007 17:07:00 -0000	1.209
--- src/backend/utils/misc/postgresql.conf.sample	1 Mar 2007 01:56:37 -0000
***************
*** 289,295 ****
  
  #log_error_verbosity = default		# terse, default, or verbose messages
  
! #log_min_error_statement = error	# Values in order of increasing severity:
  				 	#   debug5
  					#   debug4
  					#   debug3
--- 289,295 ----
  
  #log_error_verbosity = default		# terse, default, or verbose messages
  
! #log_min_error_statement = error	# Values in order of decreasing detail:
  				 	#   debug5
  					#   debug4
  					#   debug3
***************
*** 299,304 ****
--- 299,305 ----
  					#   notice
  					#   warning
  					#   error
+ 					#   log
  					#   fatal
  					#   panic (effectively off)
  
Index: src/include/utils/elog.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/utils/elog.h,v
retrieving revision 1.83
diff -c -c -r1.83 elog.h
*** src/include/utils/elog.h	5 Jan 2007 22:19:59 -0000	1.83
--- src/include/utils/elog.h	1 Mar 2007 01:56:38 -0000
***************
*** 25,53 ****
  #define DEBUG1		14			/* used by GUC debug_* variables */
  #define LOG			15			/* Server operational messages; sent only to
  								 * server log by default. */
! #define COMMERROR	16			/* Client communication problems; same as LOG
  								 * for server reporting, but never sent to
  								 * client. */
! #define INFO		17			/* Informative messages that are always sent
  								 * to client;  is not affected by
  								 * client_min_messages */
! #define NOTICE		18			/* Helpful messages to users about query
  								 * operation;  sent to client and server log
  								 * by default. */
! #define WARNING		19			/* Warnings.  NOTICE is for expected messages
  								 * like implicit sequence creation by SERIAL.
  								 * WARNING is for unexpected messages. */
! #define ERROR		20			/* user error - abort transaction; return to
  								 * known state */
  /* Save ERROR value in PGERROR so it can be restored when Win32 includes
   * modify it.  We have to use a constant rather than ERROR because macros
   * are expanded only when referenced outside macros.
   */
  #ifdef WIN32
! #define PGERROR		20
  #endif
! #define FATAL		21			/* fatal error - abort process */
! #define PANIC		22			/* take down the other backends with me */
  
   /* #define DEBUG DEBUG1 */	/* Backward compatibility with pre-7.3 */
  
--- 25,55 ----
  #define DEBUG1		14			/* used by GUC debug_* variables */
  #define LOG			15			/* Server operational messages; sent only to
  								 * server log by default. */
! #define LOG_NO_STATEMENT 16		/* Like LOG, but avoids log_min_error_statement */
! 
! #define COMMERROR	17			/* Client communication problems; same as LOG
  								 * for server reporting, but never sent to
  								 * client. */
! #define INFO		18			/* Informative messages that are always sent
  								 * to client;  is not affected by
  								 * client_min_messages */
! #define NOTICE		19			/* Helpful messages to users about query
  								 * operation;  sent to client and server log
  								 * by default. */
! #define WARNING		20			/* Warnings.  NOTICE is for expected messages
  								 * like implicit sequence creation by SERIAL.
  								 * WARNING is for unexpected messages. */
! #define ERROR		21			/* user error - abort transaction; return to
  								 * known state */
  /* Save ERROR value in PGERROR so it can be restored when Win32 includes
   * modify it.  We have to use a constant rather than ERROR because macros
   * are expanded only when referenced outside macros.
   */
  #ifdef WIN32
! #define PGERROR		21
  #endif
! #define FATAL		22			/* fatal error - abort process */
! #define PANIC		23			/* take down the other backends with me */
  
   /* #define DEBUG DEBUG1 */	/* Backward compatibility with pre-7.3 */
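
For context: the call sites that would actually use the new level are not in
the excerpt above, but presumably the statement-logging ereport() calls (the
ones driven by log_statement and log_min_duration_statement) would be switched
from LOG to LOG_NO_STATEMENT, roughly like this sketch (illustrative only, not
part of the posted patch):

    ereport(LOG_NO_STATEMENT,
            (errmsg("statement: %s", debug_query_string)));

Since elog.c skips the STATEMENT: annotation for that level, the query text
would then be printed only once.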
  
#31Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#30)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

I have coded up the following patch which places LOG just above ERROR in
log_min_error_statement.

LOG_NO_STATEMENT? What *are* you thinking? The kindest word I can find
for this is "baroque".

What I had in mind was a one-line patch:

if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

becomes

if (is_log_level_output(edata->elevel, log_min_error_statement) && debug_query_string != NULL)

regards, tom lane

#32Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#31)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

I have coded up the following patch which places LOG just above ERROR in
log_min_error_statement.

LOG_NO_STATEMENT? What *are* you thinking? The kindest word I can find
for this is "baroque".

What I had in mind was a one-line patch:

if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

becomes

if (is_log_level_output(edata->elevel, log_min_error_statement) && debug_query_string != NULL)

What are *you* thinking? Yes, that patch has that line, but
log_statement and log_min_duration_statement are going to trigger
log_min_error_statement, so you are going to get the statement printed
twice. LOG_NO_STATEMENT fixes that.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#33Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#32)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

What are *you* thinking? Yes, that patch has that line, but
log_statement and log_min_duration_statement are going to trigger
log_min_error_statement, so you are going to get the statement printed
twice.

What's wrong with that? If a statement triggers two different log
entries, and both are subject to being annotated with the statement text
according to log_min_error_statement, I would expect them both to be
annotated. Doing otherwise will probably break automated log analysis
tools.

regards, tom lane

#34Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#33)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

What are *you* thinking? Yes, that patch has that line, but
log_statement and log_min_duration_statement are going to trigger
log_min_error_statement, so you are going to get the statement printed
twice.

What's wrong with that? If a statement triggers two different log
entries, and both are subject to being annotated with the statement text
according to log_min_error_statement, I would expect them both to be
annotated. Doing otherwise will probably break automated log analysis
tools.

Are people going to be happy that log_statement and
log_min_duration_statement output the statement twice?

test=> SHOW log_min_error_statement;
log_min_error_statement
-------------------------
error
(1 row)

test=> SET log_statement = 'all';
SET
test=> SELECT 1;
?column?
----------
1
(1 row)

Server log has:

LOG: statement: SELECT 1;
STATEMENT: SELECT 1;

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#35Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#34)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

Are people going to be happy that log_statement and
log_min_duration_statement output the statement twice?

If those are the only cases you're worried about, a far simpler solution
is to clear debug_query_string before instead of after emitting those
log messages.

regards, tom lane
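
A minimal sketch of that simpler approach (placement and variable names here
are illustrative, not the committed code):

    const char *save_query_string = debug_query_string;

    debug_query_string = NULL;              /* STATEMENT: annotation is skipped when NULL */
    ereport(LOG,
            (errmsg("statement: %s", save_query_string)));
    debug_query_string = save_query_string;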

#36Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#35)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Are people going to be happy that log_statement and
log_min_duration_statement output the statement twice?

If those are the only cases you're worried about, a far simpler solution
is to clear debug_query_string before instead of after emitting those
log messages.

I am concerned that setting debug_query_string to NULL, calling
ereport(), and then resetting it might cause problems: there are cases
where ereport might want to access debug_query_string for other uses,
cases where ereport doesn't return to the reset code (though I assume
that is handled), and cases like pgmonitor that would stop a backend,
read debug_query_string, and disconnect.

I can create a global variable to control this, but the new elog level
seemed cleaner.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#37Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#36)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

I can create a global variable to control this, but the new elog level
seemed cleaner.

What I don't like about the proposed patch is that it's nonorthogonal.
I see no reason to suppose that LOG is the only possible elevel for
which it might be interesting to suppress the STATEMENT: field.

Perhaps the best thing would be to define an additional ereport
auxiliary function, say errprintstmt(bool), that could set a flag
in the current elog stack entry to control suppression of STATEMENT.
This would mean you couldn't determine the behavior when using elog(),
but that's not supposed to be used for user-facing messages anyway.

regards, tom lane
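
If such an auxiliary existed, a call site that wanted to suppress the
annotation would look roughly like this (errprintstmt() is only a proposal at
this point, so treat the name and signature as hypothetical):

    ereport(LOG,
            (errmsg("statement: %s", debug_query_string),
             errprintstmt(false)));         /* don't append a STATEMENT: field */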

#38Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#37)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

I can create a global variable to control this, but the new elog level
seemed cleaner.

What I don't like about the proposed patch is that it's nonorthogonal.
I see no reason to suppose that LOG is the only possible elevel for
which it might be interesting to suppress the STATEMENT: field.

True.

Perhaps the best thing would be to define an additional ereport
auxiliary function, say errprintstmt(bool), that could set a flag
in the current elog stack entry to control suppression of STATEMENT.
This would mean you couldn't determine the behavior when using elog(),
but that's not supposed to be used for user-facing messages anyway.

One idea I had was to set the high-bit of elevel to control whether we
want to suppress statement logging, but I think errprintstmt() might be
best. I don't understand the ereport stack well enough to add this
functionality, though. What should I look for?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#39Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#38)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

Tom Lane wrote:

Perhaps the best thing would be to define an additional ereport
auxiliary function, say errprintstmt(bool), that could set a flag
in the current elog stack entry to control suppression of STATEMENT.
This would mean you couldn't determine the behavior when using elog(),
but that's not supposed to be used for user-facing messages anyway.

One idea I had was to set the high-bit of elevel to control whether we
want to suppress statement logging, but I think errprintstmt() might be
best. I don't understand the ereport stack well enough to add this
functionality, though. What should I look for?

It wouldn't really be any different from errcode(), but if you want
I'll do it.

regards, tom lane
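
Modeled on errcode(), the auxiliary would be a small elog.c function that
stamps a flag into the current error-stack entry; the function body, the
hide_stmt field, and the later check are all a hypothetical sketch rather than
committed code:

    int
    errprintstmt(bool printstmt)
    {
        ErrorData  *edata = &errordata[errordata_stack_depth];

        CHECK_STACK_DEPTH();

        edata->hide_stmt = !printstmt;  /* assumed new ErrorData flag */

        return 0;                       /* return value does not matter */
    }

The code that appends STATEMENT: would then test edata->hide_stmt in addition
to log_min_error_statement and debug_query_string.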

#40Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#39)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Tom Lane wrote:

Perhaps the best thing would be to define an additional ereport
auxiliary function, say errprintstmt(bool), that could set a flag
in the current elog stack entry to control suppression of STATEMENT.
This would mean you couldn't determine the behavior when using elog(),
but that's not supposed to be used for user-facing messages anyway.

One idea I had was to set the high-bit of elevel to control whether we
want to suppress statement logging, but I think errprintstmt() might be
best. I don't understand the ereport stack well enough to add this
functionality, though. What should I look for?

It wouldn't really be any different from errcode(), but if you want
I'll do it.

If you would just add the infrastructure I can add the LOG part.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#41Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#40)
Re: [HACKERS] Deadlock with pg_dump?

Bruce Momjian <bruce@momjian.us> writes:

Tom Lane wrote:

It wouldn't really be any different from errcode(), but if you want
I'll do it.

If you would just add the infrastructure I can add the LOG part.

OK, I applied a patch that covers the same territory as your patch of
Wednesday evening.  I didn't do anything about Simon's original patch
--- I assume that needs some rework now.

regards, tom lane

#42Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#41)
Re: [HACKERS] Deadlock with pg_dump?

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Tom Lane wrote:

It wouldn't really be any different from errcode(), but if you want
I'll do it.

If you would just add the infrastructure I can add the LOG part.

OK, I applied a patch that covers the same territory as your patch of
Wednesday evening.  I didn't do anything about Simon's original patch
--- I assume that needs some rework now.

Thanks. I will rework Simon's.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#43Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#19)
Re: Deadlock with pg_dump?

I will rework this before application to use LOG level.

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------

Simon Riggs wrote:

On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:

On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits.

When I wrote that, it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

Patch for discussion, includes doc entries at top of patch, so its
fairly clear how it works.

Output is an INFO message, so that it triggers log_min_error_statement
when it generates a message and lets us see the SQL statement that is
waiting. The message is generated before the statement completes, which
is important because in some cases the statement may never complete, so
simply logging a list of pids won't always tell you what SQL was
waiting.

Other approaches are possible...

Comments?

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

[ Attachment, skipping... ]

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

#44Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#19)
1 attachment(s)
Re: Deadlock with pg_dump?

I have applied a modified version of this patch, attached. I trimmed
down the description of log_lock_waits to be more concise, and moved the
idea of using this to tune deadlock_timeout to the deadlock_timeout
section of the manual.
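
For anyone who wants to try it, enabling the new behavior is a postgresql.conf
change (the values below are only illustrative):

    deadlock_timeout = 1s
    log_lock_waits = on             # log lock waits longer than deadlock_timeout

With those settings, a backend that waits past deadlock_timeout without an
actual deadlock logs "statement waiting for lock for at least 1000 ms", and a
soft deadlock resolved by rearranging the wait queue logs "deadlock avoided by
rearranging lock order" (both message texts are taken from the attached patch).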

---------------------------------------------------------------------------

Simon Riggs wrote:

On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:

On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:

Simon Riggs wrote:

On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:

Chris Campbell <chris@bignerdranch.com> writes:

Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock occurred?
Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test=> lock a;
ERROR: deadlock detected
DETAIL: Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits.

When I wrote that, it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

Patch for discussion, includes doc entries at top of patch, so its
fairly clear how it works.

Output is an INFO message, so that it triggers log_min_error_statement
when it generates a message and lets us see the SQL statement that is
waiting. The message is generated before the statement completes, which
is important because in some cases the statement may never complete, so
simply logging a list of pids won't always tell you what SQL was
waiting.

Other approaches are possible...

Comments?

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

[ Attachment, skipping... ]

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +

Attachments:

/rtmp/difftext/x-diffDownload
Index: doc/src/sgml/config.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.113
diff -c -c -r1.113 config.sgml
*** doc/src/sgml/config.sgml	2 Mar 2007 23:37:22 -0000	1.113
--- doc/src/sgml/config.sgml	3 Mar 2007 18:41:13 -0000
***************
*** 2946,2951 ****
--- 2946,2966 ----
        </listitem>
       </varlistentry>
  
+      <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits">
+       <term><varname>log_lock_waits</varname> (<type>boolean</type>)</term>
+       <indexterm>
+        <primary><varname>log_lock_waits</> configuration parameter</primary>
+       </indexterm>
+       <listitem>
+        <para>
+         Controls whether a log message is produced when a statement waits
+         longer than <xref linkend="guc-deadlock-timeout"> to acquire a
+         lock.  This is useful in determining if lock waits are causing
+         poor performance.  The default is <literal>off</>.
+        </para>
+       </listitem>
+      </varlistentry>
+ 
       <varlistentry id="guc-log-temp-files" xreflabel="log_temp_files">
        <term><varname>log_temp_files</varname> (<type>integer</type>)</term>
        <indexterm>
***************
*** 3980,3996 ****
          This is the amount of time, in milliseconds, to wait on a lock
          before checking to see if there is a deadlock condition. The
          check for deadlock is relatively slow, so the server doesn't run
!         it every time it waits for a lock. We (optimistically?) assume
          that deadlocks are not common in production applications and
          just wait on the lock for a while before starting the check for a
          deadlock. Increasing this value reduces the amount of time
          wasted in needless deadlock checks, but slows down reporting of
          real deadlock errors. The default is one second (<literal>1s</>),
          which is probably about the smallest value you would want in
!         practice. On a heavily loaded server you might want to raise it.
          Ideally the setting should exceed your typical transaction time,
!         so as to improve the odds that a lock will be released before
!         the waiter decides to check for deadlock.
         </para>
        </listitem>
       </varlistentry>
--- 3995,4012 ----
          This is the amount of time, in milliseconds, to wait on a lock
          before checking to see if there is a deadlock condition. The
          check for deadlock is relatively slow, so the server doesn't run
!         it every time it waits for a lock. We optimistically assume
          that deadlocks are not common in production applications and
          just wait on the lock for a while before starting the check for a
          deadlock. Increasing this value reduces the amount of time
          wasted in needless deadlock checks, but slows down reporting of
          real deadlock errors. The default is one second (<literal>1s</>),
          which is probably about the smallest value you would want in
!         practice.  Set <xref linkend="guc-log-lock-waits"> to log deadlock
!         checks.  On a heavily loaded server you might want to raise it.
          Ideally the setting should exceed your typical transaction time,
!         so as to improve the odds that a lock will be released before the
!         waiter decides to check for deadlock.
         </para>
        </listitem>
       </varlistentry>
Index: src/backend/storage/lmgr/deadlock.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v
retrieving revision 1.44
diff -c -c -r1.44 deadlock.c
*** src/backend/storage/lmgr/deadlock.c	5 Jan 2007 22:19:38 -0000	1.44
--- src/backend/storage/lmgr/deadlock.c	3 Mar 2007 18:41:13 -0000
***************
*** 187,199 ****
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! bool
  DeadLockCheck(PGPROC *proc)
  {
  	int			i,
--- 187,200 ----
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables,
!  * so standard error logging/reporting code is handled by caller.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! DeadlockState
  DeadLockCheck(PGPROC *proc)
  {
  	int			i,
***************
*** 204,209 ****
--- 205,215 ----
  	nPossibleConstraints = 0;
  	nWaitOrders = 0;
  
+ #ifdef LOCK_DEBUG
+ 	if (Debug_deadlocks)
+ 		DumpAllLocks();
+ #endif
+ 
  	/* Search for deadlocks and possible fixes */
  	if (DeadLockCheckRecurse(proc))
  	{
***************
*** 217,223 ****
  		if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
  			elog(FATAL, "deadlock seems to have disappeared");
  
! 		return true;			/* cannot find a non-deadlocked state */
  	}
  
  	/* Apply any needed rearrangements of wait queues */
--- 223,229 ----
  		if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
  			elog(FATAL, "deadlock seems to have disappeared");
  
! 		return DS_HARD_DEADLOCK;	/* cannot find a non-deadlocked state */
  	}
  
  	/* Apply any needed rearrangements of wait queues */
***************
*** 249,255 ****
  		/* See if any waiters for the lock can be woken up now */
  		ProcLockWakeup(GetLocksMethodTable(lock), lock);
  	}
! 	return false;
  }
  
  /*
--- 255,265 ----
  		/* See if any waiters for the lock can be woken up now */
  		ProcLockWakeup(GetLocksMethodTable(lock), lock);
  	}
! 
! 	if (nWaitOrders > 0)
! 		return DS_SOFT_DEADLOCK;
! 	else
! 		return DS_DEADLOCK_NOT_FOUND;
  }
  
  /*
***************
*** 896,902 ****
  }
  
  /*
!  * Report a detected deadlock, with available details.
   */
  void
  DeadLockReport(void)
--- 906,912 ----
  }
  
  /*
!  * Report a detected DS_HARD_DEADLOCK, with available details.
   */
  void
  DeadLockReport(void)
Index: src/backend/storage/lmgr/proc.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v
retrieving revision 1.184
diff -c -c -r1.184 proc.c
*** src/backend/storage/lmgr/proc.c	15 Feb 2007 23:23:23 -0000	1.184
--- src/backend/storage/lmgr/proc.c	3 Mar 2007 18:41:14 -0000
***************
*** 48,53 ****
--- 48,54 ----
  /* GUC variables */
  int			DeadlockTimeout = 1000;
  int			StatementTimeout = 0;
+ bool		log_lock_waits = false;
  
  /* Pointer to this process's PGPROC struct, if any */
  PGPROC	   *MyProc = NULL;
***************
*** 979,984 ****
--- 980,986 ----
  CheckDeadLock(void)
  {
  	int			i;
+ 	DeadlockState deadlock_state = DS_DEADLOCK_NOT_FOUND;
  
  	/*
  	 * Acquire exclusive lock on the entire shared lock data structures. Must
***************
*** 1004,1063 ****
  	 * This is quicker than checking our semaphore's state, since no kernel
  	 * call is needed, and it is safe because we hold the lock partition lock.
  	 */
! 	if (MyProc->links.prev == INVALID_OFFSET ||
! 		MyProc->links.next == INVALID_OFFSET)
! 		goto check_done;
! 
! #ifdef LOCK_DEBUG
! 	if (Debug_deadlocks)
! 		DumpAllLocks();
! #endif
! 
! 	if (!DeadLockCheck(MyProc))
  	{
! 		/* No deadlock, so keep waiting */
! 		goto check_done;
! 	}
! 
! 	/*
! 	 * Oops.  We have a deadlock.
! 	 *
! 	 * Get this process out of wait state.	(Note: we could do this more
! 	 * efficiently by relying on lockAwaited, but use this coding to preserve
! 	 * the flexibility to kill some other transaction than the one detecting
! 	 * the deadlock.)
! 	 *
! 	 * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
! 	 * ProcSleep will report an error after we return from the signal handler.
! 	 */
! 	Assert(MyProc->waitLock != NULL);
! 	RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));
  
! 	/*
! 	 * Unlock my semaphore so that the interrupted ProcSleep() call can
! 	 * finish.
! 	 */
! 	PGSemaphoreUnlock(&MyProc->sem);
  
! 	/*
! 	 * We're done here.  Transaction abort caused by the error that ProcSleep
! 	 * will raise will cause any other locks we hold to be released, thus
! 	 * allowing other processes to wake up; we don't need to do that here.
! 	 * NOTE: an exception is that releasing locks we hold doesn't consider the
! 	 * possibility of waiters that were blocked behind us on the lock we just
! 	 * failed to get, and might now be wakable because we're not in front of
! 	 * them anymore.  However, RemoveFromWaitQueue took care of waking up any
! 	 * such processes.
! 	 */
  
  	/*
  	 * Release locks acquired at head of routine.  Order is not critical, so
  	 * do it back-to-front to avoid waking another CheckDeadLock instance
  	 * before it can get all the locks.
  	 */
- check_done:
  	for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
  		LWLockRelease(FirstLockMgrLock + i);
  }
  
  
--- 1006,1082 ----
  	 * This is quicker than checking our semaphore's state, since no kernel
  	 * call is needed, and it is safe because we hold the lock partition lock.
  	 */
! 	if (MyProc->links.prev != INVALID_OFFSET &&
! 		MyProc->links.next != INVALID_OFFSET)
! 		deadlock_state = DeadLockCheck(MyProc);
! 	
! 	if (deadlock_state == DS_HARD_DEADLOCK)
  	{
! 		/*
! 		 * Oops.  We have a deadlock.
! 		 *
! 		 * Get this process out of wait state.	(Note: we could do this more
! 		 * efficiently by relying on lockAwaited, but use this coding to preserve
! 		 * the flexibility to kill some other transaction than the one detecting
! 		 * the deadlock.)
! 		 *
! 		 * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
! 		 * ProcSleep will report an error after we return from the signal handler.
! 		 */
! 		Assert(MyProc->waitLock != NULL);
! 		RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));
  
! 		/*
! 		 * Unlock my semaphore so that the interrupted ProcSleep() call can
! 		 * finish.
! 		 */
! 		PGSemaphoreUnlock(&MyProc->sem);
  
! 		/*
! 		 * We're done here.  Transaction abort caused by the error that ProcSleep
! 		 * will raise will cause any other locks we hold to be released, thus
! 		 * allowing other processes to wake up; we don't need to do that here.
! 		 * NOTE: an exception is that releasing locks we hold doesn't consider the
! 		 * possibility of waiters that were blocked behind us on the lock we just
! 		 * failed to get, and might now be wakable because we're not in front of
! 		 * them anymore.  However, RemoveFromWaitQueue took care of waking up any
! 		 * such processes.
! 		 */
! 	}
  
  	/*
  	 * Release locks acquired at head of routine.  Order is not critical, so
  	 * do it back-to-front to avoid waking another CheckDeadLock instance
  	 * before it can get all the locks.
  	 */
  	for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
  		LWLockRelease(FirstLockMgrLock + i);
+ 
+ 	/*
+ 	 * Issue any log messages requested.
+ 	 *
+ 	 * Deadlock ERROR messages are issued as part of transaction abort, so 
+ 	 * these messages should not raise error states intentionally.
+ 	 */
+ 	if (log_lock_waits)
+ 	{
+ 		switch (deadlock_state)
+ 		{
+ 			case DS_SOFT_DEADLOCK:
+ 				ereport(LOG,
+ 					(errmsg("deadlock avoided by rearranging lock order")));
+ 				break;
+ 			case DS_DEADLOCK_NOT_FOUND:
+ 				ereport(LOG,
+ 					(errmsg("statement waiting for lock for at least %d ms",
+ 								DeadlockTimeout)));
+ 				break;
+ 			case DS_HARD_DEADLOCK:
+ 				break;	/* ERROR message handled during abort */
+ 			default:
+ 				break;
+ 		}
+ 	}
  }
  
  
Index: src/backend/utils/misc/guc.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/misc/guc.c,v
retrieving revision 1.377
diff -c -c -r1.377 guc.c
*** src/backend/utils/misc/guc.c	23 Feb 2007 21:36:18 -0000	1.377
--- src/backend/utils/misc/guc.c	3 Mar 2007 18:41:16 -0000
***************
*** 812,817 ****
--- 812,825 ----
  #endif
  
  	{
+ 		{"log_lock_waits", PGC_SIGHUP, LOGGING_WHAT,
+ 			gettext_noop("Logs long lock wait events."),
+ 			NULL
+ 		},
+ 		&log_lock_waits,
+ 		false, NULL, NULL
+ 	},
+ 	{
  		{"log_hostname", PGC_SIGHUP, LOGGING_WHAT,
  			gettext_noop("Logs the host name in the connection logs."),
  			gettext_noop("By default, connection logs only show the IP address "
Index: src/backend/utils/misc/postgresql.conf.sample
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v
retrieving revision 1.210
diff -c -c -r1.210 postgresql.conf.sample
*** src/backend/utils/misc/postgresql.conf.sample	2 Mar 2007 23:37:23 -0000	1.210
--- src/backend/utils/misc/postgresql.conf.sample	3 Mar 2007 18:41:16 -0000
***************
*** 340,346 ****
  					# e.g. '<%u%%%d> '
  #log_statement = 'none'			# none, ddl, mod, all
  #log_hostname = off
! 
  #log_temp_files = -1			# Log temporary files equal or larger
  					# than the specified number of kilobytes.
  					# -1 disables;  0 logs all temp files
--- 340,346 ----
  					# e.g. '<%u%%%d> '
  #log_statement = 'none'			# none, ddl, mod, all
  #log_hostname = off
! #log_lock_waits = off			# Log lock waits longer than deadlock_timeout
  #log_temp_files = -1			# Log temporary files equal or larger
  					# than the specified number of kilobytes.
  					# -1 disables;  0 logs all temp files
Index: src/include/storage/lock.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/lock.h,v
retrieving revision 1.103
diff -c -c -r1.103 lock.h
*** src/include/storage/lock.h	5 Jan 2007 22:19:58 -0000	1.103
--- src/include/storage/lock.h	3 Mar 2007 18:41:17 -0000
***************
*** 394,399 ****
--- 394,406 ----
  	LOCKACQUIRE_ALREADY_HELD	/* incremented count for lock already held */
  } LockAcquireResult;
  
+ /* Deadlock states identified by DeadlockCheck() */
+ typedef enum
+ {
+ 	DS_DEADLOCK_NOT_FOUND,		/* no deadlock found within database server */
+ 	DS_SOFT_DEADLOCK,			/* deadlock, but lock queues rearrangeable */
+ 	DS_HARD_DEADLOCK			/* deadlock, no way out but ERROR */
+ } DeadlockState;
  
  /*
   * The lockmgr's shared hash tables are partitioned to reduce contention.
***************
*** 442,448 ****
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
  						void *recdata, uint32 len);
  
! extern bool DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
  					   LOCKMODE lockmode,
--- 449,455 ----
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
  						void *recdata, uint32 len);
  
! extern DeadlockState DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
  					   LOCKMODE lockmode,
Index: src/include/storage/proc.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/proc.h,v
retrieving revision 1.94
diff -c -c -r1.94 proc.h
*** src/include/storage/proc.h	15 Feb 2007 23:23:23 -0000	1.94
--- src/include/storage/proc.h	3 Mar 2007 18:41:17 -0000
***************
*** 127,132 ****
--- 127,133 ----
  /* configurable options */
  extern int	DeadlockTimeout;
  extern int	StatementTimeout;
+ extern bool	log_lock_waits;
  
  extern volatile bool cancel_from_timeout;