Patch: Show queries of processes holding a lock
Hi, there!
I created a patch improving the log messages generated by
log_lock_waits.
Sample output (log_lock_waits=on required):
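log_lock_waits is superuser-settable, and the "still waiting" message fires after
deadlock_timeout (1s by default), so something along these lines is enough for the demo:

SET log_lock_waits = on;
SET deadlock_timeout = '1s';  -- default; controls when the message is emitted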
session 1:
CREATE TABLE foo (val integer);
INSERT INTO foo (val) VALUES (1);
BEGIN;
UPDATE foo SET val = 3;
session 2:
BEGIN;
UPDATE foo SET val = 2;
Output w/o patch:
LOG: process 3133043 still waiting for ShareLock on transaction 758
after 1000.239 ms
DETAIL: Process holding the lock: 3132855. Wait queue: 3133043.
CONTEXT: while updating tuple (0,7) in relation "foo"
STATEMENT: update foo SET val = 2;
Output with patch:
LOG: process 3133043 still waiting for ShareLock on transaction 758
after 1000.239 ms
DETAIL: Process holding the lock: 3132855. Wait queue: 3133043.
Process 3132855: update foo SET val = 3;
CONTEXT: while updating tuple (0,7) in relation "foo"
STATEMENT: update foo SET val = 2;
As you can see, the query that holds the lock is now written to the log.
If this approach proves unacceptable, we could turn the log_lock_waits
parameter into an enum
and display the query only when log_lock_waits = verbose (for example).
What do you think?
Regards,
--
Orlov Alexey
Attachments:
v-1-0001-Show-queries-in-log_lock_wait_log.patch (text/x-patch, charset=US-ASCII)
From a317cd40303975c90c2daf222deb7dc80a44aa11 Mon Sep 17 00:00:00 2001
From: Alexey Orlov <aporlov@tender.pro>
Date: Mon, 30 Sep 2024 22:11:25 +0300
Subject: [PATCH v=1] Show queries in log_lock_wait_log
---
src/backend/storage/lmgr/proc.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 0d8162a2cc..8c187bd6dc 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -1514,7 +1514,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
{
StringInfoData buf,
lock_waiters_sbuf,
- lock_holders_sbuf;
+ lock_holders_sbuf,
+ lock_queries_sbuf;
const char *modename;
long secs;
int usecs;
@@ -1528,6 +1529,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
initStringInfo(&buf);
initStringInfo(&lock_waiters_sbuf);
initStringInfo(&lock_holders_sbuf);
+ initStringInfo(&lock_queries_sbuf);
DescribeLockTag(&buf, &locallock->tag.lock);
modename = GetLockmodeName(locallock->tag.lock.locktag_lockmethodid,
@@ -1572,6 +1574,15 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
}
else
{
+ if (myWaitStatus == PROC_WAIT_STATUS_WAITING)
+ {
+ appendStringInfoChar(&lock_queries_sbuf, '\n');
+
+ appendStringInfo(&lock_queries_sbuf,
+ _("Process %d: %s"),
+ curproclock->tag.myProc->pid,
+ pgstat_get_backend_current_activity(curproclock->tag.myProc->pid, false));
+ }
if (first_holder)
{
appendStringInfo(&lock_holders_sbuf, "%d",
@@ -1616,9 +1627,9 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
ereport(LOG,
(errmsg("process %d still waiting for %s on %s after %ld.%03d ms",
MyProcPid, modename, buf.data, msecs, usecs),
- (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
- "Processes holding the lock: %s. Wait queue: %s.",
- lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
+ (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.%s",
+ "Processes holding the lock: %s. Wait queue: %s.%s",
+ lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data, lock_queries_sbuf.data))));
else if (myWaitStatus == PROC_WAIT_STATUS_OK)
ereport(LOG,
(errmsg("process %d acquired %s on %s after %ld.%03d ms",
@@ -1654,6 +1665,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
pfree(buf.data);
pfree(lock_holders_sbuf.data);
pfree(lock_waiters_sbuf.data);
+ pfree(lock_queries_sbuf.data);
}
} while (myWaitStatus == PROC_WAIT_STATUS_WAITING);
--
2.44.0
Hi Alexey Orlov,
Thank you for your work on this patch. The lock information is recorded
in detail, which makes it easy to trace the lock contention at that time
from the log. But I have a concern: frequent calls to
pgstat_get_backend_current_activity() under heavy lock contention or in
high-concurrency environments may cause performance degradation, especially
when processes frequently enter and exit lock waits. Can you add a GUC
parameter to turn this feature on or off? After all, setting the default of
log_lock_waits to on already drew concern from many people in the community
(https://commitfest.postgresql.org/49/4718/).
Thanks
On Tue, Oct 1, 2024 at 11:45 AM wenhui qiu <qiuwenhuifx@gmail.com> wrote:
Yeah, agree, thank you. I just think changing the parameter type would
be nice too.
typedef enum
{
LOG_LOCK_WAIT_NONE = 0,
LOG_LOCK_WAIT_TERSE,
LOG_LOCK_WAIT_VERBOSE,
} LogLockWaitVerbosity;
LOG_LOCK_WAIT_NONE is "off", LOG_LOCK_WAIT_TERSE is "on", and
LOG_LOCK_WAIT_VERBOSE additionally writes the holder's query to the log.
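For example, with such an enum the verbosity could be chosen in postgresql.conf
or per session (value names as proposed above):

SET log_lock_waits = 'off';      -- no lock wait logging
SET log_lock_waits = 'on';       -- same as today's boolean on (terse)
SET log_lock_waits = 'verbose';  -- additionally log the holders' queries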
I've attached a slightly modified patch that uses the new log_lock_waits values.
--
Regards,
Alexey Orlov!
Attachments:
v-2-0001-Show-queries-in-log_lock_wait.patch (text/x-patch, charset=US-ASCII)
From 0e1835146df63f104f6d4460ffb357a93732eefa Mon Sep 17 00:00:00 2001
From: Alexey Orlov <aporlov@tender.pro>
Date: Wed, 2 Oct 2024 12:08:13 +0300
Subject: [PATCH v=2] Show queries in log_lock_wait
---
src/backend/storage/lmgr/proc.c | 22 ++++++++++++++-----
src/backend/utils/misc/guc_tables.c | 33 +++++++++++++++++++++--------
src/include/storage/proc.h | 9 +++++++-
3 files changed, 49 insertions(+), 15 deletions(-)
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 0d8162a2cc..60b749b140 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -61,7 +61,7 @@ int LockTimeout = 0;
int IdleInTransactionSessionTimeout = 0;
int TransactionTimeout = 0;
int IdleSessionTimeout = 0;
-bool log_lock_waits = false;
+int log_lock_waits = LOG_LOCK_WAIT_NONE;
/* Pointer to this process's PGPROC struct, if any */
PGPROC *MyProc = NULL;
@@ -1514,7 +1514,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
{
StringInfoData buf,
lock_waiters_sbuf,
- lock_holders_sbuf;
+ lock_holders_sbuf,
+ lock_queries_sbuf;
const char *modename;
long secs;
int usecs;
@@ -1528,6 +1529,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
initStringInfo(&buf);
initStringInfo(&lock_waiters_sbuf);
initStringInfo(&lock_holders_sbuf);
+ initStringInfo(&lock_queries_sbuf);
DescribeLockTag(&buf, &locallock->tag.lock);
modename = GetLockmodeName(locallock->tag.lock.locktag_lockmethodid,
@@ -1572,6 +1574,15 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
}
else
{
+ if (log_lock_waits == LOG_LOCK_WAIT_VERBOSE && myWaitStatus == PROC_WAIT_STATUS_WAITING)
+ {
+ appendStringInfoChar(&lock_queries_sbuf, '\n');
+
+ appendStringInfo(&lock_queries_sbuf,
+ _("Process %d: %s"),
+ curproclock->tag.myProc->pid,
+ pgstat_get_backend_current_activity(curproclock->tag.myProc->pid, false));
+ }
if (first_holder)
{
appendStringInfo(&lock_holders_sbuf, "%d",
@@ -1616,9 +1627,9 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
ereport(LOG,
(errmsg("process %d still waiting for %s on %s after %ld.%03d ms",
MyProcPid, modename, buf.data, msecs, usecs),
- (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
- "Processes holding the lock: %s. Wait queue: %s.",
- lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
+ (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.%s",
+ "Processes holding the lock: %s. Wait queue: %s.%s",
+ lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data, lock_queries_sbuf.data))));
else if (myWaitStatus == PROC_WAIT_STATUS_OK)
ereport(LOG,
(errmsg("process %d acquired %s on %s after %ld.%03d ms",
@@ -1654,6 +1665,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
pfree(buf.data);
pfree(lock_holders_sbuf.data);
pfree(lock_waiters_sbuf.data);
+ pfree(lock_queries_sbuf.data);
}
} while (myWaitStatus == PROC_WAIT_STATUS_WAITING);
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..1eddaaf0e5 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -474,6 +474,20 @@ static const struct config_enum_entry wal_compression_options[] = {
{NULL, 0, false}
};
+static const struct config_enum_entry log_lock_waits_options[] = {
+ {"verbose", LOG_LOCK_WAIT_VERBOSE, false},
+ {"terse", LOG_LOCK_WAIT_TERSE, false},
+ {"on", LOG_LOCK_WAIT_TERSE, false},
+ {"off", LOG_LOCK_WAIT_NONE, false},
+ {"true", LOG_LOCK_WAIT_TERSE, true},
+ {"false", LOG_LOCK_WAIT_NONE, true},
+ {"yes", LOG_LOCK_WAIT_TERSE, true},
+ {"no", LOG_LOCK_WAIT_NONE, true},
+ {"1", LOG_LOCK_WAIT_TERSE, true},
+ {"0", LOG_LOCK_WAIT_NONE, true},
+ {NULL, 0, false}
+};
+
/*
* Options for enum values stored in other modules
*/
@@ -1552,15 +1566,6 @@ struct config_bool ConfigureNamesBool[] =
},
#endif
- {
- {"log_lock_waits", PGC_SUSET, LOGGING_WHAT,
- gettext_noop("Logs long lock waits."),
- NULL
- },
- &log_lock_waits,
- false,
- NULL, NULL, NULL
- },
{
{"log_recovery_conflict_waits", PGC_SIGHUP, LOGGING_WHAT,
gettext_noop("Logs standby recovery conflict waits."),
@@ -5022,6 +5027,16 @@ struct config_enum ConfigureNamesEnum[] =
NULL, NULL, NULL
},
+ {
+ {"log_lock_waits", PGC_SUSET, LOGGING_WHAT,
+ gettext_noop("Logs long lock waits."),
+ NULL
+ },
+ &log_lock_waits,
+ LOG_LOCK_WAIT_NONE, log_lock_waits_options,
+ NULL, NULL, NULL
+ },
+
{
{"wal_level", PGC_POSTMASTER, WAL_SETTINGS,
gettext_noop("Sets the level of information written to the WAL."),
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ebcf0ad403..fca95196e4 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -132,6 +132,13 @@ typedef enum
PROC_WAIT_STATUS_ERROR,
} ProcWaitStatus;
+typedef enum
+{
+ LOG_LOCK_WAIT_NONE = 0,
+ LOG_LOCK_WAIT_TERSE,
+ LOG_LOCK_WAIT_VERBOSE,
+} LogLockWaitVerbosity;
+
/*
* Each backend has a PGPROC struct in shared memory. There is also a list of
* currently-unused PGPROC structs that will be reallocated to new backends.
@@ -456,7 +463,7 @@ extern PGDLLIMPORT int LockTimeout;
extern PGDLLIMPORT int IdleInTransactionSessionTimeout;
extern PGDLLIMPORT int TransactionTimeout;
extern PGDLLIMPORT int IdleSessionTimeout;
-extern PGDLLIMPORT bool log_lock_waits;
+extern PGDLLIMPORT int log_lock_waits;
#ifdef EXEC_BACKEND
extern PGDLLIMPORT slock_t *ProcStructLock;
--
2.44.0
On Tue, 1 Oct 2024 at 21:04, Alexey Orlov <aporlov@gmail.com> wrote:
Can you explain why the last query executed by the blocking process is
relevant output to show? Are you just hoping that the last statement
to execute is the one that obtained the lock? Wouldn't it be confusing
if the last query to execute wasn't the query which obtained the lock?
David
On Thu, Oct 3, 2024 at 3:58 AM David Rowley <dgrowleyml@gmail.com> wrote:
Thanks for the review! I completely agree with you; relying on chance is wrong.
What if I add a small check? I'll check whether the command string has
changed within the last deadlock_timeout.
If it has, we will see in the log:
Process holding the lock: 1057195. Wait queue: 1057550.
Process 1057195: <command string has been changed>
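For illustration, assuming the deadlock_timeout-based check above, the marker is
printed whenever the holder has started a new statement while the other session
was already waiting, for example:

session 1: BEGIN;
session 1: UPDATE foo SET val = 3;   -- takes the row lock
session 2: UPDATE foo SET val = 2;   -- blocks and starts waiting
session 1: SELECT 1;                 -- new command string, unrelated to the lock
-- when session 2 logs "still waiting" after deadlock_timeout, it would print
-- Process <pid>: <command string has been changed>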
I have attached the updated patch (v3).
--
Regards,
Alexey Orlov!
Attachments:
v-3-0001-Show-queries-in-log_lock_wait_log.patch (text/x-patch, charset=US-ASCII)
From 2ee0866d06d78f29b971c97a399b11261bd61861 Mon Sep 17 00:00:00 2001
From: Alexey Orlov <aporlov@tender.pro>
Date: Thu, 3 Oct 2024 12:02:47 +0300
Subject: [PATCH v=3] Show queries in log_lock_wait_log
---
src/backend/storage/lmgr/deadlock.c | 2 +-
src/backend/storage/lmgr/proc.c | 22 ++++++++++----
src/backend/utils/activity/backend_status.c | 4 ++-
src/backend/utils/misc/guc_tables.c | 33 +++++++++++++++------
src/include/storage/proc.h | 9 +++++-
src/include/utils/backend_status.h | 2 +-
6 files changed, 54 insertions(+), 18 deletions(-)
diff --git a/src/backend/storage/lmgr/deadlock.c b/src/backend/storage/lmgr/deadlock.c
index fcb874d234..96c96b3dab 100644
--- a/src/backend/storage/lmgr/deadlock.c
+++ b/src/backend/storage/lmgr/deadlock.c
@@ -1122,7 +1122,7 @@ DeadLockReport(void)
appendStringInfo(&logbuf,
_("Process %d: %s"),
info->pid,
- pgstat_get_backend_current_activity(info->pid, false));
+ pgstat_get_backend_current_activity(info->pid, false, false));
}
pgstat_report_deadlock();
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 0d8162a2cc..09a4430eff 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -61,7 +61,7 @@ int LockTimeout = 0;
int IdleInTransactionSessionTimeout = 0;
int TransactionTimeout = 0;
int IdleSessionTimeout = 0;
-bool log_lock_waits = false;
+int log_lock_waits = LOG_LOCK_WAIT_NONE;
/* Pointer to this process's PGPROC struct, if any */
PGPROC *MyProc = NULL;
@@ -1514,7 +1514,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
{
StringInfoData buf,
lock_waiters_sbuf,
- lock_holders_sbuf;
+ lock_holders_sbuf,
+ lock_queries_sbuf;
const char *modename;
long secs;
int usecs;
@@ -1528,6 +1529,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
initStringInfo(&buf);
initStringInfo(&lock_waiters_sbuf);
initStringInfo(&lock_holders_sbuf);
+ initStringInfo(&lock_queries_sbuf);
DescribeLockTag(&buf, &locallock->tag.lock);
modename = GetLockmodeName(locallock->tag.lock.locktag_lockmethodid,
@@ -1572,6 +1574,15 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
}
else
{
+ if (log_lock_waits == LOG_LOCK_WAIT_VERBOSE && myWaitStatus == PROC_WAIT_STATUS_WAITING)
+ {
+ appendStringInfoChar(&lock_queries_sbuf, '\n');
+
+ appendStringInfo(&lock_queries_sbuf,
+ _("Process %d: %s"),
+ curproclock->tag.myProc->pid,
+ pgstat_get_backend_current_activity(curproclock->tag.myProc->pid, false, true));
+ }
if (first_holder)
{
appendStringInfo(&lock_holders_sbuf, "%d",
@@ -1616,9 +1627,9 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
ereport(LOG,
(errmsg("process %d still waiting for %s on %s after %ld.%03d ms",
MyProcPid, modename, buf.data, msecs, usecs),
- (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
- "Processes holding the lock: %s. Wait queue: %s.",
- lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
+ (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.%s",
+ "Processes holding the lock: %s. Wait queue: %s.%s",
+ lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data, lock_queries_sbuf.data))));
else if (myWaitStatus == PROC_WAIT_STATUS_OK)
ereport(LOG,
(errmsg("process %d acquired %s on %s after %ld.%03d ms",
@@ -1654,6 +1665,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable, bool dontWait)
pfree(buf.data);
pfree(lock_holders_sbuf.data);
pfree(lock_waiters_sbuf.data);
+ pfree(lock_queries_sbuf.data);
}
} while (myWaitStatus == PROC_WAIT_STATUS_WAITING);
diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c
index 34a55e2177..43080b1a2c 100644
--- a/src/backend/utils/activity/backend_status.c
+++ b/src/backend/utils/activity/backend_status.c
@@ -882,7 +882,7 @@ pgstat_read_current_status(void)
* ----------
*/
const char *
-pgstat_get_backend_current_activity(int pid, bool checkUser)
+pgstat_get_backend_current_activity(int pid, bool checkUser, bool checkDeadLockTimeout)
{
PgBackendStatus *beentry;
int i;
@@ -929,6 +929,8 @@ pgstat_get_backend_current_activity(int pid, bool checkUser)
return "<insufficient privilege>";
else if (*(beentry->st_activity_raw) == '\0')
return "<command string not enabled>";
+ else if (checkDeadLockTimeout && !TimestampDifferenceExceeds(beentry->st_activity_start_timestamp, GetCurrentTimestamp(), DeadlockTimeout))
+ return "<command string has been changed>";
else
{
/* this'll leak a bit of memory, but that seems acceptable */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..1eddaaf0e5 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -474,6 +474,20 @@ static const struct config_enum_entry wal_compression_options[] = {
{NULL, 0, false}
};
+static const struct config_enum_entry log_lock_waits_options[] = {
+ {"verbose", LOG_LOCK_WAIT_VERBOSE, false},
+ {"terse", LOG_LOCK_WAIT_TERSE, false},
+ {"on", LOG_LOCK_WAIT_TERSE, false},
+ {"off", LOG_LOCK_WAIT_NONE, false},
+ {"true", LOG_LOCK_WAIT_TERSE, true},
+ {"false", LOG_LOCK_WAIT_NONE, true},
+ {"yes", LOG_LOCK_WAIT_TERSE, true},
+ {"no", LOG_LOCK_WAIT_NONE, true},
+ {"1", LOG_LOCK_WAIT_TERSE, true},
+ {"0", LOG_LOCK_WAIT_NONE, true},
+ {NULL, 0, false}
+};
+
/*
* Options for enum values stored in other modules
*/
@@ -1552,15 +1566,6 @@ struct config_bool ConfigureNamesBool[] =
},
#endif
- {
- {"log_lock_waits", PGC_SUSET, LOGGING_WHAT,
- gettext_noop("Logs long lock waits."),
- NULL
- },
- &log_lock_waits,
- false,
- NULL, NULL, NULL
- },
{
{"log_recovery_conflict_waits", PGC_SIGHUP, LOGGING_WHAT,
gettext_noop("Logs standby recovery conflict waits."),
@@ -5022,6 +5027,16 @@ struct config_enum ConfigureNamesEnum[] =
NULL, NULL, NULL
},
+ {
+ {"log_lock_waits", PGC_SUSET, LOGGING_WHAT,
+ gettext_noop("Logs long lock waits."),
+ NULL
+ },
+ &log_lock_waits,
+ LOG_LOCK_WAIT_NONE, log_lock_waits_options,
+ NULL, NULL, NULL
+ },
+
{
{"wal_level", PGC_POSTMASTER, WAL_SETTINGS,
gettext_noop("Sets the level of information written to the WAL."),
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index ebcf0ad403..fca95196e4 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -132,6 +132,13 @@ typedef enum
PROC_WAIT_STATUS_ERROR,
} ProcWaitStatus;
+typedef enum
+{
+ LOG_LOCK_WAIT_NONE = 0,
+ LOG_LOCK_WAIT_TERSE,
+ LOG_LOCK_WAIT_VERBOSE,
+} LogLockWaitVerbosity;
+
/*
* Each backend has a PGPROC struct in shared memory. There is also a list of
* currently-unused PGPROC structs that will be reallocated to new backends.
@@ -456,7 +463,7 @@ extern PGDLLIMPORT int LockTimeout;
extern PGDLLIMPORT int IdleInTransactionSessionTimeout;
extern PGDLLIMPORT int TransactionTimeout;
extern PGDLLIMPORT int IdleSessionTimeout;
-extern PGDLLIMPORT bool log_lock_waits;
+extern PGDLLIMPORT int log_lock_waits;
#ifdef EXEC_BACKEND
extern PGDLLIMPORT slock_t *ProcStructLock;
diff --git a/src/include/utils/backend_status.h b/src/include/utils/backend_status.h
index 97874300c3..f0fba3b0f6 100644
--- a/src/include/utils/backend_status.h
+++ b/src/include/utils/backend_status.h
@@ -319,7 +319,7 @@ extern void pgstat_report_query_id(uint64 query_id, bool force);
extern void pgstat_report_tempfile(size_t filesize);
extern void pgstat_report_appname(const char *appname);
extern void pgstat_report_xact_timestamp(TimestampTz tstamp);
-extern const char *pgstat_get_backend_current_activity(int pid, bool checkUser);
+extern const char *pgstat_get_backend_current_activity(int pid, bool checkUser, bool checkDeadLockTimeout);
extern const char *pgstat_get_crashed_backend_activity(int pid, char *buffer,
int buflen);
extern uint64 pgstat_get_my_query_id(void);
--
2.44.0
I know this was a long time ago; do you still have interest in pursuing this patch, Alexey?
Having this capability would be a big help when debugging locking issues after the fact.
Re: the fact that the last query isn't necessarily the one holding the lock: yes, that's annoying, but even so, knowing the last query executed by the blocking process would help, as it allows one to narrow down which job/process is causing the locking.
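For comparison, without this in the log the blocker's query can only be seen while
the wait is still in progress, e.g. from another session (12345 stands for the
waiting backend's PID):

SELECT pid, state, query
FROM pg_stat_activity
WHERE pid = ANY (pg_blocking_pids(12345));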