Setting up a trace through extended stored procedures
This question is for the hardcore SQL Server gurus out there.
In summary, the question pertains to setting up a custom trace using
extended stored procedures in production. The idea is to find the source of
a deadlock that occurs on an occasional basis. My question deals mainly with
the following stored procedures:
xp_trace_setqueryhistory
xp_trace_flushqueryhistory
xp_trace_addnewqueue
Here is the detail: every other day or so, an insert to one particular table
fails in the production environment. Unfortunately, the error-handling code
doesn't trap the error number, so I am not sure what error is occurring and
causing the insert to fail. I have tried and have been unsuccessful in
reproducing the problem in the development environment.
I am guessing that it is due to a deadlock and would like to find the
source of the problem through SQL Profiler. Since I am not at the site, I
can't just fire up SQL Profiler and try to trace the source of the problem.
After some research, I found a way to set up a custom trace that will log
the Profiler output to a log file (or to a table); please see
xp_trace_addnewqueue in BOL.
My concern is that the log file may get too big and take up all the disk
space if the trace runs for too long. I also saw another extended procedure
that flushes the last 100 events from the trace to a log file, but that is
only for a particular trace (please see xp_trace_setqueryhistory and
xp_trace_flushqueryhistory).
My question, therefore, is: can I make the trace dump only the last, say,
5,000 events to the file when the deadlock occurs? I could set up an alert
for the deadlock error number. This alert could then execute a command
similar to xp_trace_flushqueryhistory, showing me only the last 5,000
events at the time the error occurs.
Any help would be appreciated.
Thank you.
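[Editor's note: the alert half of this idea can be sketched with msdb's sp_add_alert; error 1205 is the deadlock-victim error. The alert name and job name below are hypothetical placeholders, and the responding job would have to contain whatever trace-flush logic is eventually settled on:]

```sql
-- Sketch only: fire a job whenever error 1205 (deadlock victim) is raised.
-- 'Deadlock detected' and 'Flush trace on deadlock' are placeholder names.
USE msdb
GO
EXEC sp_add_alert
    @name = 'Deadlock detected',            -- hypothetical alert name
    @message_id = 1205,                     -- deadlock victim error number
    @severity = 0,                          -- 0 = match on message_id, not severity
    @delay_between_responses = 60,          -- seconds between repeated firings
    @job_name = 'Flush trace on deadlock'   -- hypothetical responding job
GO
```

[Note that SQL Server Agent alerts only fire on errors written to the event log, and 1205 is not logged by default; it may first need `EXEC sp_altermessage 1205, 'WITH_LOG', 'true'`.]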
What about setting up a job that resets the trace every
hour?
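[Editor's note: an hourly reset job along those lines could be sketched with the msdb job procedures. The job name and the step command are placeholders, assuming the stop/start logic lives in a stored procedure of your own:]

```sql
-- Sketch: an hourly job that restarts the trace so no single file grows unbounded.
-- 'Recycle deadlock trace' and usp_RestartTrace are hypothetical names.
USE msdb
GO
EXEC sp_add_job @job_name = 'Recycle deadlock trace'
EXEC sp_add_jobstep
    @job_name = 'Recycle deadlock trace',
    @step_name = 'Restart trace',
    @subsystem = 'TSQL',
    @command = 'EXEC master.dbo.usp_RestartTrace'   -- your own stop/start wrapper
EXEC sp_add_jobschedule
    @job_name = 'Recycle deadlock trace',
    @name = 'Hourly',
    @freq_type = 4,             -- daily
    @freq_interval = 1,         -- every day
    @freq_subday_type = 8,      -- repeat on an hourly unit
    @freq_subday_interval = 1   -- every 1 hour
EXEC sp_add_jobserver @job_name = 'Recycle deadlock trace'  -- target the local server
GO
```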
Now why didn't I think of that? Thanks!!
"Robert Lummert" <rl@w4u.com> wrote in message
news:3C7A0A6A.9010407@w4u.com...
I don't have Books Online in front of me right now, and don't remember the
syntax, but... you can set a maximum size for a trace file. You can set this
up from the Profiler GUI and then save the trace definition to see the
correct syntax. I know this isn't a lot of detail, but I think it should
help you. I'll try to post more detail when I get to work and have BOL
handy...
--
Brian Moran
SQL Server MVP
"Adam" <nospam@nospam.com> wrote in message
news:Lkhe8.104765$s43.24721268@typhoon.columbus.rr.com...
Brian,
I was excited to see your response and immediately went to SQL Profiler to
poke around. I didn't see any option to set the file size (using version
7.0), but I did find a way to limit the number of rows displayed in the
console: under Tools, Options, on the Display tab, there is a setting
labeled "Number of lines of text data displayed in the window".
After removing SQL Profiler from the filter and starting the trace,
however, I unfortunately did not see any calls made to limit the number of
events/transactions to track. What am I missing? I only saw a series of the
following statement, executed with different input parameter values to
define the different events to trace:
declare @P1 int
declare @P2 int
declare @P3 int
set @P1=5   -- trace queue handle
set @P2=10  -- event class number
set @P3=1   -- 1 = include (require) this event class
exec xp_trace_seteventclassrequired @P1, @P2, @P3
Thanks for your help so far.
BTW: I enjoy your articles in SQL Server Magazine. It was a pleasant
surprise to see your name here as well.
Adam
"Brian Moran" <brianm@crosstier.com> wrote in message
news:#sQLpfsvBHA.2468@tkmsftngp04...