On-demand running query plans using auto_explain and signals
Hello,
The other day I rediscovered the exciting idea of using signals to dump
a backend's currently executing query plan, which, it turns out, was already
proposed by Pavel and Simon in these threads:
/messages/by-id/CAFj8pRA-DuzkmDtu52CiUgb0P7TVri_B8LtjMJfWcnr1LPts6w@mail.gmail.com
/messages/by-id/CAFj8pRDEo24joEg4UFRDYeFADFTw-jw_=t=kPwOyDW=v=g1Fhg@mail.gmail.com
Unfortunately, the latest one is missing an actual patch (or the attachment
was scrubbed), and I'm really surprised that the idea didn't take off back then.
While evaluating it myself I decided to simply patch the auto_explain module,
which is in contrib, so it should presumably be available to a broad
audience. A proof-of-concept patch against master is attached (a patch for
an older branch like 9.0 requires trivial manual effort to adapt).
While I can see the value of in-core support for this, it is realistically
unlikely to be included in 9.5, but an ad hoc patch could be useful long
before that happens.
What this patch does is simply dump the plan of the query currently being
run by the executor to the log when the backend is signaled with SIGUSR1.
The auto_explain module must be loaded into the backend beforehand, of
course; session_preload_libraries seems like the place to do that cluster-wide.
Probably using SIGUSR2 would be more appropriate, but I'm not sure if there
are other extensions out there that might already be using it for some
other reason (well, I don't know that for SIGUSR1 either). Looking at the
current state of affairs in procsignal_sigusr1_handler() makes me believe
it should be pretty safe to catch the signal like I do. Or is that not the
case?
The current_query_desc probably needs to be a stack-like structure in order
to keep track of the nested queries correctly, but it works in the simplest
cases.
What would be even more useful is including stats from the running query in
the explain output, if you're willing to pay a (hopefully small)
overhead. Unfortunately, that doesn't work out of the box: if you enable
auto_explain.log_analyze and friends in the .conf file, you either get
all-zero counts, or, if you're really unlucky, an error from InstrEndLoop():
ERROR: InstrEndLoop called on running node
The latest patch in this area I could find is this one:
/messages/by-id/87wsn82lda.fsf@oxford.xeocode.com
From what I can see, there's no way around this problem short of hacking
InstrEndLoop... Has anything changed in this area since 2008? I would
really love to have a way to make this work with existing unpatched
servers.
Cheers!
--
Alex
Attachments:
explain-on-signal.patch (text/x-patch; charset=US-ASCII) [+78 -39]
On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
Probably using SIGUSR2 would be more appropriate, but I'm not sure if there
are other extensions out there that might be already using it for some
other reason (well, I do not know that for SIGUSR1 either). Looking at the
current state of affairs in procsignal_sigusr1_handler() makes me believe
it should be pretty safe to catch the signal like I do. Or is that not the
case?
You can catch signals, but you're not allowed to do a lot from
them. Anything allocating memory, acquiring locks, etc. is out - these
functions aren't reentrant. If you can guarantee that you're not
interrupting any relevant code you can bend those rules, but that's
obviously not the case here.
Check out the list of async-signal-safe functions at http://man7.org/linux/man-pages/man7/signal.7.html
Andres
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Sat, Aug 29, 2015 at 5:44 PM, Andres Freund <andres@anarazel.de> wrote:
On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
Probably using SIGUSR2 would be more appropriate, but I'm not sure if
there
are other extensions out there that might be already using it for some
other reason (well, I do not know that for SIGUSR1 either). Looking at the
current state of affairs in procsignal_sigusr1_handler() makes me believe
it should be pretty safe to catch the signal like I do. Or is that not the
case?
You can catch signals, but you're not allowed to do a lot from
them. Anything allocating memory, acquiring locks, etc. is out - these
functions aren't reentrant. If you can guarantee that you're not
interrupting any relevant code you can bend those rules, but that's
obviously not the case here.
Check out the list of async-signal-safe functions at
http://man7.org/linux/man-pages/man7/signal.7.html
Good point. There's still hope to set a flag in the handler and process it
later on. I'll have to check whether it's possible to stay within the scope
of a loaded module, though.
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Sat, Aug 29, 2015 at 5:44 PM, Andres Freund <andres@anarazel.de> wrote:
On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
Probably using SIGUSR2 would be more appropriate, but I'm not sure if
there
are other extensions out there that might be already using it for some
other reason (well, I do not know that for SIGUSR1 either). Looking at the
current state of affairs in procsignal_sigusr1_handler() makes me
believe
it should be pretty safe to catch the signal like I do. Or is that not
the
case?
You can catch signals, but you're not allowed to do a lot from
them. Anything allocating memory, acquiring locks, etc. is out - these
functions aren't reentrant. If you can guarantee that you're not
interrupting any relevant code you can bend those rules, but that's
obviously not the case here.
Check out the list of async-signal-safe functions at
http://man7.org/linux/man-pages/man7/signal.7.html
Good point. There's still hope to set a flag and process it later on.
Will have to check if it's possible to stay in the scope of a loaded module
though.
I had a workable prototype - it was implemented very similarly to the
handling of CANCEL.
Pavel
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>
Good point. There's still hope to set a flag and process it later on.
Will have to check if it's possible to stay in the scope of a loaded module
though.
I had a workable prototype - and It was implemented very similar as
handling CANCEL
Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?
Andres
2015-08-29 18:36 GMT+02:00 Andres Freund <andres@anarazel.de>:
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <
oleksandr.shulgin@zalando.de>
Good point. There's still hope to set a flag and process it later on.
Will have to check if it's possible to stay in the scope of a loaded module
though.
I had a workable prototype - and It was implemented very similar as
handling CANCEL
Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?
Probably. I don't remember it well, but it needed hacking core code - it
cannot be done from an extension.
Pavel
Andres
On Aug 29, 2015 7:31 PM, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
2015-08-29 18:36 GMT+02:00 Andres Freund <andres@anarazel.de>:
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <
oleksandr.shulgin@zalando.de>
Good point. There's still hope to set a flag and process it later
on.
Will have to check if it's possible to stay in the scope of a loaded
module
though.
I had a workable prototype - and It was implemented very similar as
handling CANCEL
Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?
Probably. I don't remember it well, but it need hack code - it cannot be
used from extension.
Do you still have the code somewhere around? Did it see production use?
Thanks!
--
Alex
2015-08-30 10:30 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Aug 29, 2015 7:31 PM, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
2015-08-29 18:36 GMT+02:00 Andres Freund <andres@anarazel.de>:
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <
oleksandr.shulgin@zalando.de>
Good point. There's still hope to set a flag and process it later
on.
Will have to check if it's possible to stay in the scope of a
loaded module
though.
I had a workable prototype - and It was implemented very similar as
handling CANCEL
Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?
Probably. I don't remember it well, but it need hack code - it cannot be
used from extension.
Do you still have the code somewhere around? Did it see production use?
I am not sure I am able to find it - I'll try. We didn't use it on
production.
Thanks!
--
Alex
2015-08-30 10:34 GMT+02:00 Pavel Stehule <pavel.stehule@gmail.com>:
2015-08-30 10:30 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Aug 29, 2015 7:31 PM, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
2015-08-29 18:36 GMT+02:00 Andres Freund <andres@anarazel.de>:
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <
oleksandr.shulgin@zalando.de>
Good point. There's still hope to set a flag and process it later
on.
Will have to check if it's possible to stay in the scope of a
loaded module
though.
I had a workable prototype - and It was implemented very similar as
handling CANCEL
Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?
Probably. I don't remember it well, but it need hack code - it cannot
be used from extension.
Do you still have the code somewhere around? Did it see production use?
I sent it to the mailing list a year ago:
/messages/by-id/CAFj8pRAXcS9B8ABgiM-zauVgGqDhPZOaRz5YSp1_Nhv9HP8nKw@mail.gmail.com
Regards
Pavel
I am not sure I am able to find it - I'll try. We didn't use it on
production.
Thanks!
--
Alex
Do you still have the code somewhere around? Did it see production use?
I sent it to mailing list year ago
/messages/by-id/CAFj8pRAXcS9B8ABgiM-zauVgGqDhPZOaRz5YSp1_Nhv9HP8nKw@mail.gmail.com
Ah, thanks! Somehow I missed that mail. You didn't add the patch to a
commitfest back then, I think?
--
Alex
2015-08-31 11:30 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
Do you still have the code somewhere around? Did it see production use?
I sent it to mailing list year ago
/messages/by-id/CAFj8pRAXcS9B8ABgiM-zauVgGqDhPZOaRz5YSp1_Nhv9HP8nKw@mail.gmail.com
Ah, thanks! Somehow I've missed this mail. You didn't add the patch to a
commitfest back then I think?
I had no time to finish this patch - there are a few issues in signal
handling and returning the result back - but I still want it :) - and as
far as I know, almost all other SQL databases have similar functionality.
--
Alex
On Mon, Aug 31, 2015 at 12:35 PM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:
/messages/by-id/CAFj8pRAXcS9B8ABgiM-zauVgGqDhPZOaRz5YSp1_Nhv9HP8nKw@mail.gmail.com
Ah, thanks! Somehow I've missed this mail. You didn't add the patch to
a commitfest back then I think?
I had no time to finish this patch - there is few issues in signal
handling and returning back result - but still I want it :) - and what I
know - almost all other SQL db has similar functionality.
I've updated the patch for the current master and also added some handling
of unexpected parameters, so attached is a v2.
I'd say we should hide the so-designed pg_cmdstatus() interface behind more
friendly calls like pg_explain_backend() and pg_backend_progress() (to give
some naming examples), to remove the need for magic numbers in the second
arg.
What I've found missing in this approach is the insight into nested
executor runs, so that if you're running a "SELECT my_func()", you only see
this outer query in the pg_cmdstatus() output. With the auto_explain
approach, by hooking into executor I was able to capture the nested queries
and their plans as well.
It's conceptually trivial to add some code to use the executor hooks here,
but I don't see any precedent for this except in contrib modules
(auto_explain and pg_stat_statements); I'm just not sure if that would be
OK-ish.
And when we solve that, there is another problem of having a sane interface
to query the nested plans. For a psql user, probably the most interesting
would be the topmost (level=1) and the innermost (e.g. level=-1) plans. We
might also want to provide a full nesting of plans in a structured format
like JSON or... *cough* XML, for programs to consume and display nicely
with folding and stuff.
And the most interesting would be making instrumentation work with all of
the above.
I'm adding this to the next CF.
--
Alex
Attachments:
explain-pid-v2.patch (text/x-patch; charset=US-ASCII) [+546 -1]
Hi
2015-08-31 19:09 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Mon, Aug 31, 2015 at 12:35 PM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:
/messages/by-id/CAFj8pRAXcS9B8ABgiM-zauVgGqDhPZOaRz5YSp1_Nhv9HP8nKw@mail.gmail.com
Ah, thanks! Somehow I've missed this mail. You didn't add the patch to
a commitfest back then I think?
I had no time to finish this patch - there is few issues in signal
handling and returning back result - but still I want it :) - and what I
know - almost all other SQL db has similar functionality.
I've updated the patch for the current master and also added some
unexpected parameters handling, so attached is a v2.
Thank you very much
I'd say we should hide the so-designed pg_cmdstatus() interface behind
more friendly calls like pg_explain_backend() and pg_backend_progress() to
give some naming examples, to remove the need for magic numbers in the
second arg.
I had a similar idea - this is good enough for a start, but the target
interface is based on integration with the EXPLAIN statement, something
like EXPLAIN PROCESS or EXPLAIN PID or EXPLAIN VERBOSE PID ..
What I've found missing in this approach is the insight into nested
executor runs, so that if you're running a "SELECT my_func()", you only see
this outer query in the pg_cmdstatus() output. With the auto_explain
approach, by hooking into executor I was able to capture the nested queries
and their plans as well.
I understand - originally I didn't think about nested queries, but it is a
good idea and probably not a problem:
Not for XML and JSON, where we can describe nesting simply.
It is a little bit harder for plain text - but we can use a format similar
to the one used for subplans, or something like:
top query:
SELECT fx()
nested (1st level) query:
SELECT ....
It's conceptually trivial to add some code to use the Executor hooks here,
but I don't see any precedent for this except for contrib modules
(auto_explain and pg_stat_statements), I'm just not sure if that would be
OK-ish.
And when we solve that, there is another problem of having a sane
interface to query the nested plans. For a psql user, probably the most
interesting would be the topmost (level=1) and the innermost (e.g.
level=-1) plans. We might also want to provide a full nesting of plans in
a structured format like JSON or... *cough* XML, for programs to consume
and display nicely with folding and stuff.
And the most interesting would be making instrumentation work with all of
the above.
The important functionality is printing the complete text of the query - it
was my original motivation, because I had no way to get the complete query
before it finished.
Probably the communication between processes should be more complex :( -
the SHM queue should be used there, because some plans can be terribly long.
Using a single shared write buffer (one for all) is probably too simple a
solution - good for a prototype, but not good for core.
I have an idea about the communication:
1. the caller prepares the buffer and the shm queue and signals the target
process - the parameter is the pid of the caller
2. the target process fills the write buffer and closes the queue
3. the caller shows the data, then closes the buffer and the queue
Almost all of the code for this communication is now in upstream - the
missing part is injecting one end of the queue into any process dynamically.
Regards
Pavel
I'm adding this to the next CF.
--
Alex
I'd say we should hide the so-designed pg_cmdstatus() interface behind
more friendly calls like pg_explain_backend() and pg_backend_progress() to
give some naming examples, to remove the need for magic numbers in the
second arg.
I had similar idea - this is good enough for start, but target interface
is based on integration with EXPLAIN statement
some like EXPLAIN PROCESS or EXPLAIN PID or EXPLAIN VERBOSE PID ..
Yes, that's another way to do it.
the important functionality is drawing complete text of query - it was my
original motivation, because I had not way how to get complete query before
its finishing
Probably the communication between processes should be more complex :( -
the SHM queue should be used there, because some plans can be terrible long.
The using shared write buffer (one for all) is too simply solution
probably - good for prototype, but not good for core.
I have a idea about communication:
1. caller prepare buffer, shm queue and signalize target process -
parameter is pid od caller
2. target process fills a write buffer and close queue
3. caller show data and close buffer, close queue
Now almost all code for communication is in upstream - the missing part is
injection one end of queue to any process dynamicaly.
I'm not familiar with the shared memory handling, but could we not allocate
just enough shared memory to fit the data we're going to write instead of
the fixed 8k? It's not as if we cannot know the length of the resulting
plan text in advance.
I think we can remove buffer_is_free/buffer_holds_data and just use the
buffer != NULL to check if there's some data to be read (and buffer == NULL
to check if we can write).
--
Alex
2015-09-01 15:00 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
I'd say we should hide the so-designed pg_cmdstatus() interface behind
more friendly calls like pg_explain_backend() and pg_backend_progress() to
give some naming examples, to remove the need for magic numbers in the
second arg.
I had similar idea - this is good enough for start, but target interface
is based on integration with EXPLAIN statement
some like EXPLAIN PROCESS or EXPLAIN PID or EXPLAIN VERBOSE PID ..
Yes, that's another way to do it.
the important functionality is drawing complete text of query - it was my
original motivation, because I had not way how to get complete query before
its finishing
Probably the communication between processes should be more complex :( -
the SHM queue should be used there, because some plans can be terrible long.
The using shared write buffer (one for all) is too simply solution
probably - good for prototype, but not good for core.
I have a idea about communication:
1. caller prepare buffer, shm queue and signalize target process -
parameter is pid od caller
2. target process fills a write buffer and close queue
3. caller show data and close buffer, close queue
Now almost all code for communication is in upstream - the missing part
is injection one end of queue to any process dynamicaly.
I'm not familiar with the shared memory handling, but could we not
allocate just enough shared memory to fit the data we're going to write
instead of the fixed 8k? It's not that we cannot know the length of the
resulting plan text in advance.
The shared memory cannot be reused (released) :(, so allocating just enough
memory is not workable. Moreover, at this moment it makes no sense - the
shared memory queue can do almost all the work.
I think we can remove buffer_is_free/buffer_holds_data and just use the
buffer != NULL to check if there's some data to be read (and buffer == NULL
to check if we can write).
--
Alex
I'm not familiar with the shared memory handling, but could we not
allocate just enough shared memory to fit the data we're going to write
instead of the fixed 8k? It's not that we cannot know the length of the
resulting plan text in advance.
the shared memory cannot be reused - (released) :(, so allocating enough
memory is not effective. More - in this moment it has not sense. Shared
memory queue can do almost all work.
A-ha, I've discovered the shared memory message queue facility, and I see
how we can use it.
But do we really need the slots mechanism? Would it not be OK to just let
the LWLock do the sequencing of concurrent requests? Given that we're only
going to use one message queue per cluster, there's not much concurrency to
gain by introducing slots, I believe.
--
Alex
2015-09-01 17:20 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
I'm not familiar with the shared memory handling, but could we not
allocate just enough shared memory to fit the data we're going to write
instead of the fixed 8k? It's not that we cannot know the length of the
resulting plan text in advance.
the shared memory cannot be reused - (released) :(, so allocating enough
memory is not effective. More - in this moment it has not sense. Shared
memory queue can do almost all work.
A-ha, I've discovered the shared memory message queue facility and I see
how we can use it.
But do we really need the slots mechanism? Would it not be OK to just let
the LWLock do the sequencing of concurrent requests? Given that we only
going to use one message queue per cluster, there's not much concurrency
you can gain by introducing slots I believe.
I'm afraid of problems in production. When you have a queue related to a
process, then all problems should be gone after the process ends. One
message queue per cluster needs a cluster restart when some pathological
problem occurs - and you cannot restart a cluster in production for a week,
sometimes weeks. The slots are more robust.
Pavel
--
Alex
On Tue, Sep 1, 2015 at 7:02 PM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:
But do we really need the slots mechanism? Would it not be OK to just
let the LWLock do the sequencing of concurrent requests? Given that we
only going to use one message queue per cluster, there's not much
concurrency you can gain by introducing slots I believe.
I afraid of problems on production. When you have a queue related to any
process, then all problems should be off after end of processes. One
message queue per cluster needs restart cluster when some pathological
problems are - and you cannot restart cluster in production week, sometimes
weeks. The slots are more robust.
Yes, but in your implementation the slots themselves don't have a
queue/buffer. Did you intend to have a message queue per slot?
What sort of pathological problems are you concerned about? The
communicating backends should just detach from the message queue properly
and have some timeout configured to prevent deadlocks. Other than that, I
don't see how having N slots really helps: in case of pathological problems
you will just deplete them all sooner or later.
--
Alex
2015-09-02 11:01 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Tue, Sep 1, 2015 at 7:02 PM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:But do we really need the slots mechanism? Would it not be OK to just
let the LWLock do the sequencing of concurrent requests? Given that we
only going to use one message queue per cluster, there's not much
concurrency you can gain by introducing slots I believe.
I afraid of problems on production. When you have a queue related to any
process, then all problems should be off after end of processes. One
message queue per cluster needs restart cluster when some pathological
problems are - and you cannot restart cluster in production week, sometimes
weeks. The slots are more robust.
Yes, but in your implementation the slots themselves don't have a
queue/buffer. Did you intend to have a message queue per slot?
The message queue cannot be reused, so I expect one slot per caller to be
used for passing parameters - the message queue will be created/released on
demand by the caller.
What sort of pathological problems are you concerned of? The
communicating backends should just detach from the message queue properly
and have some timeout configured to prevent deadlocks. Other than that, I
don't see how having N slots really help the problem: in case of
pathological problems you will just deplete them all sooner or later.
I'm afraid of unexpected problems :) - any part of signal handling or
multiprocess communication is fragile. Slots are simple and can be attached
to any process without the need to alloc/free memory.
Pavel
--
Alex
On Wed, Sep 2, 2015 at 11:16 AM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:
2015-09-02 11:01 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
On Tue, Sep 1, 2015 at 7:02 PM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:
But do we really need the slots mechanism? Would it not be OK to just
let the LWLock do the sequencing of concurrent requests? Given that we
only going to use one message queue per cluster, there's not much
concurrency you can gain by introducing slots I believe.
I afraid of problems on production. When you have a queue related to any
process, then all problems should be off after end of processes. One
message queue per cluster needs restart cluster when some pathological
problems are - and you cannot restart cluster in production week, sometimes
weeks. The slots are more robust.
Yes, but in your implementation the slots themselves don't have a
queue/buffer. Did you intend to have a message queue per slot?
The message queue cannot be reused, so I expect one slot per caller to be
used passing parameters, - message queue will be created/released by demand
by caller.
I don't believe a message queue really cannot be reused. What would stop
us from calling shm_mq_create() on the queue struct again?
To give you an idea, in my current prototype I have only the following
struct:
typedef struct {
    LWLock *lock;
    /* CmdStatusInfoSlot slots[CMDINFO_SLOTS]; */
    pid_t target_pid;
    pid_t sender_pid;
    int request_type;
    int result_code;
    shm_mq buffer;
} CmdStatusInfo;
An instance of this is allocated in shared memory once, using a BUFFER_SIZE
of 8k.
In pg_cmdstatus() I take the LWLock to check whether target_pid is 0, which
means nobody else is using this communication channel at the moment. If
that's the case, I set the pids and request_type and initialize the mq
buffer. Otherwise I just sleep and retry acquiring the lock (a timeout
should probably be added here).
What sort of pathological problems are you concerned of? The communicating
backends should just detach from the message queue properly and have some
timeout configured to prevent deadlocks. Other than that, I don't see how
having N slots really help the problem: in case of pathological problems
you will just deplete them all sooner or later.
I afraid of unexpected problems :) - any part of signal handling or
multiprocess communication is fragile. Slots are simple and simply attached
to any process without necessity to alloc/free some memory.
Yes, but do slots solve the actual problem? If there is only one message
queue, you still have the same problem regardless of the number of slots
you decide to have.
--
Alex