Performance monitor
I have started coding a PostgreSQL performance monitor. It will be like
top, but allow you to click on a backend to see additional information.
It will be written in Tcl/Tk. I may ask to add something to 7.1 so when
a backend receives a special signal, it dumps a file in /tmp with some
backend status. It would be done similar to how we handle Cancel
signals.
How do people feel about adding a single handler to 7.1? Is it
something I can slip into the current CVS, or will it have to exist as a
patch to 7.1? Seems it would be pretty isolated unless someone sends
the signal, but it is clearly a feature addition.
We don't really have any way of doing process monitoring except ps, so I
think this is needed. I plan to have something done in the next week or
two.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
On Wed, 7 Mar 2001, Bruce Momjian wrote:
I have started coding a PostgreSQL performance monitor. It will be like
top, but allow you to click on a backend to see additional information.
It will be written in Tcl/Tk. I may ask to add something to 7.1 so when
a backend receives a special signal, it dumps a file in /tmp with some
backend status. It would be done similar to how we handle Cancel
signals.
How do people feel about adding a single handler to 7.1? Is it
something I can slip into the current CVS, or will it have to exist as a
patch to 7.1? Seems it would be pretty isolated unless someone sends
the signal, but it is clearly a feature addition.
Totally dead set against it ...
... the only hold up on RC1 right now was awaiting Vadim getting back so
that he and Tom could work out the WAL related issues ... adding a new
signal handler *definitely* counts as "adding a new feature" ...
How do people feel about adding a single handler to 7.1? Is it
something I can slip into the current CVS, or will it have to exist as a
patch to 7.1? Seems it would be pretty isolated unless someone sends
the signal, but it is clearly a feature addition.
Totally dead set against it ...
... the only hold up on RC1 right now was awaiting Vadim getting back so
that he and Tom could work out the WAL related issues ... adding a new
signal handler *definitely* counts as "adding a new feature" ...
OK, I will distribute it as a patch.
The Hermit Hacker <scrappy@hub.org> writes:
How do people feel about adding a single handler to 7.1?
Totally dead set against it ...
Ditto. Particularly a signal handler that performs I/O. That's going
to create all sorts of re-entrancy problems.
regards, tom lane
Bruce Momjian <pgman@candle.pha.pa.us> writes:
How do people feel about adding a single handler to 7.1? Is it
something I can slip into the current CVS, or will it have to exist as a
patch to 7.1? Seems it would be pretty isolated unless someone sends
the signal, but it is clearly a feature addition.
OK, I will distribute it as a patch.
Patch or otherwise, this approach seems totally unworkable. A signal
handler cannot do I/O safely, it cannot look at shared memory safely,
it cannot even look at the backend's own internal state safely. How's
it going to do any useful status reporting?
Firing up a separate backend process that looks at shared memory seems
like a more useful design in the long run. That will mean exporting
more per-backend status into shared memory, however, and that means that
this is not a trivial change.
regards, tom lane
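To make Tom's suggestion concrete, here is a minimal sketch of per-backend status exported through a shared array that a separate monitor process scans. Every type, name, and field below is invented for illustration; the real change would place this array in PostgreSQL's shared memory and size it from the backend limit.

```c
#include <string.h>
#include <time.h>

#define MAX_BACKENDS 64
#define ACTIVITY_LEN 256

/* One slot per backend; pid == 0 marks an unused slot. */
typedef struct BackendStatus
{
    int    pid;
    time_t query_start;
    char   activity[ACTIVITY_LEN];  /* current query, truncated */
} BackendStatus;

/* Stands in for an array living in shared memory. */
static BackendStatus StatusArray[MAX_BACKENDS];

/* Writer side: each backend updates only its own slot, so a reader
 * sees at worst a torn string, never another backend's data. */
void
ReportActivity(int slot, int pid, const char *query)
{
    BackendStatus *st = &StatusArray[slot];

    st->pid = pid;
    st->query_start = time(NULL);
    strncpy(st->activity, query, ACTIVITY_LEN - 1);
    st->activity[ACTIVITY_LEN - 1] = '\0';
}

/* Reader side: the monitor scans all slots without signalling anyone. */
int
CountActiveBackends(void)
{
    int i, n = 0;

    for (i = 0; i < MAX_BACKENDS; i++)
        if (StatusArray[i].pid != 0)
            n++;
    return n;
}
```

A monitor process, or a function feeding a system view, would read this array directly instead of signalling each backend.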
Bruce Momjian <pgman@candle.pha.pa.us> writes:
How do people feel about adding a single handler to 7.1? Is it
something I can slip into the current CVS, or will it have to exist as a
patch to 7.1? Seems it would be pretty isolated unless someone sends
the signal, but it is clearly a feature addition.
OK, I will distribute it as a patch.
Patch or otherwise, this approach seems totally unworkable. A signal
handler cannot do I/O safely, it cannot look at shared memory safely,
it cannot even look at the backend's own internal state safely. How's
it going to do any useful status reporting?
Why can't we do what we do with Cancel, where we set a flag and check it
at safe places?
Firing up a separate backend process that looks at shared memory seems
like a more useful design in the long run. That will mean exporting
more per-backend status into shared memory, however, and that means that
this is not a trivial change.
Right, that is a lot of work.
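For reference, the Cancel-style pattern Bruce is describing looks roughly like this: the handler only sets a flag, and the real work happens later at a known-safe point. All names here are hypothetical, not the actual backend code.

```c
#include <signal.h>

/* Set from the handler; sig_atomic_t is the only type guaranteed
 * safe to write from signal context. */
static volatile sig_atomic_t StatusDumpPending = 0;

static void
StatusDumpHandler(int signo)
{
    StatusDumpPending = 1;      /* nothing else: no I/O, no malloc */
}

/* Called only from places in the main loop where backend state is
 * known to be consistent, like the existing Cancel check. */
static int
CheckForStatusDump(void)
{
    if (!StatusDumpPending)
        return 0;
    StatusDumpPending = 0;
    /* ... here it would be safe to open a file and write status ... */
    return 1;
}
```

Tom's objection still applies, though: unlike Cancel, a status report needs consistent state to read, so the set of "safe places" is smaller than it first appears.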
Bruce Momjian <pgman@candle.pha.pa.us> writes:
Patch or otherwise, this approach seems totally unworkable. A signal
handler cannot do I/O safely, it cannot look at shared memory safely,
it cannot even look at the backend's own internal state safely. How's
it going to do any useful status reporting?
Why can't we do what we do with Cancel, where we set a flag and check it
at safe places?
There's a lot of assumptions hidden in that phrase "safe places".
I don't think that everyplace we check for Cancel is going to be safe,
for example. Cancel is able to operate in places where the internal
state isn't completely self-consistent, because it knows we are just
going to clean up and throw away intermediate status anyhow if the
cancel occurs.
Also, if you are expecting the answers to come back in a short amount of
time, then you do have to be able to do the work in the signal handler
in cases where the backend is blocked on a lock or something like that.
So that introduces a set of issues about how you know when it's
appropriate to do that and how to be sure that the signal handler
doesn't screw things up when it tries to do the report in-line.
All in all, I do not see this as an easy task that you can whip out and
then release as a 7.1 patch without extensive testing. And given that,
I'd rather see it done with what I consider the right long-term approach,
rather than a dead-end hack. I think doing it in a signal handler is
ultimately going to be a dead-end hack.
regards, tom lane
All in all, I do not see this as an easy task that you can whip out and
then release as a 7.1 patch without extensive testing. And given that,
I'd rather see it done with what I consider the right long-term approach,
rather than a dead-end hack. I think doing it in a signal handler is
ultimately going to be a dead-end hack.
Well, the signal stuff will get me going at least.
At 18:05 7/03/01 -0500, Bruce Momjian wrote:
All in all, I do not see this as an easy task that you can whip out and
then release as a 7.1 patch without extensive testing. And given that,
I'd rather see it done with what I consider the right long-term approach,
rather than a dead-end hack. I think doing it in a signal handler is
ultimately going to be a dead-end hack.
Well, the signal stuff will get me going at least.
Didn't someone say this can't be done safely - or am I missing something?
ISTM that doing the work to put things in shared memory will be much more
profitable in the long run. You have previously advocated self-tuning
algorithms for performance - a prerequisite for these will be performance
data in shared memory.
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/
Hi all,
Wouldn't another approach be to write a C function that does the
necessary work, then just call it like any other C function?
i.e. Connect to the database and issue a "select
perf_stats('/tmp/stats-2001-03-08-01.txt')" ?
Or similar?
Sure, that means another database connection which would change the
resource count but it sounds like a more consistent approach.
Regards and best wishes,
Justin Clift
Philip Warner wrote:
At 18:05 7/03/01 -0500, Bruce Momjian wrote:
All in all, I do not see this as an easy task that you can whip out and
then release as a 7.1 patch without extensive testing. And given that,
I'd rather see it done with what I consider the right long-term approach,
rather than a dead-end hack. I think doing it in a signal handler is
ultimately going to be a dead-end hack.
Well, the signal stuff will get me going at least.
Didn't someone say this can't be done safely - or am I missing something?
ISTM that doing the work to put things in shared memory will be much more
profitable in the long run. You have previously advocated self-tuning
algorithms for performance - a prerequisite for these will be performance
data in shared memory.
At 11:33 8/03/01 +1100, Justin Clift wrote:
Hi all,
Wouldn't another approach be to write a C function that does the
necessary work, then just call it like any other C function?
i.e. Connect to the database and issue a "select
perf_stats('/tmp/stats-2001-03-08-01.txt')" ?
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
I like the idea of updating shared memory with the performance statistics,
current query execution information, etc., providing a function to fetch
those statistics, and perhaps providing a system view (i.e. pg_performance)
based upon such functions which can be queried by the administrator.
FWIW,
Mike Mascari
mascarm@mascari.com
-----Original Message-----
From: Philip Warner [SMTP:pjw@rhyme.com.au]
Sent: Wednesday, March 07, 2001 7:42 PM
To: Justin Clift
Cc: Bruce Momjian; Tom Lane; The Hermit Hacker; PostgreSQL-development
Subject: Re: [HACKERS] Performance monitor
At 11:33 8/03/01 +1100, Justin Clift wrote:
Hi all,
Wouldn't another approach be to write a C function that does the
necessary work, then just call it like any other C function?
i.e. Connect to the database and issue a "select
perf_stats('/tmp/stats-2001-03-08-01.txt')" ?
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
At 19:59 7/03/01 -0500, Mike Mascari wrote:
I like the idea of updating shared memory with the performance statistics,
current query execution information, etc., providing a function to fetch
those statistics, and perhaps providing a system view (i.e. pg_performance)
based upon such functions which can be queried by the administrator.
This sounds like The Way to me. Although I worry that using a view (or
standard libpq methods) might be too expensive in high load situations
(this is not based on any knowledge of the likely costs, however!).
We do need to make this as cheap as possible, since we don't want to
distort the stats; the tool will often be used to diagnose performance
problems, and we don't want to contribute to those problems.
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
My idea was to use 'ps' to gather most of the information, and just use
the internal stats when someone clicked on a backend and wanted more
information.
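A sketch of the ps-scraping side Bruce describes: run something like `ps -o pid,command` through popen() and parse each line into a pid plus the command field (which postgres rewrites to show its current activity). The struct and function names are invented for illustration:

```c
#include <stdio.h>
#include <string.h>

typedef struct PsEntry
{
    int  pid;
    char command[128];
} PsEntry;

/* Parse one "PID COMMAND" line; returns -1 for the header or junk. */
int
ParsePsLine(const char *line, PsEntry *out)
{
    int consumed = 0;

    if (sscanf(line, "%d %n", &out->pid, &consumed) != 1)
        return -1;
    strncpy(out->command, line + consumed, sizeof(out->command) - 1);
    out->command[sizeof(out->command) - 1] = '\0';
    /* strip the trailing newline, if any */
    out->command[strcspn(out->command, "\n")] = '\0';
    return 0;
}
```

Restricting the scrape to an explicit `-o pid,command` format keeps the parsing simple and reasonably portable; anything richer probably wants the shared-memory route instead.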
At 18:05 7/03/01 -0500, Bruce Momjian wrote:
All in all, I do not see this as an easy task that you can whip out and
then release as a 7.1 patch without extensive testing. And given that,
I'd rather see it done with what I consider the right long-term approach,
rather than a dead-end hack. I think doing it in a signal handler is
ultimately going to be a dead-end hack.
Well, the signal stuff will get me going at least.
Didn't someone say this can't be done safely - or am I missing something?
OK, I will write just the all-process display part, that doesn't need
any per-backend info because it gets it all from 'ps'. Then maybe
someone will come up with a nifty idea, or I will play with my local
copy to see how it can be done.
Mike Mascari's idea (er... his assembling of the other ideas) still
sounds like the Best Solution though.
:-)
+ Justin
+++
I like the idea of updating shared memory with the performance statistics,
current query execution information, etc., providing a function to fetch
those statistics, and perhaps providing a system view (i.e. pg_performance)
based upon such functions which can be queried by the administrator.
FWIW,
Mike Mascari
mascarm@mascari.com
+++
Bruce Momjian wrote:
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
My idea was to use 'ps' to gather most of the information, and just use
the internal stats when someone clicked on a backend and wanted more
information.
Yes, seems that is best. I will probably hack something up here so I
can do some testing of the app itself.
Mike Mascari's idea (er... his assembling of the other ideas) still
sounds like the Best Solution though.
:-)
+ Justin
+++
I like the idea of updating shared memory with the performance statistics,
current query execution information, etc., providing a function to fetch
those statistics, and perhaps providing a system view (i.e. pg_performance)
based upon such functions which can be queried by the administrator.
FWIW,
Mike Mascari
mascarm@mascari.com
+++
Bruce Momjian wrote:
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
My idea was to use 'ps' to gather most of the information, and just use
the internal stats when someone clicked on a backend and wanted more
information.
On 2001.03.07 22:06 Bruce Momjian wrote:
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
My idea was to use 'ps' to gather most of the information, and just use
the internal stats when someone clicked on a backend and wanted more
information.
My own experience is that parsing ps can be difficult if you want to be
portable and want more than basic information. Quite clearly, I could just
be dense, but if it helps, you can look at the configure.in in the CVS tree
at http://sourceforge.net/projects/netsaintplug (GPL, sorry. But if you
find anything worthwhile, and borrowing concepts results in similar code, I
won't complain).
I wouldn't be at all surprised if you found a better approach - my
configuration above, to my mind at least, is not pretty. I hope you do find
a better approach - I know I'll be peeking at your code to see.
--
Karl
I wouldn't be at all surprised if you found a better approach - my
configuration above, to my mind at least, is not pretty. I hope you do find
a better approach - I know I'll be peeking at your code to see.
Yes, I have an idea and hope it works.
On Wed, Mar 07, 2001 at 10:06:38PM -0500, Bruce Momjian wrote:
I think Bruce wants per-backend data, and this approach would seem to only
get the data for the current backend.
Also, I really don't like the proposal to write files to /tmp. If we want a
perf tool, then we need to have something like 'top', which will
continuously update. With 40 backends, the idea of writing 40 files to /tmp
every second seems a little excessive to me.
My idea was to use 'ps' to gather most of the information, and just use
the internal stats when someone clicked on a backend and wanted more
information.
Are you sure about the portability of the 'ps' approach? I don't know what
data you want to read from 'ps', but /proc utilities are very OS-specific;
on Linux, for example, libproc has been overhauled several times within a
few years.
I spent several years with /proc stuff (processes manager:
http://home.zf.jcu.cz/~zakkr/kim).
Karel
--
Karel Zak <zakkr@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz