Speaking of pgstats

Started by Magnus Hagander · almost 20 years ago · 6 messages
#1 Magnus Hagander
mha@sollentuna.net

While we're talking about pgstats... There was some talk a while back
about the whole buffer/collector combination perhaps being unnecessary
as well, and that it might be a good idea to simplify it down to just a
collector. I'm not 100% sure what the end result of that discussion was,
though, and I can't find it in the archives :-(

Anyway. I think this might help some of the win32-specific issues.
Considering we had a lot of problems getting it up and running, most
related to the "socket inheritance across two fork/exec steps", I still
think there might be problems lurking there that would simply go away in
a case like this. The overhead is also definitely larger on win32,
considering a process task switch is much more expensive, and considering
that we emulate the pipe using TCP...

So I'd be interested in giving this a shot, but before starting I'd like
to know if people think it's a worthwhile thing, or if it's likely to be
rejected out-of-hand. (Of course, it can always be rejected on
implementation details, or on the fact that it wasn't a good idea, but
if it's already known that it's not a good idea I don't want to spend
time on it.)

The general idea would be to still use UDP backend->stats but get rid of
the pipe part (emulated by standard tcp sockets on win32), so we'd still
have the "lose packets instead of blocking when falling behind".

//Magnus

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Magnus Hagander (#1)
Re: Speaking of pgstats

"Magnus Hagander" <mha@sollentuna.net> writes:

> While we're talking about pgstats... There was some talk a while back
> about the whole buffer/collector combination perhaps being unnecessary
> as well, and that it might be a good idea to simplify it down to just a
> collector. I'm not 100% sure what the end result of that discussion was,
> though, and I can't find it in the archives :-(

Yeah, I was thinking that same thing this morning. AFAIR we designed
the current structure "on paper" in a pghackers thread, and never did
any serious experimentation to prove that it was worth having the extra
process. I concur it's worth at least testing the simpler method.

> The general idea would be to still use UDP backend->stats but get rid of
> the pipe part (emulated by standard tcp sockets on win32), so we'd still
> have the "lose packets instead of blocking when falling behind".

Right.

regards, tom lane

#3 Agent M
agentm@themactionfaction.com
In reply to: Tom Lane (#2)
Re: Speaking of pgstats

>> The general idea would be to still use UDP backend->stats but get rid of
>> the pipe part (emulated by standard tcp sockets on win32), so we'd still
>> have the "lose packets instead of blocking when falling behind".
>
> Right.

Please correct me if I am wrong, but using UDP logging on the same
computer is a red herring. Any non-blocking I/O would do, no? If the
buffer is full, then the non-blocking I/O send function will fail and
the message is skipped.
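
A minimal sketch of that idea, assuming an already-open descriptor to
the collector and POSIX fcntl(); set_nonblocking and nonblocking_send
are illustrative names only, not anything in the existing code:

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Put an already-open descriptor (e.g. the pipe to the collector)
 * into non-blocking mode. */
static void
set_nonblocking(int fd)
{
    int         flags = fcntl(fd, F_GETFL, 0);

    (void) fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

static void
nonblocking_send(int fd, const void *msg, size_t len)
{
    if (write(fd, msg, len) < 0 &&
        (errno == EAGAIN || errno == EWOULDBLOCK))
        return;                 /* buffer full: skip this message */

    /*
     * Caveat: write() on a byte stream can also accept only part of the
     * message, which is the wrinkle the next reply turns on.
     */
}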

Has anyone observed UDP ever drop *written* packets on loopback?
Looking at the Darwin 8 sources, it appears that the loopback streams
all converge to the same stream code, which makes sense...

If a kernel is too busy to handle I/O, doesn't it have higher
priorities than switching to a user context?

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Agent M (#3)
Re: Speaking of pgstats

Agent M <agentm@themactionfaction.com> writes:

> Please correct me if I am wrong, but using UDP logging on the same
> computer is a red herring. Any non-blocking I/O would do, no? If the
> buffer is full, then the non-blocking I/O send function will fail and
> the message is skipped.

Uh, not entirely. We'd like the thing to drop complete messages, not
inject partial messages into the channel causing reader parsing errors.
This is one reason for liking UDP semantics better than pipe semantics.
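
A reader-side sketch of that distinction, with illustrative names
(collector_loop, handle_stats_message and the buffer size are made up
for the example). With datagrams, every successful recv() hands the
collector exactly one complete message, so a dropped packet never
confuses the parser; on a byte-stream pipe or TCP socket, the collector
would have to find message boundaries itself, and a partial message left
behind by a sender that bailed out mid-write breaks the framing for
everything after it.

#include <sys/types.h>
#include <sys/socket.h>

#define STATS_MSG_MAX 1000      /* assumed upper bound on one message */

static void
collector_loop(int udp_sock)
{
    char        buf[STATS_MSG_MAX];

    for (;;)
    {
        /* One whole message per successful call; message boundaries are
         * preserved by the datagram socket, not reconstructed here. */
        ssize_t     len = recv(udp_sock, buf, sizeof(buf), 0);

        if (len <= 0)
            continue;           /* nothing usable this time around */

        /* handle_stats_message(buf, len);   hypothetical handler */
    }
}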

regards, tom lane

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Magnus Hagander (#1)
Re: Speaking of pgstats

"Magnus Hagander" <mha@sollentuna.net> writes:

> While we're talking about pgstats... There was some talk a while back
> about the whole buffer/collector combination perhaps being unnecessary
> as well, and that it might be a good idea to simplify it down to just a
> collector. I'm not 100% sure what the end result of that discussion was,
> though, and I can't find it in the archives :-(

After a bit of archives-digging, I think you must be remembering this
thread:
http://archives.postgresql.org/pgsql-hackers/2006-01/msg00074.php
which was considering not only abandoning the intermediate buffer
process, but abandoning the assumption that it's OK to drop messages
under load. We might or might not be ready to go that far, but it's worth
re-reading and reflecting --- see particularly Jan's comment at
http://archives.postgresql.org/pgsql-hackers/2006-01/msg00088.php

regards, tom lane

#6 Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Tom Lane (#2)
Re: Speaking of pgstats

Tom Lane wrote:

"Magnus Hagander" <mha@sollentuna.net> writes:

While we're talking about pgstats... There was some talk a while back
about the whole bufferer/collector combination perhaps being unnecessary
as well, and that it might be a good idea to simplify it down to just a
collector. I'm not 100% sure what the end result of that discussion was,
thouhg, and I can't find it in the archives :-(

Yeah, I was thinking that same thing this morning. AFAIR we designed
the current structure "on paper" in a pghackers thread, and never did
any serious experimentation to prove that it was worth having the extra
process. I concur it's worth at least testing the simpler method.

My research is in the hold queue:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

Subject is "Stats collector performance improvement". I am waiting for
someone to confirm my tests on other platforms before moving forward,
but we really should do something for 8.2. If someone else wants to
work on it, go ahead. All my work is in those emails.

--
Bruce Momjian http://candle.pha.pa.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +