Multiple postgres.exe Processes
Dear All,
I am concerned that the large number of postgres.exe processes that appears after only one or two logins may slow the server down, or even bring it down entirely. Is there any solution or technique to overcome this overhead?
Regards,
Abdul Rehman.
Abdul Rahman wrote:
I am concerned that the large number of postgres.exe processes that appears after
only one or two logins may slow the server down, or even bring it down
entirely. Is there any solution or technique to overcome this overhead?
Each connection runs one process, plus the 3 master processes: the
postmaster, the writer, etc. With two logins, I'd expect to see 5
processes; with 102 concurrent logins, 105 processes. The bulk of the
memory and code is shared by these processes, with the exception of
things like per-client work_mem buffers, which by definition can't be
shared, as they are used to service that connection's requests.
What overhead are you talking about?
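As an illustration of the process-per-connection model described above, here is a hedged sketch in Python (not PostgreSQL code; the echo handler and ephemeral port are made up for the example): a master process listens, and each accepted connection is served by its own forked child, much as the postmaster forks one backend per client.

```python
# Toy illustration of the process-per-connection model (not PostgreSQL
# source): the master accepts connections and forks one child per client.
import os
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Runs in a forked child -- analogous to one postgres backend
        # serving one client connection.
        line = self.rfile.readline().strip()
        self.wfile.write(b"pid=%d data=%s\n" % (os.getpid(), line))

def start_master():
    # ForkingTCPServer forks a new process per accepted connection,
    # like the postmaster forking backends.
    server = socketserver.ForkingTCPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

master = start_master()
with socket.create_connection(master.server_address) as sock:
    sock.sendall(b"hello\n")
    reply = sock.makefile().readline().strip()
master.shutdown()
# reply is of the form "pid=<child pid> data=hello", and the pid
# differs from the master's, since a forked child handled the request.
```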
Hi,
I am concerned that the large number of postgres.exe processes that appears
after only one or two logins may slow the server down, or even bring it down
entirely. Is there any solution or technique to overcome this overhead?
Did you test this?
What OS are you using?
With PostgreSQL, every connection gets a process, and processes can be reused,
so unless you have a very low-spec system, that should not be a problem.
PostgreSQL is designed as a client/server architecture, which requires
separate processes for each client and server, and various helper processes.
Many embedded architectures can support such requirements. However, if your
embedded architecture requires the database server to run inside the
application process, you cannot use Postgres and should select a
lighter-weight database solution.
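For contrast, here is a sketch of what an in-process (embedded) database looks like, using Python's stdlib sqlite3 module (illustrative only; the table and values are made up): the whole database engine runs inside the application process, with no separate server processes at all.

```python
# Embedded, in-process database: no postmaster, no backends -- the
# engine is a library loaded into this very process.
import sqlite3

conn = sqlite3.connect(":memory:")  # the whole database lives in-process
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1), (2), (3)")
total = conn.execute("SELECT SUM(x) FROM t").fetchone()[0]
print(total)  # 6
```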
I could not find the reason why this approach was chosen by the
developers.
Hope this helps
Regards,
Serge Fonville
I am concerned that the large number of postgres.exe processes that appears after
only one or two logins may slow the server down, or even bring it down entirely. Is
there any solution or technique to overcome this overhead?
I did some more searching and found "Is PostgreSQL multi-threaded?"
<http://archives.postgresql.org/pgsql-general/2000-05/msg00731.php>
in the archives.
Just adding:
"...the bulk of the memory and code is shared by these processes, with the
exception of things like per client work_mem buffers which by definition
can't be shared..."
Please be advised that the default view of Task Manager (XP) does NOT
show this memory as shared, but as "multiple multi-megabyte
processes". In other words, the default view of Task Manager gives the
impression that each process uses its own copy of the shared memory.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
EuroPython 2009 will take place in Birmingham - Stay tuned!
Dear All,
Thanks, John R Pierce, for the helpful reply. I would like to add some text from the PostgreSQL documentation to your reply for further clarification, i.e.:
Each connection runs one process, plus the 3 master processes: the
postmaster, the writer, etc. With two logins, I'd expect to see 5
processes; with 102 concurrent logins, 105 processes. [From Pierce]
The PostgreSQL server can handle multiple concurrent connections from clients. For that purpose it starts
(“forks”) a new process for each connection. From that point on, the client and the new server process
communicate without intervention by the original postgres process. Thus, the master server process is
always running, waiting for client connections, whereas client and associated server processes come and
go. [From the PostgreSQL documentation]
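The lifecycle the documentation describes can be sketched as follows (a toy illustration in Python using os.fork, not PostgreSQL source; the client names are invented): the master process keeps running throughout, while per-client child processes come and go.

```python
# Master stays up; one short-lived child per client, as in the
# postmaster/backend lifecycle described above.
import os

client_pids = []
for client in ("client-1", "client-2"):
    pid = os.fork()
    if pid == 0:
        # Child: serve this one client, then exit -- like a backend
        # process that ends when its connection closes.
        os._exit(0)
    client_pids.append(pid)  # master keeps track and keeps listening

# Master: reap the children; it outlives every one of them.
for pid in client_pids:
    os.waitpid(pid, 0)
print("master pid %d outlived %d children" % (os.getpid(), len(client_pids)))
```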
Thanks all.
Regards,
Abdul Rehman
On Thu, Feb 12, 2009 at 1:21 AM, Serge Fonville
<serge.fonville@gmail.com> wrote:
I could not find the reason as to why this way has been chosen by the
developers
Because separate processes are much more robust than multiple threads.
And on Linux, the difference in performance is minimal. Note that some
OSes, like Windows and, to a lesser extent, Solaris, have significant
overhead for forking processes, and run multi-threaded apps much
faster.
Since any real database in a heavy-lifting situation is probably using a
connection pooler, the cost of starting a new process isn't a
big deal, because processes are no longer being started all the time.
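A minimal sketch of what a connection pooler buys you (hypothetical code, not pgbouncer; `fake_connect` stands in for an expensive connect/fork): connections are created once up front and then reused, so the per-process startup cost is paid rarely, no matter how many logical "logins" occur.

```python
# Toy connection pool: pay the connection/startup cost once, reuse forever.
import queue

class Pool:
    def __init__(self, factory, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())  # pay the startup cost up front

    def acquire(self):
        return self._q.get()  # blocks until a pooled connection is free

    def release(self, conn):
        self._q.put(conn)  # return for reuse instead of reconnecting

created = 0
def fake_connect():
    # Stands in for an expensive real connect (process fork + auth).
    global created
    created += 1
    return object()

pool = Pool(fake_connect, size=3)
for _ in range(10):        # ten "logins"...
    conn = pool.acquire()
    pool.release(conn)
print(created)  # 3 -- only three real connections ever created
```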