how much does a connection cost?

Started by Andy Kriger · almost 23 years ago · 3 messages · general
#1 Andy Kriger
akriger@greaterthanone.com

The default config for max_connections is 32.

We have a server where we host web applications, each with its own
connection pool. For every additional webapp, I'm supposing that we are
placing more and more demands on those 32 connections (dividing a fixed
resource among increased clients).

So, it would seem to make sense to increase the max_connections whenever a
new webapp is created. However, what is the drawback to having more
connections available? Is it memory? disk space? file handles? Is there a
practical limit?

If anyone can lend their experience with this (and correct any
misconceptions), I'd greatly appreciate it.

thx
andy

#2 Bruno Wolff III
bruno@wolff.to
In reply to: Andy Kriger (#1)
Re: how much does a connection cost?

On Wed, May 28, 2003 at 19:18:45 -0400,
Andy Kriger <akriger@greaterthanone.com> wrote:

> The default config for max_connections is 32.
>
> We have a server where we host web applications, each with its own
> connection pool. For every additional webapp, I'm supposing that we are
> placing more and more demands on those 32 connections (dividing a fixed
> resource among increased clients).
>
> So, it would seem to make sense to increase the max_connections whenever a
> new webapp is created. However, what is the drawback to having more
> connections available? Is it memory? disk space? file handles? Is there a
> practical limit?

It shouldn't be based on the number of web apps as much as on what the web
server needs. For example, if you run Apache using the prefork MPM, you will
want to make the number of allowed postgres connections greater than the
number of allowed Apache processes.
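A sketch of that matching, using the real directive names (MaxClients in Apache's prefork MPM, max_connections in postgresql.conf); the numbers themselves are illustrative, not recommendations:

```
# httpd.conf (prefork MPM) -- illustrative value
<IfModule prefork.c>
    MaxClients        150
</IfModule>

# postgresql.conf -- keep this above MaxClients, with some headroom
# for non-web clients (psql sessions, cron jobs, etc.)
max_connections = 160
```

The point is simply that if every Apache child can hold a database connection, max_connections must be at least MaxClients or requests will start failing with "too many clients" under load.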

#3 scott.marlowe
scott.marlowe@ihs.com
In reply to: Andy Kriger (#1)
Re: how much does a connection cost?

On Wed, 28 May 2003, Andy Kriger wrote:

> The default config for max_connections is 32.
>
> We have a server where we host web applications, each with its own
> connection pool. For every additional webapp, I'm supposing that we are
> placing more and more demands on those 32 connections (dividing a fixed
> resource among increased clients).
>
> So, it would seem to make sense to increase the max_connections whenever a
> new webapp is created. However, what is the drawback to having more
> connections available? Is it memory? disk space? file handles? Is there a
> practical limit?
>
> If anyone can lend their experience with this (and correct any
> misconceptions), I'd greatly appreciate it.

Setting max_connections higher costs very little until the actual number of
connections starts to climb; then it can cost more.

Since most people running postgresql have already increased the
shared_buffers setting (which has to be at least 2x max_connections), you
can usually just increase max_connections with no problem.
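The constraint Scott mentions can be sketched as follows; this mirrors the old (7.x-era) startup check that shared_buffers be at least twice max_connections, with a floor of 16 buffers. The exact floor is my reading of that era's docs, so treat it as an assumption:

```python
def min_shared_buffers(max_connections: int) -> int:
    """Smallest shared_buffers (in 8 kB buffers) the postmaster
    will accept for a given max_connections, per the old rule:
    at least 2 * max_connections, and never below 16."""
    return max(16, 2 * max_connections)

# Default max_connections = 32 implies a floor of 64 buffers (512 kB);
# raising max_connections to 128 pushes the floor to 256 buffers (2 MB).
floor_default = min_shared_buffers(32)
floor_raised = min_shared_buffers(128)
```

Since most tuned installs set shared_buffers far above these floors, raising max_connections rarely forces a shared_buffers change.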

As the number of connections in use starts to climb, the delta is the
difference between the size of each child process and the amount of the
child process that is shared memory.

9:31am up 58 days, 23:08, 3 users, load average: 0.72, 0.71, 0.67
211 processes: 210 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states: 0.1% user, 0.1% system, 0.0% nice, 99.0% idle
CPU1 states: 0.2% user, 4.2% system, 0.0% nice, 94.1% idle
Mem: 1543980K av, 1538204K used, 5776K free, 266492K shrd, 12564K buff
Swap: 2048208K av, 470028K used, 1578180K free 592840K cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
19912 postgres 9 0 4764 4764 4684 S 0.0 0.3 1:15 postmaster
19913 postgres 9 0 5744 5744 4704 S 0.0 0.3 0:10 postmaster
19914 postgres 9 0 5380 5380 4696 S 0.0 0.3 0:11 postmaster
818 postgres 9 0 39764 38M 38256 S 0.0 2.5 0:01 postmaster

for most of these processes, the actual delta is pretty small (1 meg or
less) so if I had more connections running at the same time, it wouldn't
be a big issue.
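The delta Scott describes is RSS minus SHARE, i.e. each backend's private (non-shared) memory. Working it out from the top listing above (values in kB, with the 38M RSS taken as 38912 kB, an approximation):

```python
# (RSS, SHARE) in kB for each postmaster PID from the top output
backends = {
    19912: (4764, 4684),
    19913: (5744, 4704),
    19914: (5380, 4696),
    818:   (38912, 38256),
}

# Private memory per backend = RSS - SHARE
deltas = {pid: rss - share for pid, (rss, share) in backends.items()}
```

Every delta comes out around a megabyte or less, which is why adding more mostly-idle connections costs little real memory.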

The nice thing about postgresql is that the shared memory maximum is
pretty much fixed, so the per-backend costs don't rise very fast, as they
would in setups where each backend creates its own cache in memory.

What can cost you in terms of space used is if you have sort_mem set too
high and all your backends start doing large sorts. Your machine can hit
a catastrophic swap storm that will send performance into the dumper.
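A back-of-the-envelope version of that swap-storm scenario, assuming every backend runs one large sort at once (a single query can in fact use sort_mem more than once, so this is a lower bound); the sort_mem value and connection count are hypothetical, while the RAM figure is the 1.5 GB box from the top output above:

```python
max_connections = 128
sort_mem_kb = 65536        # 64 MB per sort -- set "too high"
ram_kb = 1543980           # total memory on the box shown above

# Worst case: every connection sorting at once
worst_case_kb = max_connections * sort_mem_kb

# If the worst case exceeds physical RAM, the box will start swapping
swap_storm = worst_case_kb > ram_kb
```

Here 128 backends at 64 MB each would try to use about 8 GB of sort space on a 1.5 GB machine, which is exactly the kind of setup that swaps itself to death.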