Pre-allocation of shared memory ...

Started by Hans-Jürgen Schönig, almost 23 years ago · 64 messages · hackers
#1Hans-Jürgen Schönig
postgres@cybertec.at

There is a problem which occurs from time to time and which is a bit
nasty in business environments.
When the shared memory is eaten up by some application such as Apache,
PostgreSQL will refuse to do what it should do because there is no
memory around. To many people this looks like a problem related to
stability. Also, it affects the availability of the database itself.

I was thinking of a solution which might help to get around this problem:
a flag to tell PostgreSQL that XXX megs of shared memory should be
preallocated. The database would then be sure that there is always
enough memory around. The downside is that PostgreSQL would have to
care more about its memory consumption.

Of course, the best solution is to put PostgreSQL on a separate machine
but many people don't do it so we have to live with memory leaks caused
by other software (we have just seen a nasty one in mod_perl).

Does it make sense?

Regards,

Hans

--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706; +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at

#2Bruce Momjian
bruce@momjian.us
In reply to: Hans-Jürgen Schönig (#1)
Re: Pre-allocation of shared memory ...

We already pre-allocate all shared memory and resources on postmaster
start.

---------------------------------------------------------------------------

Hans-Jürgen Schönig wrote:

Show quoted text

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#3Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Bruce Momjian (#2)
Re: Pre-allocation of shared memory ...

Bruce Momjian wrote:

We already pre-allocate all shared memory and resources on postmaster
start.

I guess we allocate memory when a backend starts, don't we?
Or do we allocate when the instance starts?

I have two explanations for the following behaviour:

a. a bug
b. not enough shared memory

WARNING: Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

The problem is that this only happens with mod_perl and Apache on the
same machine, so I thought it had to do with a known memory leak in
mod_perl/Apache. It happens after about two weeks (it seems to occur
regularly).

Shridhar Daithankar wrote:

Are you suggesting pre-acquiring resources like Oracle does? Like you start a
database instance and 350MB of memory is gone?

One thing I love about PostgreSQL is that it does not do any such silly thing.
I agree that in the case you suggest it makes sense.

If PostgreSQL goes that way at all, I would like to see it configurable. I
would rather remove an app from a machine than let it stamp on other apps'
feet.

Shridhar: yes, any preallocation of memory would have to be configurable
(default = off).

--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706; +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hans-Jürgen Schönig (#3)
Re: Pre-allocation of shared memory ...

Hans-Jürgen Schönig <hs@cybertec.at> writes:

I have two explanations for the following behaviour:
a. a bug
b. not enough shared memory

WARNING: Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.

Is this a Linux machine? If so, the true explanation is probably (c):
the kernel is kill 9'ing randomly-chosen database processes whenever
it starts to feel low on memory. I would suggest checking the
postmaster log to determine the signal number the failed backends are
dying with. The client-side message does not give nearly enough info
to debug such problems.

There is also possibility (d): you have some bad RAM that is located in
an address range that doesn't get used until the machine is under full
load. But if the backends are dying with signal 9 then I'll take the
kernel-kill theory.

AFAIK the only good way around this problem is to use another OS with a
more rational design for handling low-memory situations. No other Unix
does anything remotely as brain-dead as what Linux does. Or bug your
favorite Linux kernel hacker to fix the kernel.

regards, tom lane

#5Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#4)
Re: Pre-allocation of shared memory ...

Tom Lane wrote:

AFAIK the only good way around this problem is to use another OS with a
more rational design for handling low-memory situations. No other Unix
does anything remotely as brain-dead as what Linux does. Or bug your
favorite Linux kernel hacker to fix the kernel.

Is there no sysctl way to disable such kills?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#6Doug McNaught
doug@mcnaught.org
In reply to: Bruce Momjian (#5)
Re: Pre-allocation of shared memory ...

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Tom Lane wrote:

AFAIK the only good way around this problem is to use another OS with a
more rational design for handling low-memory situations. No other Unix
does anything remotely as brain-dead as what Linux does. Or bug your
favorite Linux kernel hacker to fix the kernel.

Is there no sysctl way to disable such kills?

The -ac kernel patches from Alan Cox have a sysctl to control memory
overcommit--you can set it to track memory usage and fail allocations
when memory runs out, rather than the random kill behavior. I'm not
sure whether those have made it into the stock kernel yet, but the
vendor kernels (such as Red Hat's) might have it too.

-Doug

#7Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Doug McNaught (#6)
Re: Pre-allocation of shared memory ...

On Wed, Jun 11, 2003 at 07:35:20PM -0400, Doug McNaught wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Is there no sysctl way to disable such kills?

The -ac kernel patches from Alan Cox have a sysctl to control memory
overcommit--you can set it to track memory usage and fail allocations
when memory runs out, rather than the random kill behavior. I'm not
sure whether those have made it into the stock kernel yet, but the
vendor kernels (such as Red Hat's) might have it too.

Yeah, I see it in the Mandrake kernel. But it's not in stock 2.4.19, so
you can't assume everybody has it.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"What do the years matter? What really matters is realizing that, when
all is said and done, the best age of life is being alive" (Mafalda)

#8Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Bruce Momjian (#5)
Re: Pre-allocation of shared memory ...

Yeah, I see it in the Mandrake kernel. But it's not in stock 2.4.19, so
you can't assume everybody has it.

We had this problem on a recent version of good old Slackware.
I think we also had it on RedHat 8 or so.

Doing this kind of killing is definitely a bad habit. I thought it had
to do with something else, so my proposal for pre-allocation seems to
be pretty obsolete ;).

Thanks a lot.

Hans

--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706; +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at

#9Andrew Dunstan
andrew@dunslane.net
In reply to: Hans-Jürgen Schönig (#8)
Re: Pre-allocation of shared memory ...

On this machine (RH9, kernel 2.4.20-18.9) the docs say (in
/usr/src/linux-2.4/Documentation/vm/overcommit-accounting ):

-----------------
The Linux kernel supports four overcommit handling modes

0 - Heuristic overcommit handling. Obvious overcommits of
address space are refused. Used for a typical system. It
ensures a seriously wild allocation fails while allowing
overcommit to reduce swap usage

1 - No overcommit handling. Appropriate for some scientific
applications

2 - (NEW) strict overcommit. The total address space commit
for the system is not permitted to exceed swap + half ram.
In almost all situations this means a process will not be
killed while accessing pages but only by malloc failures
that are reported back by the kernel mmap/brk code.

3 - (NEW) paranoid overcommit The total address space commit
for the system is not permitted to exceed swap. The machine
will never kill a process accessing pages it has mapped
except due to a bug (ie report it!)
----------------------

So maybe

sysctl -w vm.overcommit_memory=3

is what's needed? I guess you might pay a performance hit for doing that,
though.

andrew

Show quoted text


#10Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#9)
Re: Pre-allocation of shared memory ...

A couple of points:

. It is probably a good idea to do this via /etc/sysctl.conf, which is
applied early by the init scripts (on RH9 it is done in the network startup
file, for some reason).

. The setting is not available on all kernel versions AFAIK. The admin needs
to check the docs. I have no idea when this went into the kernel, and no
time to spend finding out. Even if we knew, vendor kernels might have picked
it up at other odd times - vendors are often ahead of the officially
released kernels.

Andrew

Bruce wrote:

Show quoted text


#11Jon Lapham
lapham@extracta.com.br
In reply to: Tom Lane (#4)
Re: Pre-allocation of shared memory ...

Tom Lane wrote:

Is this a Linux machine? If so, the true explanation is probably (c):
the kernel is kill 9'ing randomly-chosen database processes whenever
it starts to feel low on memory. I would suggest checking the
postmaster log to determine the signal number the failed backends are
dying with. The client-side message does not give nearly enough info
to debug such problems.

AFAIK the only good way around this problem is to use another OS with a
more rational design for handling low-memory situations. No other Unix
does anything remotely as brain-dead as what Linux does. Or bug your
favorite Linux kernel hacker to fix the kernel.

Tom-

Just curious. What would a rationally designed OS do in an out of
memory situation?

It seems like from the discussions I've read about the subject there
really is no rational solution to this irrational problem.

Some solutions such as "suspend process, write image to file" and
"increase swap space" assume disk space, which is obviously not
guaranteed to be available.

--
-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---
Jon Lapham <lapham@extracta.com.br> Rio de Janeiro, Brasil
Work: Extracta Moléculas Naturais SA http://www.extracta.com.br/
Web: http://www.jandr.org/
***-*--*----*-------*------------*--------------------*---------------

#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jon Lapham (#11)
Re: Pre-allocation of shared memory ...

Jon Lapham <lapham@extracta.com.br> writes:

Just curious. What would a rationally designed OS do in an out of
memory situation?

Fail malloc() requests.

The sysctl docs that Andrew Dunstan just provided give some insight into
the problem: the default behavior of Linux is to promise more virtual
memory than it can actually deliver. That is, it allows malloc to
succeed even when it's not going to be able to actually provide the
address space when push comes to shove. When called to stand and
deliver, the kernel has no way to report failure (other than perhaps a
software-induced SIGSEGV, which would hardly be an improvement). So it
kills the process instead. Unfortunately, the process that happens to
be in the line of fire at this point could be any process, not only the
one that made unreasonable memory demands.

This is perhaps an okay behavior for desktop systems being run by
people who are accustomed to Microsoft-like reliability. But to make it
the default is brain-dead, and to make it the only available behavior
(as seems to have been true until very recently) defies belief. The
setting now called "paranoid overcommit" is IMHO the *only* acceptable
one for any sort of server system. With anything else, you risk having
critical userspace daemons killed through no fault of their own.

regards, tom lane

#13Jon Lapham
lapham@extracta.com.br
In reply to: Tom Lane (#12)
Re: Pre-allocation of shared memory ...

Tom Lane wrote:

[snip]
The
setting now called "paranoid overcommit" is IMHO the *only* acceptable
one for any sort of server system. With anything else, you risk having
critical userspace daemons killed through no fault of their own.

Wow. Thanks for the info. I found the documentation you are referring
to in Documentation/vm/overcommit-accounting (on a stock RH9 machine).

It seems that the overcommit policy is set via the sysctl
`vm.overcommit_memory'. So...

[root@bilbo src]# sysctl -a | grep -i overcommit
vm.overcommit_memory = 0

...the default seems to be "Heuristic overcommit handling". It seems
that what we want is "vm.overcommit_memory = 3" for paranoid overcommit.

Thanks for getting to the bottom of this Tom. It *is* insane that the
default isn't "paranoid overcommit".

--
-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---
Jon Lapham <lapham@extracta.com.br> Rio de Janeiro, Brasil
Work: Extracta Moléculas Naturais SA http://www.extracta.com.br/
Web: http://www.jandr.org/
***-*--*----*-------*------------*--------------------*---------------

#14Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#12)
Re: Pre-allocation of shared memory ...

What really kills [:-)] me is that they allocate memory assuming I will
not be using it all, then terminate the executable in an unrecoverable
way when I go to use the memory.

And, they make a judgement on users who don't want this by calling them
"paranoid".

I will add something to the docs about this.

---------------------------------------------------------------------------

Tom Lane wrote:

Show quoted text

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#14)
Re: Pre-allocation of shared memory ...

Bruce Momjian <pgman@candle.pha.pa.us> writes:

What really kills [:-)] me is that they allocate memory assuming I will
not be using it all, then terminate the executable in an unrecoverable
way when I go to use the memory.

To be fair, I'm probably misstating things by referring to malloc().
The big problem probably comes from fork() with copy-on-write --- the
kernel has no good way to estimate how much of the shared address space
will eventually become private modified copies, but it can be forgiven
for wanting to make less than the worst-case assumption.

Still, if you are wanting to run a reliable server, I think worst-case
assumption is exactly what you want. Swap space is cheap, and there's
no reason you shouldn't have enough swap to support the worst-case
situation. If the swap area goes largely unused, that's fine.

The policy they're calling "paranoid overcommit" (don't allocate more
virtual memory than you have swap) is as far as I know the standard on
all Unixen other than Linux; certainly it's the traditional behavior.

regards, tom lane

#16Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#15)
Re: Pre-allocation of shared memory ...

OK, doc patch attached and applied. Improvements?

---------------------------------------------------------------------------

Tom Lane wrote:

Show quoted text

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Attachments:

/bjm/diff (text/plain): +10 -0
#17Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#16)
Re: Pre-allocation of shared memory ...

Bruce Momjian <pgman@candle.pha.pa.us> writes:

OK, doc patch attached and applied. Improvements?

I think it would be worth spending another sentence to tell people
exactly what the symptom looks like, ie, backends dying with signal 9.

regards, tom lane

#18Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#9)
Re: Pre-allocation of shared memory ...

I have added the following sentence to the docs too:

Note, you will need enough swap space to cover all your memory
needs.

I still wish Linux would just fail the fork/malloc when memory is low,
rather than requiring swap for everything _or_ overcommitting. I wonder
if making a unified buffer cache just made that too hard to do.

---------------------------------------------------------------------------

Andrew Dunstan wrote:

Show quoted text

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#19Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#17)
Re: Pre-allocation of shared memory ...

OK, new text is:

<para>
Linux has poor default memory overcommit behavior. Rather than
failing if it can not reserve enough memory, it returns success,
but later fails when the memory can't be mapped and terminates
the application with <literal>kill -9</>. To prevent unpredictable
process termination, use:
<programlisting>
sysctl -w vm.overcommit_memory=3
</programlisting>
Note, you will need enough swap space to cover all your memory needs.
</para>
</listitem>
</varlistentry>

---------------------------------------------------------------------------

Tom Lane wrote:

Show quoted text

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#20Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#10)
Re: Pre-allocation of shared memory ...

Well, let's see what feedback we get.

---------------------------------------------------------------------------

Andrew Dunstan wrote:

Show quoted text

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#21Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#15)
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#21)
#23Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#22)
#24Jeroen T. Vermeulen
In reply to: Bruce Momjian (#23)
#25Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#14)
#26Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeroen T. Vermeulen (#24)
#27Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Tom Lane (#26)
#28Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#26)
#29Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#28)
#30Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#27)
#31Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#29)
#32Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#30)
#33Ron Mayer
ron@intervideo.com
In reply to: Jeroen T. Vermeulen (#24)
#34Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#32)
#35Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Ron Mayer (#33)
#36Shridhar Daithankar
shridhar_daithankar@persistent.co.in
In reply to: Bruce Momjian (#16)
In reply to: Ron Mayer (#33)
#38Bruce Momjian
bruce@momjian.us
In reply to: Shridhar Daithankar (#36)
#39Patrick Welche
prlw1@newn.cam.ac.uk
In reply to: Bruce Momjian (#31)
#40Bruce Momjian
bruce@momjian.us
In reply to: Patrick Welche (#39)
#41Jeroen T. Vermeulen
In reply to: Bruce Momjian (#40)
#42Josh Berkus
josh@agliodbs.com
In reply to: Jeroen T. Vermeulen (#41)
#43Bruce Momjian
bruce@momjian.us
In reply to: Josh Berkus (#42)
#44Lamar Owen
lamar.owen@wgcr.org
In reply to: Josh Berkus (#42)
#45Bruce Momjian
bruce@momjian.us
In reply to: Lamar Owen (#44)
#46Nigel J. Andrews
nandrews@investsystems.co.uk
In reply to: Lamar Owen (#44)
#47Bruce Momjian
bruce@momjian.us
In reply to: Nigel J. Andrews (#46)
In reply to: Lamar Owen (#44)
#49Lamar Owen
lamar.owen@wgcr.org
In reply to: Nigel J. Andrews (#46)
#50Lamar Owen
lamar.owen@wgcr.org
In reply to: Lamar Owen (#49)
#51Andrew Dunstan
andrew@dunslane.net
In reply to: Nigel J. Andrews (#46)
#52Andrew Dunstan
andrew@dunslane.net
In reply to: Nigel J. Andrews (#46)
#53Matthew Kirkwood
matthew@hairy.beasts.org
In reply to: Andrew Dunstan (#51)
#54Kurt Roeckx
In reply to: Matthew Kirkwood (#53)
#55Matthew Kirkwood
matthew@hairy.beasts.org
In reply to: Kurt Roeckx (#54)
#56Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#51)
#57Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#56)
#58Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#56)
#59Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#51)
#60Lamar Owen
lamar.owen@wgcr.org
In reply to: Andrew Dunstan (#56)
#61Shridhar Daithankar
shridhar_daithankar@persistent.co.in
In reply to: Andrew Dunstan (#56)
#62Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#51)
#63Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Bruce Momjian (#31)
#64Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Bruce Momjian (#45)