Current best practice for maximum shared_buffers settings on big hardware?

Started by Bill Moran on 2017-05-24 · 9 messages · general
#1 Bill Moran
wmoran@potentialtech.com

A few years ago, I was working with "big" servers. At least, they were
big for that age, with *128G* of RAM!!!1 Holy mackerel, right?!!?

Anyway, at that time, I tried allocating 64G to shared buffers and we
had a bunch of problems with inconsistent performance, including "stall"
periods where the database would stop responding for 2 or 3 seconds.
After trying all sorts of tuning options that didn't help, the problem
finally went away after reducing shared_buffers to 32G. I speculated, at
the time, that the shared buffer code hit performance issues managing
that much memory, but I never had the opportunity to really follow up
on it.

Now, this was back in 2012 or thereabouts. Seems like another lifetime.
Probably PostgreSQL 9.2 at that time.

Nowadays, 128G is a "medium sized" server. I just got access to one
with 775G. It would appear that I could order from Dell with 1.5T of
RAM if I'm willing to sell my house ...

Yet, all the docs and advice I'm able to find online seem to have been
written pre 2008 and say things like "if your server has more than 1G
of RAM ..."

I feel like it's time for a documentation update ;) But I, personally
don't have the experience recently enough to know what sort of
recommendations to make.

What are people's experience with modern versions of Postgres on hardware
this size? Do any of the experts have specific recommendations on large
shared_buffers settings? Any developers care to comment on any work
that's been done since 2012 to make large values work better?

--
Bill Moran <wmoran@potentialtech.com>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

#2 Steven Chang
stevenchang1213@gmail.com
In reply to: Bill Moran (#1)
Re: Current best practice for maximum shared_buffers settings on big hardware?

Hello,

At most 40% of total memory; the official docs say so as well. You can
verify it using pg_prewarm and pgfincore.

BTW, is NUMA enabled? If so, extra care is necessary when starting the
database cluster.


#3 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Steven Chang (#2)
Re: Current best practice for maximum shared_buffers settings on big hardware?

2017-05-24 15:02 GMT+02:00 stevenchang1213 <stevenchang1213@gmail.com>:

hello,
at most 40% total memory, official doc also says so.
you can testify it using pg_prewarm and pgfincore .

There is a strong dependency on the use case. The 40% of total memory
guidance applies to low-memory servers, up to about 64GB.

A high shared_buffers setting requires tuning other options as well, such
as the background writer.

Regards

Pavel


#4 Bill Moran
wmoran@potentialtech.com
In reply to: Steven Chang (#2)
Re: Current best practice for maximum shared_buffers settings on big hardware?

On Wed, 24 May 2017 21:02:45 +0800
stevenchang1213 <stevenchang1213@gmail.com> wrote:

hello, at most 40% total memory, official doc also says so.

The docs say, "it is unlikely that an allocation of more than 40% of RAM
to shared_buffers will work better than a smaller amount" which is a far
cry from defining a maximum amount, or even warning that there are dangers
for high values. Also, that is in the same paragraph that starts with
"If you have a dedicated database server with 1GB or more of RAM ..." so
I don't need to check the RCS logs to predict that that paragraph hasn't
been updated in a while.

you can testify it using pg_prewarm and pgfincore.

Not sure how those tools are going to predict whether I'm going to see
database stalls or other performance inversions from adding more
shared_buffers. For the purposes of seeing if shared_buffers are being
used effectively, I need only graph the block hits and misses to see
that a huge number of cache pages are satisfying requests, but also that
the number of cache misses is still high enough for me to know that my
working set does _not_ fit in shared_buffers. What I _don't_ know is
whether increasing shared_buffers (to say, 128G on a 750G machine) is
still going to result in the same, weird performance inversion I saw
back in the 9.2 days.
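The kind of graphing described above can be driven from the blks_hit and
blks_read counters in pg_stat_database; a minimal sketch (counters
accumulate since the last stats reset, and blks_read includes blocks that
were actually served from the OS page cache, so this measures
shared_buffers effectiveness only):

```sql
-- Shared-buffers hit ratio per database. blks_read counts blocks
-- requested from the kernel, whether they came from the OS page
-- cache or from disk.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
           AS shared_buffers_hit_ratio
FROM pg_stat_database
ORDER BY blks_read DESC;
```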

btw, numa supported? if so, extra care is necessary when starting db cluster.

Egad how I hate Linux's default NUMA policy. But I do know how to manage
it, and it's not part of the issue.
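For readers who haven't fought this before, the usual mitigations on a
NUMA box are to interleave the postmaster's allocations across nodes and
to disable zone reclaim; a sketch (the data directory path is a
placeholder, and the sysctl needs root):

```shell
# Spread shared_buffers pages evenly across NUMA nodes instead of
# filling the postmaster's local node first:
numactl --interleave=all pg_ctl start -D /path/to/data

# Don't reclaim a node's page cache just to satisfy local allocations:
sysctl -w vm.zone_reclaim_mode=0
```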


--
Bill Moran <wmoran@potentialtech.com>


#5 Steven Chang
stevenchang1213@gmail.com
In reply to: Bill Moran (#4)
Re: Current best practice for maximum shared_buffers settings on big hardware?

1. Cache misses can happen because large-volume data I/O goes through a
256kB ring buffer rather than the main shared buffer pool; that is why
warming tools were developed to cover the gap.
2. pg_prewarm has shipped in the contrib modules since 9.4 (there is a
backported patch for 9.2); it loads relation data into shared buffers or
the OS cache.
3. pgfincore is an external module (packaged, e.g., as an RPM for Red Hat)
providing a Postgres extension that reports relation cache statistics in
the OS and offers more control, such as evicting a relation from the OS
cache.

I ran into the same situation as yours, also on 9.2, at least two years
ago; that is why I studied shared buffers in depth to figure it out. I
suggest searching the web for PostgreSQL shared buffer issues; you will
find what you want.

Steven
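A sketch of the pg_prewarm usage mentioned above (9.4+; the table name is
hypothetical and needs a live server):

```sql
-- pg_prewarm ships in contrib since 9.4
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- 'buffer' loads the relation's blocks into shared_buffers;
-- 'prefetch' or 'read' would warm the OS page cache instead.
SELECT pg_prewarm('my_big_table', 'buffer');
```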

#6 Steven Chang
stevenchang1213@gmail.com
In reply to: Steven Chang (#5)
Re: Current best practice for maximum shared_buffers settings on big hardware?

For your reference:
https://madusudanan.com/blog/understanding-postgres-caching-in-depth/


#7 Justin Pryzby
pryzby@telsasoft.com
In reply to: Bill Moran (#1)
Re: Current best practice for maximum shared_buffers settings on big hardware?

On Wed, May 24, 2017 at 08:24:15AM -0400, Bill Moran wrote:

... I tried allocating 64G to shared buffers and we had a bunch of problems
with inconsistent performance, including "stall" periods where the database
would stop responding for 2 or 3 seconds. After trying all sorts of tuning
options that didn't help, the problem finally went away after reducing
shared_buffers to 32G. I speculated, at the time, that the shared buffer code
hit performance issues managing that much memory, but I never had the
opportunity to really follow up on it.

I think you were hitting an issue related to "kernel shared memory" and maybe
"transparent huge pages".

I was able to work around similar issues with ~32GB allocations to
QEMU/KVM running on something like kernel 3.13. I didn't spend time to
narrow down the problem, and I don't know whether the behavior is better
with recent kernels.

/sys/kernel/mm/ksm/run=2
... and maybe also:
/sys/kernel/mm/transparent_hugepage/defrag=madvise
/sys/kernel/mm/ksm/merge_across_nodes=0
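Spelled out as commands, in case the knob values above are cryptic (these
need root, and persisting them across reboots is up to your init system):

```shell
# 2 = stop KSM and un-merge all currently merged pages
echo 2 > /sys/kernel/mm/ksm/run
# Only defragment THP for regions that opted in via madvise()
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
# Don't merge identical pages across NUMA nodes
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes
```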

Justin


#8 Scott Marlowe
scott.marlowe@gmail.com
In reply to: Bill Moran (#1)
Re: Current best practice for maximum shared_buffers settings on big hardware?

On Wed, May 24, 2017 at 6:24 AM, Bill Moran <wmoran@potentialtech.com> wrote:


My most recent employment was working on machines with 512GB to 1TB
memory. We never saw real performance increases past 10GB or so of
shared memory. That was with pg 9.2 and testing on 9.6. The 512GB
machines were processing something on the order of 500 or so writes
per second and 3k to 5k reads per second. Under testing we were able
to push through 18k writes and reads per second on those machines.
These dbs were in the 4 to 5TB range so could not fit in memory.
Letting the Linux kernel (3.11 or 3.13 at the time) handle the caching
seemed to give the best, most reliable performance. These machines ran big
RAID-5 arrays (6 to 7 TB) with write caching off and could read from
the I/O subsystem really fast, so mostly we were bound by I/O performance,
not memory caching.

If you allocate 50% of memory to shared buffers then you're basically
caching everything twice, once in kernel cache and once in shared
memory. The general consensus is that you're better off going one way
or another, either let linux do the caching work, or crank up the
shared memory to 90% or so and let postgresql do it. My experience has
been that the kernel wins almost every time.
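As a concrete illustration of the "let Linux do the caching" end of that
trade-off, a hypothetical postgresql.conf fragment for a 512GB box (the
numbers are illustrative, not a recommendation):

```
shared_buffers = 10GB           # modest, per the experience above
effective_cache_size = 400GB    # planner hint for how much the OS
                                # page cache is likely holding
```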

But about 95% of all my testing and 100% of my production experience
is on 3.13 kernels with pgsql 9.2 on top of it. 9.6 and 10 etc may
well be much faster with bigger shared memory.


#9 Steven Chang
stevenchang1213@gmail.com
In reply to: Scott Marlowe (#8)
Re: Current best practice for maximum shared_buffers settings on big hardware?

Hello,

Read it and test it, and you will know why the official docs suggest no
more than 40% of total memory for shared_buffers:

http://raghavt.blogspot.tw/2012/04/caching-in-postgresql.html

And this is also a very good book -- http://www.interdb.jp/pg/index.html

An IT job is just repeated study and testing to develop your own
experience.

Best Regards,
Steven
