Time to up bgwriter_lru_maxpages?

Started by Jim Nasby, over 9 years ago · 31 messages · pgsql-hackers
#1Jim Nasby
Jim.Nasby@BlueTreble.com

With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
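Jim's ceiling follows directly from the two GUC limits he cites; a quick sketch of the arithmetic (a bare calculation, not PostgreSQL code):

```python
# Back-of-the-envelope check of the ceiling described above: with
# bgwriter_lru_maxpages capped at 1000 and bgwriter_delay at its 10 ms
# minimum (100 rounds per second), the background writer cannot clean
# more than ~780 MiB/s of 8 kB pages.
BLCKSZ = 8192                # default PostgreSQL block size, in bytes
max_pages_per_round = 1000   # pre-patch upper bound of bgwriter_lru_maxpages
rounds_per_second = 100      # 1 s / 10 ms minimum of bgwriter_delay

max_bytes_per_second = max_pages_per_round * rounds_per_second * BLCKSZ
print(f"{max_bytes_per_second / 2**20:.0f} MiB/s")  # -> 781 MiB/s
```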

#2Devrim GÜNDÜZ
devrim@gunduz.org
In reply to: Jim Nasby (#1)
Re: Time to up bgwriter_lru_maxpages?

Hi,

On Mon, 2016-11-28 at 11:40 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with 
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?

+1 for that. I've seen many cases where we need more than 1000.

Regards,
--
Devrim Gündüz
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR

#3Joshua D. Drake
jd@commandprompt.com
In reply to: Jim Nasby (#1)
Re: Time to up bgwriter_lru_maxpages?

On 11/28/2016 11:40 AM, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?

Considering a single SSD can do 70% of that limit, I would say yes.

JD

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


#4Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Joshua D. Drake (#3)
Re: Time to up bgwriter_lru_maxpages?

On 11/28/16 11:53 AM, Joshua D. Drake wrote:

On 11/28/2016 11:40 AM, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?

Considering a single SSD can do 70% of that limit, I would say yes.

Next question becomes... should there even be an upper limit?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


#5Michael Paquier
michael@paquier.xyz
In reply to: Jim Nasby (#4)
Re: Time to up bgwriter_lru_maxpages?

On Tue, Nov 29, 2016 at 6:20 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 11/28/16 11:53 AM, Joshua D. Drake wrote:

On 11/28/2016 11:40 AM, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?

Considering a single SSD can do 70% of that limit, I would say yes.

Next question becomes... should there even be an upper limit?

Looking at the log history, the current default dates from cfeca621; it
would be time to raise the bar a little bit more. Even an extremely high
value could make sense for testing.
--
Michael


#6Jeff Janes
jeff.janes@gmail.com
In reply to: Jim Nasby (#4)
Re: Time to up bgwriter_lru_maxpages?

On Mon, Nov 28, 2016 at 1:20 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 11/28/16 11:53 AM, Joshua D. Drake wrote:

On 11/28/2016 11:40 AM, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?

Considering a single SSD can do 70% of that limit, I would say yes.

Next question becomes... should there even be an upper limit?

Where the contortions needed to prevent calculation overflow become
annoying?

I'm not a big fan of nannyism in general, but the limits on this parameter
seem particularly pointless. You can't write out more buffers than exist
in the dirty state, nor more than implied by bgwriter_lru_multiplier. So
what is really the worst that can happen if you make it too high?

Cheers,

Jeff
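Jeff's point is that the per-round write target is driven by recent buffer allocations times bgwriter_lru_multiplier, with bgwriter_lru_maxpages acting only as a final cap. An illustrative sketch of that clamp (names here are hypothetical, not the actual BgBufferSync() variables):

```python
# Illustrative model of the clamp Jeff describes: the bgwriter aims at
# recent_alloc * bgwriter_lru_multiplier pages per round, and
# bgwriter_lru_maxpages merely caps the result.
def pages_to_write(recent_alloc, lru_multiplier=2.0, lru_maxpages=100):
    target = int(recent_alloc * lru_multiplier)  # allocation-driven target
    return min(target, lru_maxpages)             # maxpages is only a ceiling

# With few allocations, the cap never binds, however high it is set:
print(pages_to_write(recent_alloc=30, lru_maxpages=10_000_000))  # -> 60
# With heavy allocation, the cap is what limits the round:
print(pages_to_write(recent_alloc=10_000, lru_maxpages=1_000))   # -> 1000
```

This is why raising the cap is argued to be harmless: workloads that allocate few buffers are unaffected no matter how high it goes.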

#7Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Jeff Janes (#6)
Re: Time to up bgwriter_lru_maxpages?

On 11/29/16 9:58 AM, Jeff Janes wrote:

Considering a single SSD can do 70% of that limit, I would say yes.

Next question becomes... should there even be an upper limit?

Where the contortions needed to prevent calculation overflow become
annoying?

I'm not a big fan of nannyism in general, but the limits on this
parameter seem particularly pointless. You can't write out more buffers
than exist in the dirty state, nor more than implied
by bgwriter_lru_multiplier. So what is really the worst that can happen
if you make it too high?

Attached is a patch that ups the limit to INT_MAX / 2, which is the same
as shared_buffers.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)

Attachments:

lru_maxpages.patch (text/plain; charset=UTF-8) — +2 −2
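For anyone following along, the settings under discussion look like this in postgresql.conf (illustrative values showing the pre-patch limits, not recommendations):

```
# postgresql.conf — illustrative bgwriter settings
bgwriter_delay = 10ms            # minimum allowed; 100 wakeups/sec
bgwriter_lru_maxpages = 1000     # the pre-patch ceiling discussed in this thread
bgwriter_lru_multiplier = 2.0    # target = recent allocations * multiplier
```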
#8Robert Haas
robertmhaas@gmail.com
In reply to: Jim Nasby (#7)
Re: Time to up bgwriter_lru_maxpages?

On Tue, Jan 31, 2017 at 5:07 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 11/29/16 9:58 AM, Jeff Janes wrote:

Considering a single SSD can do 70% of that limit, I would say
yes.

Next question becomes... should there even be an upper limit?

Where the contortions needed to prevent calculation overflow become
annoying?

I'm not a big fan of nannyism in general, but the limits on this
parameter seem particularly pointless. You can't write out more buffers
than exist in the dirty state, nor more than implied
by bgwriter_lru_multiplier. So what is really the worst that can happen
if you make it too high?

Attached is a patch that ups the limit to INT_MAX / 2, which is the same as
shared_buffers.

This looks fine to me.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#9Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Robert Haas (#8)
Re: Time to up bgwriter_lru_maxpages?

On 2/1/17 10:27 AM, Robert Haas wrote:

On Tue, Jan 31, 2017 at 5:07 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 11/29/16 9:58 AM, Jeff Janes wrote:

Considering a single SSD can do 70% of that limit, I would say
yes.

Next question becomes... should there even be an upper limit?

Where the contortions needed to prevent calculation overflow become
annoying?

I'm not a big fan of nannyism in general, but the limits on this
parameter seem particularly pointless. You can't write out more buffers
than exist in the dirty state, nor more than implied
by bgwriter_lru_multiplier. So what is really the worst that can happen
if you make it too high?

Attached is a patch that ups the limit to INT_MAX / 2, which is the same as
shared_buffers.

This looks fine to me.

If someone wants to proactively commit this, the CF entry is
https://commitfest.postgresql.org/13/979/. (BTW, the Jan. CF is still
showing as in-progress...)
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


#10Michael Paquier
michael@paquier.xyz
In reply to: Jim Nasby (#9)
Re: Time to up bgwriter_lru_maxpages?

On Thu, Feb 2, 2017 at 7:01 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 2/1/17 10:27 AM, Robert Haas wrote:

This looks fine to me.

This could go without the comments; they are likely going to be
forgotten if any updates happen in the future.

If someone wants to proactively commit this, the CF entry is
https://commitfest.postgresql.org/13/979/.
(BTW, the Jan. CF is still showing as in-progress...)

WIP.
--
Michael


#11Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Michael Paquier (#10)
Re: Time to up bgwriter_lru_maxpages?

On 2/1/17 3:36 PM, Michael Paquier wrote:

On Thu, Feb 2, 2017 at 7:01 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 2/1/17 10:27 AM, Robert Haas wrote:

This looks fine to me.

This could go without the comments; they are likely going to be
forgotten if any updates happen in the future.

I'm confused... I put the comments in there so that if max shared
buffers ever changed, the other one would hopefully be updated as well.

Speaking of which... I have a meeting in 15 minutes to discuss moving to
a server with 4TB of memory. With current limits shared buffers maxes at
16TB, which isn't all that far in the future. While 16TB of shared
buffers might not be a good idea, it's not going to be terribly long
before we start getting questions about it.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


#12Michael Paquier
michael@paquier.xyz
In reply to: Jim Nasby (#11)
Re: Time to up bgwriter_lru_maxpages?

On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Speaking of which... I have a meeting in 15 minutes to discuss moving to a
server with 4TB of memory. With current limits shared buffers maxes at 16TB,
which isn't all that far in the future. While 16TB of shared buffers might
not be a good idea, it's not going to be terribly long before we start
getting questions about it.

Time for int64 GUCs?
--
Michael


#13Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#12)
Re: Time to up bgwriter_lru_maxpages?

On 2017-02-02 09:22:46 +0900, Michael Paquier wrote:

On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Speaking of which... I have a meeting in 15 minutes to discuss moving to a
server with 4TB of memory. With current limits shared buffers maxes at 16TB,
which isn't all that far in the future. While 16TB of shared buffers might
not be a good idea, it's not going to be terribly long before we start
getting questions about it.

Time for int64 GUCs?

I don't think the GUC bit is the hard part. We'd possibly need some
trickery (like not storing bufferid in BufferDesc anymore) to avoid
increasing memory usage.

- Andres


#14Andres Freund
andres@anarazel.de
In reply to: Jim Nasby (#1)
Re: Time to up bgwriter_lru_maxpages?

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

- Andres


#15Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#14)
Re: Time to up bgwriter_lru_maxpages?

On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund <andres@anarazel.de> wrote:

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

I'm happy to see it replaced, but increasing the limits is about three
orders of magnitude less work than replacing it, so let's not block
this on the theory that the other thing would be better.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#16Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#15)
Re: Time to up bgwriter_lru_maxpages?

On 2017-02-01 20:30:30 -0500, Robert Haas wrote:

On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund <andres@anarazel.de> wrote:

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

I'm happy to see it replaced, but increasing the limits is about three
orders of magnitude less work than replacing it, so let's not block
this on the theory that the other thing would be better.

I seriously doubt you can meaningfully exceed 780MB/s with the current
bgwriter. So it's not like the limits are all that relevant right now.


#17Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#16)
Re: Time to up bgwriter_lru_maxpages?

On Wed, Feb 1, 2017 at 8:35 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-02-01 20:30:30 -0500, Robert Haas wrote:

On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund <andres@anarazel.de> wrote:

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

I'm happy to see it replaced, but increasing the limits is about three
orders of magnitude less work than replacing it, so let's not block
this on the theory that the other thing would be better.

I seriously doubt you can meaningfully exceed 780MB/s with the current
bgwriter. So it's not like the limits are all that relevant right now.

Sigh. The patch is harmless and there are 4 or 5 votes in favor of
it, one of which clearly states that the person involved has seen
it be a problem in real workloads. Do we really have to argue about
this?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#18Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#17)
Re: Time to up bgwriter_lru_maxpages?

On 2017-02-01 20:38:58 -0500, Robert Haas wrote:

On Wed, Feb 1, 2017 at 8:35 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-02-01 20:30:30 -0500, Robert Haas wrote:

On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund <andres@anarazel.de> wrote:

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

I'm happy to see it replaced, but increasing the limits is about three
orders of magnitude less work than replacing it, so let's not block
this on the theory that the other thing would be better.

I seriously doubt you can meaningfully exceed 780MB/s with the current
bgwriter. So it's not like the limits are all that relevant right now.

Sigh. The patch is harmless and there are 4 or 5 votes in favor of
it, one of which clearly states that the person involved has seen
it be a problem in real workloads. Do we really have to argue about
this?

I don't mind increasing the limit, it's harmless. I just seriously doubt
it actually addresses any sort of problem.


#19Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Andres Freund (#13)
Re: Time to up bgwriter_lru_maxpages?

On 2/1/17 4:27 PM, Andres Freund wrote:

On 2017-02-02 09:22:46 +0900, Michael Paquier wrote:

On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Speaking of which... I have a meeting in 15 minutes to discuss moving to a
server with 4TB of memory. With current limits shared buffers maxes at 16TB,
which isn't all that far in the future. While 16TB of shared buffers might
not be a good idea, it's not going to be terribly long before we start
getting questions about it.

Time for int64 GUCs?

I don't think the GUC bit is the hard part. We'd possibly need some
trickery (like not storing bufferid in BufferDesc anymore) to avoid
increasing memory usage.

Before doing that the first thing to look at would be why the limit is
currently INT_MAX / 2 instead of INT_MAX.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)

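The arithmetic behind the two limits Jim contrasts, assuming the default 8 kB block size (a bare calculation, not a claim about any particular release):

```python
# shared_buffers is stored in buffer units; its GUC maximum is INT_MAX / 2.
# With 8 kB blocks, that caps addressable shared buffers at ~8 TiB, and
# lifting the cap to INT_MAX would roughly double it to ~16 TiB.
INT_MAX = 2**31 - 1
BLCKSZ = 8192  # default PostgreSQL block size, in bytes

for label, nbuffers in (("INT_MAX / 2", INT_MAX // 2), ("INT_MAX", INT_MAX)):
    print(f"{label:>11}: {nbuffers * BLCKSZ / 2**40:.0f} TiB of shared_buffers")
```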

#20Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Andres Freund (#14)
Re: Time to up bgwriter_lru_maxpages?

On 2/1/17 4:28 PM, Andres Freund wrote:

On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:

With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?

FWIW, I think working on replacing bgwriter (e.g. by working on the
patch I sent with a POC replacement) wholesale is a better approach than
spending time increasing limits.

Do you have a link to that? I'm not seeing anything in the archives.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


#21Robert Haas
robertmhaas@gmail.com
In reply to: Jim Nasby (#19)
#22Andres Freund
andres@anarazel.de
In reply to: Jim Nasby (#20)
#23Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#21)
#24Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Andres Freund (#22)
#25Andres Freund
andres@anarazel.de
In reply to: Jim Nasby (#24)
#26Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Andres Freund (#25)
#27Andres Freund
andres@anarazel.de
In reply to: Jim Nasby (#26)
#28Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Andres Freund (#27)
#29David Steele
david@pgmasters.net
In reply to: Robert Haas (#21)
#30Robert Haas
robertmhaas@gmail.com
In reply to: David Steele (#29)
#31David Steele
david@pgmasters.net
In reply to: Robert Haas (#30)