Dead Space Map version 2
This is the second proposal for Dead Space Map (DSM).
Here is the previous discussion:
http://archives.postgresql.org/pgsql-hackers/2006-12/msg01188.php
I'll post the next version of the Dead Space Map patch to -patches.
I've implemented the 2 bits/page bitmap and new vacuum commands.
Memory management and recovery features are not done yet.
I think it's better to get DSM and HOT in together. DSM is good for
complex update patterns but not for heavily updated ones. HOT has the
opposite characteristics, as far as I can see. I think they can cover
for each other.
2bits/page bitmap
-----------------
Each heap page has one of 4 states in the dead space map: HIGH, LOW,
UNFROZEN and FROZEN. VACUUM uses the states to reduce the number of target
pages.
- HIGH : High priority to vacuum. Probably many dead tuples in the page.
- LOW : Low priority to vacuum. Probably few dead tuples in the page.
- UNFROZEN : No dead tuples, but some unfrozen tuples in the page.
- FROZEN : No dead nor unfrozen tuples in the page.
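Just to illustrate the packing, here is a minimal sketch of a 2 bits/page
map in C; the names (DSMPageState, dsm_get_state, dsm_set_state) are
illustrative only, not the actual patch code:

typedef enum
{
    DSM_FROZEN   = 0,           /* no dead nor unfrozen tuples */
    DSM_UNFROZEN = 1,           /* no dead tuples, some unfrozen ones */
    DSM_LOW      = 2,           /* low priority to vacuum */
    DSM_HIGH     = 3            /* high priority to vacuum */
} DSMPageState;

/* Four 2-bit page states are packed into each byte of the map. */
static DSMPageState
dsm_get_state(const unsigned char *map, unsigned int blkno)
{
    return (DSMPageState) ((map[blkno / 4] >> ((blkno % 4) * 2)) & 0x03);
}

static void
dsm_set_state(unsigned char *map, unsigned int blkno, DSMPageState state)
{
    unsigned int shift = (blkno % 4) * 2;

    map[blkno / 4] = (unsigned char)
        ((map[blkno / 4] & ~(0x03 << shift)) | ((unsigned int) state << shift));
}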
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
When the transaction commits, the updated tuple will only ever need a
FREEZE; that's why the after-page is marked as LOW. However, if we roll
back, the after-page should be vacuumed, so we should mark the page LOW,
not UNFROZEN; at UPDATE time we don't know whether the transaction will
commit or roll back.
If we combine this with the HOT patch, pages with HOT tuples will probably
be marked as UNFROZEN, because we don't bother vacuuming HOT tuples. They
can be removed incrementally and don't require explicit vacuums.
As future work, we could do index-only scans for tuples that are in
UNFROZEN or FROZEN pages. (Currently not implemented.)
VACUUM commands
---------------
VACUUM now scans only the pages that possibly have dead tuples.
VACUUM ALL, a new syntax, behaves the same as before.
- VACUUM FULL : Not changed. Scans all pages and compacts them.
- VACUUM ALL  : Scans all pages; behaves the same as the previous VACUUM.
- VACUUM      : Usually scans only HIGH pages, but also LOW and UNFROZEN
                pages when vacuuming to prevent XID wraparound.
The guarantee VACUUM gives about the oldest XID is not changed: after
VACUUM, there should be no tuples whose XIDs are older than
(current XID - vacuum_freeze_min_age). If VACUUM can keep that guarantee,
it scans only HIGH pages. Otherwise, it scans HIGH, LOW and UNFROZEN pages
and freezes tuples there.
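As an illustration only, the page filter for a vacuum could look roughly
like this (a sketch reusing dsm_get_state() from the sketch above, not the
actual patch code):

static bool
vacuum_should_scan_page(const unsigned char *map, unsigned int blkno,
                        bool wraparound_scan)
{
    DSMPageState state = dsm_get_state(map, blkno);

    if (state == DSM_HIGH)
        return true;                    /* plain VACUUM scans only these */
    if (wraparound_scan)
        return state != DSM_FROZEN;     /* freeze scan adds LOW, UNFROZEN */
    return false;
}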
Performance issues
------------------
* Enable/Disable DSM tracking per table
DSM always incurs some additional work. If we know of specific tables
where DSM does not work well, e.g. heavily updated small tables, we can
disable DSM for them. The syntax is:
ALTER TABLE name SET (dsm=true/false);
* Dead Space State Cache
The DSM management module is guarded by one LWLock, DeadSpaceLock.
Almost all accesses to the DSM require only a shared lock, but the
shared-lock frequency was very high (comparable to BufMappingLock) in my
tests. To avoid the lock contention, I added a cache of the dead space
state in the BufferDesc flags. Backends check the flags first and skip
locking when it is not needed.
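Roughly, the pattern is the sketch below; BM_DSM_HIGH is a made-up flag
name here, standing in for the cached state bits in the BufferDesc flags:

/* Sketch only: check the cached per-buffer state before locking. */
if (bufHdr->flags & BM_DSM_HIGH)
{
    /* page already marked HIGH in the cache; no DeadSpaceLock needed */
}
else
{
    LWLockAcquire(DeadSpaceLock, LW_SHARED);
    /* ... update the shared dead space map and the cached flag ... */
    LWLockRelease(DeadSpaceLock);
}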
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
The goal is to produce FROZEN pages rather than UNFROZEN pages as far as
possible, in order to reduce the work done by XID wraparound vacuums.
Memory management
-----------------
In the current implementation, DSM allocates a block of memory at startup,
and it cannot be resized while running. That is probably enough, because
DSM consumes very little memory -- 32MB per 1TB of database.
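For reference, the 32MB figure follows directly from the 2 bits/page
format:
    1TB / 8KB per page       = 128M pages
    128M pages * 2 bits/page = 256Mbit = 32MB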
There are 3 parameters for FSM and DSM.
- max_fsm_pages = 204800
- max_fsm_relations = 1000 (= max_dsm_relations)
- max_dsm_pages = 4096000
I'm thinking of changing them into 2 new parameters. We would allocate
enough memory for the DSM to cover all of estimated_database_size, and
for the FSM 50% or so of that size. Is this reasonable?
- estimated_max_relations = 1000
- estimated_database_size = 4GB (= about max_fsm_pages * 8KB * 2)
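(Checking the arithmetic: 204800 pages * 8KB = 1.6GB, doubled is about
3.2GB, rounded up to 4GB.)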
Recovery
--------
I already have a recovery extension. However, it can recover the DSM
but not the FSM. Do we also need to restore the FSM? If we don't,
unreusable pages might be left in heaps. Of course such a page could be
reused if another tuple in it is updated, but VACUUM will not find those
pages.
Comments and suggestions are really appreciated.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
On Tue, Feb 27, 2007 at 12:05:57PM +0900, ITAGAKI Takahiro wrote:
Each heap page has one of 4 states in the dead space map: HIGH, LOW,
UNFROZEN and FROZEN. VACUUM uses the states to reduce the number of target
pages.
- HIGH : High priority to vacuum. Probably many dead tuples in the page.
- LOW : Low priority to vacuum. Probably few dead tuples in the page.
- UNFROZEN : No dead tuples, but some unfrozen tuples in the page.
- FROZEN : No dead nor unfrozen tuples in the page.
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
When the transaction commits, the updated tuple will only ever need a
FREEZE; that's why the after-page is marked as LOW. However, if we roll
back, the after-page should be vacuumed, so we should mark the page LOW,
not UNFROZEN; at UPDATE time we don't know whether the transaction will
commit or roll back.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for)
when it writes a page out, and adjust the DSM at that time.
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
The goal is to produce FROZEN pages rather than UNFROZEN pages as far as
possible, in order to reduce the work done by XID wraparound vacuums.
Do you mean using OldestXmin instead of FreezeLimit?
Perhaps it might be better to save that optimization for later...
In the current implementation, DSM allocates a block of memory at startup,
and it cannot be resized while running. That is probably enough, because
DSM consumes very little memory -- 32MB per 1TB of database.
There are 3 parameters for FSM and DSM.
- max_fsm_pages = 204800
- max_fsm_relations = 1000 (= max_dsm_relations)
- max_dsm_pages = 4096000
I'm thinking of changing them into 2 new parameters. We would allocate
enough memory for the DSM to cover all of estimated_database_size, and
for the FSM 50% or so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I already have a recovery extension. However, it can recover the DSM
but not the FSM. Do we also need to restore the FSM? If we don't,
unreusable pages might be left in heaps. Of course such a page could be
reused if another tuple in it is updated, but VACUUM will not find those
pages.
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
I suspect we don't need perfect recoverability... theoretically we could
just commit the FSM after vacuum frees pages and leave it at that; if we
revert to that after a crash, backends will grab pages from the FSM only
to find there's no more free space, at which point they could pull the
page from the FSM and find another one. This would lead to degraded
performance for a while after a crash, but that might be a good
trade-off.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
I suspect we don't need perfect recoverability...
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit. I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound. So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
regards, tom lane
On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:
I think it's better to get DSM and HOT in together. DSM is good for
complex update patterns but not for heavily updated ones. HOT has the
opposite characteristics, as far as I can see. I think they can cover
for each other.
Very much agreed.
I'll be attempting to watch for any conflicting low-level assumptions as
we progress towards deadline.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:
If we combine this with the HOT patch, pages with HOT tuples will probably
be marked as UNFROZEN, because we don't bother vacuuming HOT tuples. They
can be removed incrementally and don't require explicit vacuums.
Perhaps avoid DSM entries for HOT updates completely?
VACUUM commands
---------------
VACUUM now scans only the pages that possibly have dead tuples.
VACUUM ALL, a new syntax, behaves the same as before.
- VACUUM FULL : Not changed. Scans all pages and compacts them.
- VACUUM ALL  : Scans all pages; behaves the same as the previous VACUUM.
- VACUUM      : Usually scans only HIGH pages, but also LOW and UNFROZEN
                pages when vacuuming to prevent XID wraparound.
Sounds good.
Performance issues
------------------
* Enable/Disable DSM tracking per table
DSM always incurs some additional work. If we know of specific tables
where DSM does not work well, e.g. heavily updated small tables, we can
disable DSM for them. The syntax is:
ALTER TABLE name SET (dsm=true/false);
How about a dsm_tracking_limit GUC? (Better name please)
The number of pages in a table before we start tracking DSM entries for
it. DSM only gives worthwhile benefits for larger tables anyway, so let
the user define what large means for them.
dsm_tracking_limit = 1000 by default.
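e.g. something like this at tracking time (a sketch only;
dsm_track_relation() is a made-up name, while RelationGetNumberOfBlocks()
exists already):

/* Sketch: skip DSM tracking for tables below the threshold. */
if (RelationGetNumberOfBlocks(rel) >= dsm_tracking_limit)
    dsm_track_relation(rel);    /* hypothetical helper */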
* Dead Space State Cache
The DSM management module is guarded by one LWLock, DeadSpaceLock.
Almost all accesses to the DSM require only a shared lock, but the
shared-lock frequency was very high (comparable to BufMappingLock) in my
tests. To avoid the lock contention, I added a cache of the dead space
state in the BufferDesc flags. Backends check the flags first and skip
locking when it is not needed.
ISTM there should be a point at which the DSM is so full that we don't
bother to keep tracking any longer, so we can drop that information. For
example, if a user runs UPDATE without a WHERE clause, there's no point in
tracking the whole relation.
Memory management
-----------------
In the current implementation, DSM allocates a block of memory at startup,
and it cannot be resized while running. That is probably enough, because
DSM consumes very little memory -- 32MB per 1TB of database.
That sounds fine.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Tue, 2007-02-27 at 00:55 -0500, Tom Lane wrote:
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
I suspect we don't need perfect recoverability...
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit. I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound.
Agreed.
So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
I'm inclined to think: this close to deadline it would be more robust to
go with the simpler option. So, agreed to the one bit per page.
We can revisit the 2 bits/page idea easily for later releases. If the
DSM is non-transactional, upgrading to a new format in the future should
be very easy.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
"Jim C. Nasby" <jim@nasby.net> wrote:
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
No, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN
pages to mean "all tuples in the page are visible to all transactions" for
index-only scans in the future.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for)
when it writes a page out, and adjust the DSM at that time.
Yeah, I feel it is a worthwhile optimization, too. One question: how do
we treat dirty pages written by backends rather than by the bgwriter? If
we want to add such work to the bgwriter, do we also need to make the
bgwriter write almost all dirty pages?
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
Do you mean using OldestXmin instead of FreezeLimit?
Yes, we will use OldestXmin as the threshold to freeze tuples in dirty
pages or pages that have some dead tuples. Otherwise, many UNFROZEN pages
would still remain after vacuum, and they would cost us in the next
anti-wraparound vacuum.
I'm thinking of changing them into 2 new parameters. We would allocate
enough memory for the DSM to cover all of estimated_database_size, and
for the FSM 50% or so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages. For DSM, max_dsm_pages should represent the
whole database size. On the other hand, what meaning does
max_fsm_pages have? (estimated_updatable_size?)
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane <tgl@sss.pgh.pa.us> wrote:
Vacuum for XID wraparound would have to hit every page regardless.
There is one problem at this point. If we want to guarantee that there
are no tuples whose XIDs are older than pg_class.relfrozenxid, we must
scan all pages on every vacuum to prevent XID wraparound. So I used two
thresholds for XIDs, commented as follows. Do you have better ideas for
this point?
/*
 * We use vacuum_freeze_min_age to determine whether a freeze scan is
 * needed, but half of vacuum_freeze_min_age for the actual freeze limit,
 * so that anti-wraparound vacuums don't occur too frequently.
 */
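For example, with vacuum_freeze_min_age = 100 million, a freeze scan is
triggered once relfrozenxid falls 100 million transactions behind, but
that scan freezes everything older than 50 million, so the next freeze
scan is roughly another 50 million transactions away.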
Also, normal vacuums use the DSM and freeze vacuums do not, so a vacuum
will sometimes take much longer than usual. Won't that surprise bother
users?
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Simon Riggs" <simon@2ndquadrant.com> wrote:
If we combine this with the HOT patch, pages with HOT tuples will probably
be marked as UNFROZEN, because we don't bother vacuuming HOT tuples. They
can be removed incrementally and don't require explicit vacuums.
Perhaps avoid DSM entries for HOT updates completely?
Yes, if we employ 1 bit/page (worth vacuuming or not). No, if we employ
2 bits/page, because HOT updates change page states to UNFROZEN.
* Enable/Disable DSM tracking per table
How about a dsm_tracking_limit GUC? (Better name please)
The number of pages in a table before we start tracking DSM entries for
it. DSM only gives worthwhile benefits for larger tables anyway, so let
the user define what large means for them.
dsm_tracking_limit = 1000 by default.
Sounds good. How about small_table_size = 8MB for the variable?
I found that we already have such a value, used as the truncation
threshold for vacuum (REL_TRUNCATE_MINIMUM = 1000 in vacuumlazy.c).
They have the same purpose of treating small tables specially,
so we could use the same variable in both places.
* Dead Space State Cache
ISTM there should be a point at which the DSM is so full that we don't
bother to keep tracking any longer, so we can drop that information. For
example, if a user runs UPDATE without a WHERE clause, there's no point in
tracking the whole relation.
It's a bit difficult. We have to lock the DSM *before* we see whether the
table is tracked or not. So we would need to cache the tracked state in
the relcache entry, but it requires some work to keep the cached state
coherent with the shared state.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane wrote:
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit.
I doubt it would be a big performance hit. AFAICS, all the information
needed to recover the DSM is already written to WAL, so it wouldn't need
any new WAL records.
I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound. So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans. Index-only scans will obviously require a lot more
work than just the DSM, but I'd like to have a solution that enables it
in the future.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Tue, Feb 27, 2007 at 12:55:21AM -0500, Tom Lane wrote:
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
My thought would be to revert to a FSM that has pages marked as free
that no longer are. Could be done by writing the FSM out every time we
add pages to it. After an unclean restart backends would be getting
pages from the FSM that didn't have free space, in which case they'd
need to yank that page out of the FSM and request a new one. Granted,
this means extra IO until the FSM gets back to a realistic state, but I
suspect that's better than bloating tables out until the next vacuum.
And it's ultimately less IO than re-vacuuming every table to rebuild the
FSM.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, Feb 27, 2007 at 05:38:39PM +0900, ITAGAKI Takahiro wrote:
"Jim C. Nasby" <jim@nasby.net> wrote:
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
No, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN
pages to mean "all tuples in the page are visible to all transactions" for
index-only scans in the future.
Ahh, ok. Makes sense, though I tend to agree with others that it's
better to leave that off for now, or at least do the initial patch
without it.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for)
when it writes a page out, and adjust the DSM at that time.
Yeah, I feel it is a worthwhile optimization, too. One question: how do
we treat dirty pages written by backends rather than by the bgwriter? If
we want to add such work to the bgwriter, do we also need to make the
bgwriter write almost all dirty pages?
IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
Do you mean using OldestXmin instead of FreezeLimit?
Yes, we will use OldestXmin as the threshold to freeze tuples in dirty
pages or pages that have some dead tuples. Otherwise, many UNFROZEN pages
would still remain after vacuum, and they would cost us in the next
anti-wraparound vacuum.
Another good idea. If it's not too invasive I'd love to see that as a
stand-alone patch so that we know it can get in.
I'm thinking of changing them into 2 new parameters. We would allocate
enough memory for the DSM to cover all of estimated_database_size, and
for the FSM 50% or so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages. For DSM, max_dsm_pages should represent the
whole database size. On the other hand, what meaning does
max_fsm_pages have? (estimated_updatable_size?)
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there are also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
That raises another question... what happens when we run out of DSM
space?
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
I'd be happier if the DSM content could be
treated as just a hint.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans.
To implement index-only scans, the DSM would have to be expected to
provide 100% reliable coverage, which will increase its cost and
complexity by orders of magnitude. If you insist on that, I will bet
you lunch at a fine restaurant that it doesn't make it into 8.3.
regards, tom lane
"Jim C. Nasby" <jim@nasby.net> wrote:
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there are also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
I see. We need separate variables for FSM and DSM.
Here is a new proposal for replacing the variables in the Free Space Map
section of postgresql.conf. Are these changes acceptable? If so, I'd
like to rewrite the code to use them.
# - Space Management -
managed_relations = 1000 # min 100, ~120 bytes each
managed_freespaces = 2GB # 6 bytes of shared memory per 8KB
managed_deadspaces = 8GB # 4KB of shared memory per 32MB
managed_relations:
Replacement for max_fsm_relations. It is also used by DSM.
managed_freespaces:
Replacement for max_fsm_pages. The meaning is not changed,
but it can be set in bytes.
managed_deadspaces:
A new parameter for DSM. It might be better for it to scale
with the whole database size.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Jim C. Nasby" <jim@nasby.net> wrote:
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there are also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
A bitmapped FSM is interesting. Maybe strict accuracy is not needed for
the FSM. If we changed the FSM to use 2 bits/page bitmaps, it would
require only 1/48 of the shared memory it needs now. However, 6 bytes/page
is small enough for normal use; we'd need to reconsider if we went after
TB-class, heavily updated databases.
That raises another question... what happens when we run out of DSM
space?
First, discard completely clean memory chunks in the DSM. "Clean" means
all of the tuples managed by the chunk are frozen, so this is a lossless
transition.
Second, discard the tracked tables, and their chunks, that were least
recently vacuumed. We can assume those tables have many dead tuples and
an almost-full scan will be required anyway, so we don't bother to keep
tracking them.
Many optimizations should still be possible at this point, but I'll make
a not-so-complex suggestion in the meantime.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
I'd be happier if the DSM content could be
treated as just a hint.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans.
To implement index-only scans, the DSM would have to be expected to
provide 100% reliable coverage, which will increase its cost and
complexity by orders of magnitude. If you insist on that, I will bet
you lunch at a fine restaurant that it doesn't make it into 8.3.
:)
While I understand that 100% reliable coverage is a significantly
stronger guarantee, I don't see any particular problems in implementing
that. WAL logging isn't that hard.
I won't insist, I'm not the one doing the programming after all.
Anything is better than what we have now. However, I do hope that
whatever is implemented doesn't need a complete rewrite to make it 100%
reliable in the future.
The basic wish I have is to not use a fixed size shared memory area like
FSM for the DSM. I'd like it to use the shared buffers instead, which
makes the memory management and tuning easier. And it also makes it
easier to get the WAL logging right, even if it's not done for 8.3 but
added later.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Hello, long time no see.
This topic looks interesting. I'm envious of Itagaki-san and the others.
I can't do what I want right now, due to other work that I don't want to
do (isn't my boss seeing this?). I wish I could join the community some
day and contribute to the development like the great experts here.
# I can't wait to try Itagaki-san's latest patch for load distributed
# checkpoint in my environment and report the result.
# But I may not have enough time...
Let me give some comments below.
From: "Heikki Linnakangas" <heikki@enterprisedb.com>
While I understand that 100% reliable coverage is a significantly
stronger guarantee, I don't see any particular problems in
implementing
that. WAL logging isn't that hard.
I won't insist, I'm not the one doing the programming after all.
Anything is better than what we have now. However, I do hope that
whatever is implemented doesn't need a complete rewrite to make it
100%
reliable in the future.
The basic wish I have is to not use a fixed size shared memory area
like
FSM for the DSM. I'd like it to use the shared buffers instead,
which
makes the memory management and tuning easier. And it also makes it
easier to get the WAL logging right, even if it's not done for 8.3
but
added later.
I hope for the same thing as Heikki-san. Though I'm relatively new to the
PostgreSQL source code, I don't think it is very difficult (at least for
the experts here) to implement a reliable space management scheme, so I
proposed the following before -- not a separate memory area for FSM, but
treating it the same way as data files in the shared buffers.
Though Tom-san is worried about performance, what would make the
performance degrade greatly? Additional WAL records for updating the
space management structures are written sequentially in batches.
Additional dirty shared buffers are written efficiently by the kernel (at
least for now). And PostgreSQL is released from the giant lwlock for FSM.
Some performance degradation would surely result. However, reliability is
more important, because "vacuum" is almost the greatest concern for real
serious users (not for hobbyists who enjoy performance). Can anybody say
to users "we are working hard, but our work may not be reliable and
sometimes fails. Can you check whether our vacuuming effort failed and
try this..."?
And I'm afraid that increasing the number of configuration parameters is
unacceptable to users; that is merely an excuse of developers.
PostgreSQL already has more than 100 parameters. Some of them, such as
bgwriter_*, are difficult for normal users to understand. It's best to
use the shared_buffers parameter and show how to set it in the
documentation.
Addressing the vacuum problem correctly is very important. I hope you
don't introduce new parameters for unfinished work and force users to
change the parameters in later versions after checking the manual, i.e.
"managed_* parameters are not supported from this release. Please use
shared_buffers..." Is it a "must" to release 8.3 by this summer? I think
that delaying the release a bit for a correct (reliable) vacuum solution
would be worth it.
From: "Takayuki Tsunakawa" <tsunakawa.takay@jp.fujitsu.com>
Yes! I'm completely in favor of Itagaki-san. Separating the cache
for
FSM may produce a new configuration parameter like fsm_cache_size,
which the normal users would not desire (unless they like enjoying
difficult DBMS.)
I think that integrating the treatment of space management structure
and data area is good. That means, for example, implementing "Free
Space Table" described in section 14.2.2.1 of Jim Gray's book
"Transaction Processing: Concepts and Techniques", though it may
have
been discussed in PostgreSQL community far long ago (really?). Of
course, some refinements may be necessary to tune to PostgreSQL's
concept, say, creating one free space table file for each data file
to
make the implementation easy. It would reduce the source code
solely
for FSM.
In addition, it would provide the transactional space management.
If
I understand correctly, in the current implementation, updates to
FSM
are lost when the server crashes, aren't they? The idea assumes
that
Show quoted text
FSM will be rebuilt by vacuum because vacuum is inevitable. If
updates to space management area were made transactional, it might
provide the infrastructure for "vacuumless PostgreSQL."
On Wed, Feb 28, 2007 at 04:10:09PM +0900, ITAGAKI Takahiro wrote:
"Jim C. Nasby" <jim@nasby.net> wrote:
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there's also a lot of cases where FSM size will not scale
the same was DSM size will, such as when there's historical data in the
database.Bitmapped FSM is interesting. Maybe strict accuracy is not needed for FSM.
If we change FSM to use 2 bits/page bitmaps, it requires only 1/48 shared
memory by now. However, 6 bytes/page is small enough for normal use. We need
to reconsider it if we would go into TB class heavily updated databases.That raises another question... what happens when we run out of DSM
space?First, discard completely clean memory chunks in DSM. 'Clean' means all of
the tuples managed by the chunk are frozen. This is a lossless transition.Second, discard tracked tables and its chunks that is least recently
vacuumed. We can assume those tables have many dead tuples and almost
fullscan will be required. We don't bother to keep tracking to such tables.Many optimizations should still remain at this point, but I'll make
a not-so-complex suggestions in the meantime.
Actually, I have to agree with Heikki and Takayuki-san... I really like
the idea of managing DSM (and FSM for that matter) using shared_buffers.
If we do that, we could probably back them to disk very easily.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
"Jim C. Nasby" <jim@nasby.net> wrote:
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples (or how much space the dead tuples account for) when it
writes a page out and adjust the DSM at that time.Yeah, I feel it is worth optimizable, too. One question is, how we treat
dirty pages written by backends not by bgwriter? If we want to add some
works in bgwriter, do we also need to make bgwriter to write almost of
dirty pages?IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
I'm working on making the bgwriter to write almost of dirty pages. This is
the proposal for it using automatic adjustment of bgwriter_lru_maxpages.
The bgwriter_lru_maxpages value will be adjusted to the equal number of calls
of StrategyGetBuffer() per cycle with some safety margins (x2 at present).
The counter are incremented per call and reset to zero at StrategySyncStart().
This patch alone is not so useful except for hiding hardly tunable parameters
from users. However, it would be a first step of allow bgwriters to do some
works before writing dirty buffers.
- [DSM] Pick out pages worth vaccuming and register them into DSM.
- [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
I tested the attached patch on pgbench -s5 (80MB) with shared_buffers=32MB.
I got an expected result as below. Over 75% of buffers are written by
bgwriter. In addition , automatic adjusted bgwriter_lru_maxpages values
were much higher than the default value (5). It shows that the most suitable
values greatly depends on workloads.
benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
default | 300tps | 100% | 77.5% | 120 pages/cycle
with sleep | 150tps | 50% | 98.6% | 70 pages/cycle
I hope that this patch will be a first step of the intelligent bgwriter.
Comments welcome.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Jim C. Nasby" <jim@nasby.net> wrote:
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples (or how much space the dead tuples account for) when it
writes a page out and adjust the DSM at that time.Yeah, I feel it is worth optimizable, too. One question is, how we treat
dirty pages written by backends not by bgwriter? If we want to add some
works in bgwriter, do we also need to make bgwriter to write almost of
dirty pages?IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
I'm working on making the bgwriter write almost all dirty pages. This is
a proposal for that, using automatic adjustment of bgwriter_lru_maxpages.
The bgwriter_lru_maxpages value is adjusted toward the number of calls to
StrategyGetBuffer() per cycle, with some safety margin (x2 at present).
The counter is incremented on each call and reset to zero at
StrategySyncStart().
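For example, if backends called StrategyGetBuffer() 60 times in the last
cycle, the target becomes 120 pages, and bgwriter_lru_maxpages drifts by 1
per cycle toward that target.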
This patch alone is not very useful, except that it hides hard-to-tune
parameters from users. However, it would be a first step toward allowing
the bgwriter to do some work before writing dirty buffers:
- [DSM] Pick out pages worth vacuuming and register them in the DSM.
- [HOT] Do a per-page vacuum for HOT-updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD-updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
I tested the attached patch with pgbench -s5 (80MB) and shared_buffers=32MB.
I got the expected result, shown below: over 75% of buffers were written by
the bgwriter. In addition, the automatically adjusted bgwriter_lru_maxpages
values were much higher than the default value (5), which shows that the
most suitable value depends greatly on the workload.
benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
default | 300tps | 100% | 77.5% | 120 pages/cycle
with sleep | 150tps | 50% | 98.6% | 70 pages/cycle
I hope that this patch will be a first step toward an intelligent bgwriter.
Comments welcome.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Attachments:
automatic_bgwriter_lru.patch (application/octet-stream)
diff -cpr HEAD/doc/src/sgml/config.sgml pgsql-bgwriter/doc/src/sgml/config.sgml
*** HEAD/doc/src/sgml/config.sgml Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/doc/src/sgml/config.sgml Mon Mar 5 12:39:42 2007
*************** SET ENABLE_SEQSCAN TO OFF;
*** 1208,1248 ****
</listitem>
</varlistentry>
- <varlistentry id="guc-bgwriter-lru-percent" xreflabel="bgwriter_lru_percent">
- <term><varname>bgwriter_lru_percent</varname> (<type>floating point</type>)</term>
- <indexterm>
- <primary><varname>bgwriter_lru_percent</> configuration parameter</primary>
- </indexterm>
- <listitem>
- <para>
- To reduce the probability that server processes will need to issue
- their own writes, the background writer tries to write buffers that
- are likely to be recycled soon. In each round, it examines up to
- <varname>bgwriter_lru_percent</> of the buffers that are nearest to
- being recycled, and writes any that are dirty.
- The default value is 1.0 (1% of the total number of shared buffers).
- This parameter can only be set in the <filename>postgresql.conf</>
- file or on the server command line.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry id="guc-bgwriter-lru-maxpages" xreflabel="bgwriter_lru_maxpages">
- <term><varname>bgwriter_lru_maxpages</varname> (<type>integer</type>)</term>
- <indexterm>
- <primary><varname>bgwriter_lru_maxpages</> configuration parameter</primary>
- </indexterm>
- <listitem>
- <para>
- In each round, no more than this many buffers will be written
- as a result of scanning soon-to-be-recycled buffers.
- The default value is five buffers.
- This parameter can only be set in the <filename>postgresql.conf</>
- file or on the server command line.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry id="guc-bgwriter-all-percent" xreflabel="bgwriter_all_percent">
<term><varname>bgwriter_all_percent</varname> (<type>floating point</type>)</term>
<indexterm>
--- 1208,1213 ----
*************** SET ENABLE_SEQSCAN TO OFF;
*** 1290,1303 ****
caused by the background writer, but leave more work to be done
at checkpoint time. To reduce load spikes at checkpoints,
increase these two values.
- Similarly, smaller values of <varname>bgwriter_lru_percent</varname> and
- <varname>bgwriter_lru_maxpages</varname> reduce the extra I/O load
- caused by the background writer, but make it more likely that server
- processes will have to issue writes for themselves, delaying interactive
- queries.
To disable background writing entirely,
! set both <varname>maxpages</varname> values and/or both
! <varname>percent</varname> values to zero.
</para>
</sect2>
</sect1>
--- 1255,1269 ----
caused by the background writer, but leave more work to be done
at checkpoint time. To reduce load spikes at checkpoints,
increase these two values.
To disable background writing entirely,
! set <varname>bgwriter_all_percent</varname> value and/or
! <varname>bgwriter_all_maxpages</varname> value to zero.
! </para>
! <para>
! Also, to reduce the probability that server processes will need to
! issue their own writes, the background writer tries to write buffers
! that are likely to be recycled soon. The amount of writes are adjusted
! automatically.
</para>
</sect2>
</sect1>
diff -cpr HEAD/src/backend/postmaster/bgwriter.c pgsql-bgwriter/src/backend/postmaster/bgwriter.c
*** HEAD/src/backend/postmaster/bgwriter.c Mon Jan 22 13:08:10 2007
--- pgsql-bgwriter/src/backend/postmaster/bgwriter.c Mon Mar 5 12:40:14 2007
*************** static volatile sig_atomic_t shutdown_re
*** 141,147 ****
/*
* Private state
*/
! static bool am_bg_writer = false;
static bool ckpt_active = false;
--- 141,147 ----
/*
* Private state
*/
! /*static*/ bool am_bg_writer = false; /* ONLY FOR DEBUG */
static bool ckpt_active = false;
*************** BackgroundWriterMain(void)
*** 484,491 ****
*
* We absorb pending requests after each short sleep.
*/
! if ((bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0) ||
! (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0))
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
--- 484,490 ----
*
* We absorb pending requests after each short sleep.
*/
! if (bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0)
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
diff -cpr HEAD/src/backend/storage/buffer/bufmgr.c pgsql-bgwriter/src/backend/storage/buffer/bufmgr.c
*** HEAD/src/backend/storage/buffer/bufmgr.c Mon Feb 5 10:35:58 2007
--- pgsql-bgwriter/src/backend/storage/buffer/bufmgr.c Mon Mar 5 12:41:09 2007
***************
*** 62,72 ****
/* GUC variables */
bool zero_damaged_pages = false;
- double bgwriter_lru_percent = 1.0;
double bgwriter_all_percent = 0.333;
- int bgwriter_lru_maxpages = 5;
int bgwriter_all_maxpages = 5;
long NDirectFileRead; /* some I/O's are direct file access. bypass
* bufmgr */
--- 62,71 ----
/* GUC variables */
bool zero_damaged_pages = false;
double bgwriter_all_percent = 0.333;
int bgwriter_all_maxpages = 5;
+ static int bgwriter_lru_maxpages = 5; /* adjusted automatically */
long NDirectFileRead; /* some I/O's are direct file access. bypass
* bufmgr */
*************** BufferSync(void)
*** 945,956 ****
{
int buf_id;
int num_to_scan;
int absorb_counter;
/*
* Find out where to start the circular scan.
*/
! buf_id = StrategySyncStart();
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
--- 944,956 ----
{
int buf_id;
int num_to_scan;
+ int num_to_clean;
int absorb_counter;
/*
* Find out where to start the circular scan.
*/
! buf_id = StrategySyncStart(&num_to_clean);
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
*************** BgBufferSync(void)
*** 992,997 ****
--- 992,998 ----
static int buf_id1 = 0;
int buf_id2;
int num_to_scan;
+ int num_to_clean;
int num_written;
/* Make sure we can handle the pin inside SyncOneBuffer */
*************** BgBufferSync(void)
*** 1036,1058 ****
* This loop considers only unpinned buffers close to the clock sweep
* point.
*/
! if (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0)
! {
! num_to_scan = (int) ((NBuffers * bgwriter_lru_percent + 99) / 100);
! num_written = 0;
! buf_id2 = StrategySyncStart();
! while (num_to_scan-- > 0)
! {
! if (SyncOneBuffer(buf_id2, true))
! {
! if (++num_written >= bgwriter_lru_maxpages)
! break;
! }
! if (++buf_id2 >= NBuffers)
! buf_id2 = 0;
! }
}
}
--- 1037,1061 ----
* This loop considers only unpinned buffers close to the clock sweep
* point.
*/
! buf_id2 = StrategySyncStart(&num_to_clean);
! if (bgwriter_lru_maxpages < num_to_clean)
! bgwriter_lru_maxpages -= 1;
! else
! bgwriter_lru_maxpages += 1;
! {
! static int counter = 0;
! if ((++counter) % 50 == 0)
! elog(LOG, "bgwriter_lru_maxpages = %d", bgwriter_lru_maxpages);
! }
! num_written = 0;
! while (num_written < bgwriter_lru_maxpages)
! {
! if (SyncOneBuffer(buf_id2, true))
! ++num_written;
! if (++buf_id2 >= NBuffers)
! buf_id2 = 0;
}
}
*************** BgBufferSync(void)
*** 1062,1070 ****
* If skip_pinned is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns true if buffer was written, else false. (This could be in error
! * if FlushBuffers finds the buffer clean after locking it, but we don't
! * care all that much.)
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
--- 1065,1073 ----
* If skip_pinned is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns true if buffer was written or recyclable soon, else false.
! * (This could be in error if FlushBuffers finds the buffer clean after
! * locking it, but we don't care all that much.)
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
*************** SyncOneBuffer(int buf_id, bool skip_pinn
*** 1083,1098 ****
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
! if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
UnlockBufHdr(bufHdr);
return false;
}
! if (skip_pinned &&
! (bufHdr->refcount != 0 || bufHdr->usage_count != 0))
{
UnlockBufHdr(bufHdr);
! return false;
}
/*
--- 1086,1101 ----
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
! if (skip_pinned &&
! (bufHdr->refcount != 0 || bufHdr->usage_count != 0))
{
UnlockBufHdr(bufHdr);
return false;
}
! if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
UnlockBufHdr(bufHdr);
! return skip_pinned; /* We can recycle non dirty buffers soon. */
}
/*
*************** PrintBufferLeakWarning(Buffer buffer)
*** 1272,1279 ****
--- 1275,1284 ----
void
FlushBufferPool(void)
{
+ StrategyShowReports(); /* ONLY FOR DEBUG */
BufferSync();
smgrsync();
+ StrategyShowReports(); /* ONLY FOR DEBUG */
}
*************** FlushBuffer(volatile BufferDesc *buf, SM
*** 1365,1370 ****
--- 1370,1377 ----
if (!StartBufferIO(buf, false))
return;
+ StrategyReportWrite(); /* ONLY FOR DEBUG */
+
/* Setup error traceback support for ereport() */
errcontext.callback = buffer_write_error_callback;
errcontext.arg = (void *) buf;
diff -cpr HEAD/src/backend/storage/buffer/freelist.c pgsql-bgwriter/src/backend/storage/buffer/freelist.c
*** HEAD/src/backend/storage/buffer/freelist.c Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/backend/storage/buffer/freelist.c Mon Mar 5 12:36:35 2007
*************** typedef struct
*** 27,32 ****
--- 27,37 ----
/* Clock sweep hand: index of next buffer to consider grabbing */
int nextVictimBuffer;
+ int numGetBuffer; /* Buffer request count per cycle */
+
+ int writtenByBgWriter; /* ONLY FOR DEBUG */
+ int writtenByBackends; /* ONLY FOR DEBUG */
+
int firstFreeBuffer; /* Head of list of unused buffers */
int lastFreeBuffer; /* Tail of list of unused buffers */
*************** StrategyGetBuffer(void)
*** 63,68 ****
--- 68,75 ----
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+ StrategyControl->numGetBuffer++;
+
/*
* Try to get a buffer from the freelist. Note that the freeNext fields
* are considered to be protected by the BufFreelistLock not the
*************** StrategyFreeBuffer(volatile BufferDesc *
*** 176,191 ****
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(void)
{
int result;
- /*
- * We could probably dispense with the locking here, but just to be safe
- * ...
- */
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
result = StrategyControl->nextVictimBuffer;
LWLockRelease(BufFreelistLock);
return result;
}
--- 183,196 ----
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(int *num_to_clean)
{
int result;
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
result = StrategyControl->nextVictimBuffer;
+ *num_to_clean = StrategyControl->numGetBuffer * 2; /* safety margin */
+ StrategyControl->numGetBuffer = 0;
LWLockRelease(BufFreelistLock);
return result;
}
*************** StrategyInitialize(bool init)
*** 270,276 ****
--- 275,303 ----
/* Initialize the clock sweep pointer */
StrategyControl->nextVictimBuffer = 0;
+
+ StrategyControl->numGetBuffer = 0;
+ StrategyControl->writtenByBgWriter = 0;
+ StrategyControl->writtenByBackends = 0;
}
else
Assert(!init);
}
+
+ void
+ StrategyReportWrite(void)
+ {
+ extern bool am_bg_writer;
+ if (am_bg_writer)
+ StrategyControl->writtenByBgWriter++;
+ else
+ StrategyControl->writtenByBackends++;
+ }
+
+ void
+ StrategyShowReports(void)
+ {
+ elog(LOG, "Write stats : bgwriter/backends = %d/%d",
+ StrategyControl->writtenByBgWriter,
+ StrategyControl->writtenByBackends);
+ }
diff -cpr HEAD/src/backend/utils/misc/guc.c pgsql-bgwriter/src/backend/utils/misc/guc.c
*** HEAD/src/backend/utils/misc/guc.c Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/src/backend/utils/misc/guc.c Mon Mar 5 12:39:42 2007
*************** static struct config_int ConfigureNamesI
*** 1505,1519 ****
},
{
- {"bgwriter_lru_maxpages", PGC_SIGHUP, RESOURCES,
- gettext_noop("Background writer maximum number of LRU pages to flush per round."),
- NULL
- },
- &bgwriter_lru_maxpages,
- 5, 0, 1000, NULL, NULL
- },
-
- {
{"bgwriter_all_maxpages", PGC_SIGHUP, RESOURCES,
gettext_noop("Background writer maximum number of all pages to flush per round."),
NULL
--- 1505,1510 ----
*************** static struct config_real ConfigureNames
*** 1757,1771 ****
},
{
- {"bgwriter_lru_percent", PGC_SIGHUP, RESOURCES,
- gettext_noop("Background writer percentage of LRU buffers to flush per round."),
- NULL
- },
- &bgwriter_lru_percent,
- 1.0, 0.0, 100.0, NULL, NULL
- },
-
- {
{"bgwriter_all_percent", PGC_SIGHUP, RESOURCES,
gettext_noop("Background writer percentage of all buffers to flush per round."),
NULL
--- 1748,1753 ----
diff -cpr HEAD/src/backend/utils/misc/postgresql.conf.sample pgsql-bgwriter/src/backend/utils/misc/postgresql.conf.sample
*** HEAD/src/backend/utils/misc/postgresql.conf.sample Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/src/backend/utils/misc/postgresql.conf.sample Mon Mar 5 12:39:42 2007
***************
*** 138,146 ****
# - Background writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
! #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round
! #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round
! #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round
--- 138,144 ----
# - Background writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
! #bgwriter_all_percent = 0.333 # 0-100% of buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round
diff -cpr HEAD/src/include/storage/buf_internals.h pgsql-bgwriter/src/include/storage/buf_internals.h
*** HEAD/src/include/storage/buf_internals.h Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/include/storage/buf_internals.h Mon Mar 5 12:36:35 2007
*************** extern long int LocalBufferFlushCount;
*** 186,195 ****
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(void);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
/* buf_table.c */
extern Size BufTableShmemSize(int size);
extern void InitBufTable(int size);
--- 186,198 ----
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(int *num_to_clean);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
+ extern void StrategyReportWrite(void); /* ONLY FOR DEBUG */
+ extern void StrategyShowReports(void); /* ONLY FOR DEBUG */
+
/* buf_table.c */
extern Size BufTableShmemSize(int size);
extern void InitBufTable(int size);
diff -cpr HEAD/src/include/storage/bufmgr.h pgsql-bgwriter/src/include/storage/bufmgr.h
*** HEAD/src/include/storage/bufmgr.h Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/include/storage/bufmgr.h Mon Mar 5 12:39:42 2007
*************** extern DLLIMPORT int NBuffers;
*** 24,32 ****
/* in bufmgr.c */
extern bool zero_damaged_pages;
- extern double bgwriter_lru_percent;
extern double bgwriter_all_percent;
- extern int bgwriter_lru_maxpages;
extern int bgwriter_all_maxpages;
/* in buf_init.c */
--- 24,30 ----
Sorry, I had a mistake in the patch I sent.
This is a fixed version.
I wrote:
I'm working on making the bgwriter write almost all dirty pages. This is
a proposal for that, using automatic adjustment of bgwriter_lru_maxpages.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Attachments:
automatic_bgwriter_lru2.patch (application/octet-stream)
diff -cpr HEAD/doc/src/sgml/config.sgml pgsql-bgwriter/doc/src/sgml/config.sgml
*** HEAD/doc/src/sgml/config.sgml Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/doc/src/sgml/config.sgml Mon Mar 5 12:39:42 2007
*************** SET ENABLE_SEQSCAN TO OFF;
*** 1208,1248 ****
</listitem>
</varlistentry>
- <varlistentry id="guc-bgwriter-lru-percent" xreflabel="bgwriter_lru_percent">
- <term><varname>bgwriter_lru_percent</varname> (<type>floating point</type>)</term>
- <indexterm>
- <primary><varname>bgwriter_lru_percent</> configuration parameter</primary>
- </indexterm>
- <listitem>
- <para>
- To reduce the probability that server processes will need to issue
- their own writes, the background writer tries to write buffers that
- are likely to be recycled soon. In each round, it examines up to
- <varname>bgwriter_lru_percent</> of the buffers that are nearest to
- being recycled, and writes any that are dirty.
- The default value is 1.0 (1% of the total number of shared buffers).
- This parameter can only be set in the <filename>postgresql.conf</>
- file or on the server command line.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry id="guc-bgwriter-lru-maxpages" xreflabel="bgwriter_lru_maxpages">
- <term><varname>bgwriter_lru_maxpages</varname> (<type>integer</type>)</term>
- <indexterm>
- <primary><varname>bgwriter_lru_maxpages</> configuration parameter</primary>
- </indexterm>
- <listitem>
- <para>
- In each round, no more than this many buffers will be written
- as a result of scanning soon-to-be-recycled buffers.
- The default value is five buffers.
- This parameter can only be set in the <filename>postgresql.conf</>
- file or on the server command line.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry id="guc-bgwriter-all-percent" xreflabel="bgwriter_all_percent">
<term><varname>bgwriter_all_percent</varname> (<type>floating point</type>)</term>
<indexterm>
--- 1208,1213 ----
*************** SET ENABLE_SEQSCAN TO OFF;
*** 1290,1303 ****
caused by the background writer, but leave more work to be done
at checkpoint time. To reduce load spikes at checkpoints,
increase these two values.
- Similarly, smaller values of <varname>bgwriter_lru_percent</varname> and
- <varname>bgwriter_lru_maxpages</varname> reduce the extra I/O load
- caused by the background writer, but make it more likely that server
- processes will have to issue writes for themselves, delaying interactive
- queries.
To disable background writing entirely,
! set both <varname>maxpages</varname> values and/or both
! <varname>percent</varname> values to zero.
</para>
</sect2>
</sect1>
--- 1255,1269 ----
caused by the background writer, but leave more work to be done
at checkpoint time. To reduce load spikes at checkpoints,
increase these two values.
To disable background writing entirely,
! set <varname>bgwriter_all_percent</varname> value and/or
! <varname>bgwriter_all_maxpages</varname> value to zero.
! </para>
! <para>
! Also, to reduce the probability that server processes will need to
! issue their own writes, the background writer tries to write buffers
! that are likely to be recycled soon. The amount of writes are adjusted
! automatically.
</para>
</sect2>
</sect1>
diff -cpr HEAD/src/backend/postmaster/bgwriter.c pgsql-bgwriter/src/backend/postmaster/bgwriter.c
*** HEAD/src/backend/postmaster/bgwriter.c Mon Jan 22 13:08:10 2007
--- pgsql-bgwriter/src/backend/postmaster/bgwriter.c Mon Mar 5 12:40:14 2007
*************** static volatile sig_atomic_t shutdown_re
*** 141,147 ****
/*
* Private state
*/
! static bool am_bg_writer = false;
static bool ckpt_active = false;
--- 141,147 ----
/*
* Private state
*/
! /*static*/ bool am_bg_writer = false; /* ONLY FOR DEBUG */
static bool ckpt_active = false;
*************** BackgroundWriterMain(void)
*** 484,491 ****
*
* We absorb pending requests after each short sleep.
*/
! if ((bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0) ||
! (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0))
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
--- 484,490 ----
*
* We absorb pending requests after each short sleep.
*/
! if (bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0)
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
diff -cpr HEAD/src/backend/storage/buffer/bufmgr.c pgsql-bgwriter/src/backend/storage/buffer/bufmgr.c
*** HEAD/src/backend/storage/buffer/bufmgr.c Mon Feb 5 10:35:58 2007
--- pgsql-bgwriter/src/backend/storage/buffer/bufmgr.c Mon Mar 5 12:41:09 2007
***************
*** 62,72 ****
/* GUC variables */
bool zero_damaged_pages = false;
- double bgwriter_lru_percent = 1.0;
double bgwriter_all_percent = 0.333;
- int bgwriter_lru_maxpages = 5;
int bgwriter_all_maxpages = 5;
long NDirectFileRead; /* some I/O's are direct file access. bypass
* bufmgr */
--- 62,71 ----
/* GUC variables */
bool zero_damaged_pages = false;
double bgwriter_all_percent = 0.333;
int bgwriter_all_maxpages = 5;
+ static int bgwriter_lru_maxpages = 5; /* adjusted automatically */
long NDirectFileRead; /* some I/O's are direct file access. bypass
* bufmgr */
*************** BufferSync(void)
*** 945,956 ****
{
int buf_id;
int num_to_scan;
int absorb_counter;
/*
* Find out where to start the circular scan.
*/
! buf_id = StrategySyncStart();
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
--- 944,956 ----
{
int buf_id;
int num_to_scan;
+ int num_to_clean;
int absorb_counter;
/*
* Find out where to start the circular scan.
*/
! buf_id = StrategySyncStart(&num_to_clean);
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
*************** BgBufferSync(void)
*** 992,997 ****
--- 992,998 ----
static int buf_id1 = 0;
int buf_id2;
int num_to_scan;
+ int num_to_clean;
int num_written;
/* Make sure we can handle the pin inside SyncOneBuffer */
*************** BgBufferSync(void)
*** 1036,1058 ****
* This loop considers only unpinned buffers close to the clock sweep
* point.
*/
! if (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0)
! {
! num_to_scan = (int) ((NBuffers * bgwriter_lru_percent + 99) / 100);
! num_written = 0;
! buf_id2 = StrategySyncStart();
! while (num_to_scan-- > 0)
! {
! if (SyncOneBuffer(buf_id2, true))
! {
! if (++num_written >= bgwriter_lru_maxpages)
! break;
! }
! if (++buf_id2 >= NBuffers)
! buf_id2 = 0;
! }
}
}
--- 1037,1061 ----
* This loop considers only unpinned buffers close to the clock sweep
* point.
*/
! buf_id2 = StrategySyncStart(&num_to_clean);
! if (bgwriter_lru_maxpages > num_to_clean)
! bgwriter_lru_maxpages -= 1;
! else
! bgwriter_lru_maxpages += 1;
! {
! static int counter = 0;
! if ((++counter) % 50 == 0)
! elog(LOG, "bgwriter_lru_maxpages = %d", bgwriter_lru_maxpages);
! }
! num_written = 0;
! while (num_written < bgwriter_lru_maxpages)
! {
! if (SyncOneBuffer(buf_id2, true))
! ++num_written;
! if (++buf_id2 >= NBuffers)
! buf_id2 = 0;
}
}
*************** BgBufferSync(void)
*** 1062,1070 ****
* If skip_pinned is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns true if buffer was written, else false. (This could be in error
! * if FlushBuffers finds the buffer clean after locking it, but we don't
! * care all that much.)
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
--- 1065,1073 ----
* If skip_pinned is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns true if buffer was written or recyclable soon, else false.
! * (This could be in error if FlushBuffers finds the buffer clean after
! * locking it, but we don't care all that much.)
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
*************** SyncOneBuffer(int buf_id, bool skip_pinn
*** 1083,1098 ****
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
! if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
UnlockBufHdr(bufHdr);
return false;
}
! if (skip_pinned &&
! (bufHdr->refcount != 0 || bufHdr->usage_count != 0))
{
UnlockBufHdr(bufHdr);
! return false;
}
/*
--- 1086,1101 ----
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
! if (skip_pinned &&
! (bufHdr->refcount != 0 || bufHdr->usage_count != 0))
{
UnlockBufHdr(bufHdr);
return false;
}
! if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
UnlockBufHdr(bufHdr);
! return skip_pinned; /* We can recycle non dirty buffers soon. */
}
/*
*************** PrintBufferLeakWarning(Buffer buffer)
*** 1272,1279 ****
--- 1275,1284 ----
void
FlushBufferPool(void)
{
+ StrategyShowReports(); /* ONLY FOR DEBUG */
BufferSync();
smgrsync();
+ StrategyShowReports(); /* ONLY FOR DEBUG */
}
*************** FlushBuffer(volatile BufferDesc *buf, SM
*** 1365,1370 ****
--- 1370,1377 ----
if (!StartBufferIO(buf, false))
return;
+ StrategyReportWrite(); /* ONLY FOR DEBUG */
+
/* Setup error traceback support for ereport() */
errcontext.callback = buffer_write_error_callback;
errcontext.arg = (void *) buf;
diff -cpr HEAD/src/backend/storage/buffer/freelist.c pgsql-bgwriter/src/backend/storage/buffer/freelist.c
*** HEAD/src/backend/storage/buffer/freelist.c Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/backend/storage/buffer/freelist.c Mon Mar 5 12:36:35 2007
*************** typedef struct
*** 27,32 ****
--- 27,37 ----
/* Clock sweep hand: index of next buffer to consider grabbing */
int nextVictimBuffer;
+ int numGetBuffer; /* Buffer request count per cycle */
+
+ int writtenByBgWriter; /* ONLY FOR DEBUG */
+ int writtenByBackends; /* ONLY FOR DEBUG */
+
int firstFreeBuffer; /* Head of list of unused buffers */
int lastFreeBuffer; /* Tail of list of unused buffers */
*************** StrategyGetBuffer(void)
*** 63,68 ****
--- 68,75 ----
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+ StrategyControl->numGetBuffer++;
+
/*
* Try to get a buffer from the freelist. Note that the freeNext fields
* are considered to be protected by the BufFreelistLock not the
*************** StrategyFreeBuffer(volatile BufferDesc *
*** 176,191 ****
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(void)
{
int result;
- /*
- * We could probably dispense with the locking here, but just to be safe
- * ...
- */
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
result = StrategyControl->nextVictimBuffer;
LWLockRelease(BufFreelistLock);
return result;
}
--- 183,196 ----
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(int *num_to_clean)
{
int result;
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
result = StrategyControl->nextVictimBuffer;
+ *num_to_clean = StrategyControl->numGetBuffer * 2; /* safety margin */
+ StrategyControl->numGetBuffer = 0;
LWLockRelease(BufFreelistLock);
return result;
}
*************** StrategyInitialize(bool init)
*** 270,276 ****
--- 275,303 ----
/* Initialize the clock sweep pointer */
StrategyControl->nextVictimBuffer = 0;
+
+ StrategyControl->numGetBuffer = 0;
+ StrategyControl->writtenByBgWriter = 0;
+ StrategyControl->writtenByBackends = 0;
}
else
Assert(!init);
}
+
+ void
+ StrategyReportWrite(void)
+ {
+ extern bool am_bg_writer;
+ if (am_bg_writer)
+ StrategyControl->writtenByBgWriter++;
+ else
+ StrategyControl->writtenByBackends++;
+ }
+
+ void
+ StrategyShowReports(void)
+ {
+ elog(LOG, "Write stats : bgwriter/backends = %d/%d",
+ StrategyControl->writtenByBgWriter,
+ StrategyControl->writtenByBackends);
+ }
diff -cpr HEAD/src/backend/utils/misc/guc.c pgsql-bgwriter/src/backend/utils/misc/guc.c
*** HEAD/src/backend/utils/misc/guc.c Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/src/backend/utils/misc/guc.c Mon Mar 5 12:39:42 2007
*************** static struct config_int ConfigureNamesI
*** 1505,1519 ****
},
{
- {"bgwriter_lru_maxpages", PGC_SIGHUP, RESOURCES,
- gettext_noop("Background writer maximum number of LRU pages to flush per round."),
- NULL
- },
- &bgwriter_lru_maxpages,
- 5, 0, 1000, NULL, NULL
- },
-
- {
{"bgwriter_all_maxpages", PGC_SIGHUP, RESOURCES,
gettext_noop("Background writer maximum number of all pages to flush per round."),
NULL
--- 1505,1510 ----
*************** static struct config_real ConfigureNames
*** 1757,1771 ****
},
{
- {"bgwriter_lru_percent", PGC_SIGHUP, RESOURCES,
- gettext_noop("Background writer percentage of LRU buffers to flush per round."),
- NULL
- },
- &bgwriter_lru_percent,
- 1.0, 0.0, 100.0, NULL, NULL
- },
-
- {
{"bgwriter_all_percent", PGC_SIGHUP, RESOURCES,
gettext_noop("Background writer percentage of all buffers to flush per round."),
NULL
--- 1748,1753 ----
diff -cpr HEAD/src/backend/utils/misc/postgresql.conf.sample pgsql-bgwriter/src/backend/utils/misc/postgresql.conf.sample
*** HEAD/src/backend/utils/misc/postgresql.conf.sample Mon Mar 5 09:48:58 2007
--- pgsql-bgwriter/src/backend/utils/misc/postgresql.conf.sample Mon Mar 5 12:39:42 2007
***************
*** 138,146 ****
# - Background writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
! #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round
! #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round
! #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round
--- 138,144 ----
# - Background writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
! #bgwriter_all_percent = 0.333 # 0-100% of buffers scanned/round
#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round
diff -cpr HEAD/src/include/storage/buf_internals.h pgsql-bgwriter/src/include/storage/buf_internals.h
*** HEAD/src/include/storage/buf_internals.h Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/include/storage/buf_internals.h Mon Mar 5 12:36:35 2007
*************** extern long int LocalBufferFlushCount;
*** 186,195 ****
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(void);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
/* buf_table.c */
extern Size BufTableShmemSize(int size);
extern void InitBufTable(int size);
--- 186,198 ----
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(int *num_to_clean);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
+ extern void StrategyReportWrite(void); /* ONLY FOR DEBUG */
+ extern void StrategyShowReports(void); /* ONLY FOR DEBUG */
+
/* buf_table.c */
extern Size BufTableShmemSize(int size);
extern void InitBufTable(int size);
diff -cpr HEAD/src/include/storage/bufmgr.h pgsql-bgwriter/src/include/storage/bufmgr.h
*** HEAD/src/include/storage/bufmgr.h Thu Jan 11 14:20:57 2007
--- pgsql-bgwriter/src/include/storage/bufmgr.h Mon Mar 5 12:39:42 2007
*************** extern DLLIMPORT int NBuffers;
*** 24,32 ****
/* in bufmgr.c */
extern bool zero_damaged_pages;
- extern double bgwriter_lru_percent;
extern double bgwriter_all_percent;
- extern int bgwriter_lru_maxpages;
extern int bgwriter_all_maxpages;
/* in buf_init.c */
--- 24,30 ----
"Jim C. Nasby" <jim@nasby.net> wrote:
* Aggressive freezing
we will use OldestXmin as the threshold to freeze tuples in
dirty pages or pages that have some dead tuples. Otherwise, many UNFROZEN
pages still remain after vacuum and they will cost us in the next
vacuum for preventing XID wraparound.
Another good idea. If it's not too invasive I'd love to see that as a
stand-alone patch so that we know it can get in.
This is a stand-alone patch for aggressive freezing. I'll propose
to use OldestXmin instead of FreezeLimit as the freeze threshold
in the circumstances below:
- The page is already dirty.
- There is another tuple to be frozen in the same page.
- There are other dead tuples in the same page.
Freezing is delayed until the heap vacuum phase.
In any case we create new dirty buffers and/or write WAL at that point, so
the additional freezing is almost free. By keeping the number of unfrozen
tuples low, we can reduce the cost of the next XID wraparound vacuum and
piggyback multiple freezing operations on the same page.
The following test shows the difference in the number of unfrozen tuples
with and without the patch. Formerly, recently inserted tuples were not
frozen immediately (1). Even if there were some dead tuples in the same
page, unfrozen live tuples were not frozen (2). With the patch, the number
after the first vacuum was already low (3), because the pages containing
recently inserted tuples were dirty and not yet written out, so aggressive
freezing was performed on them. Moreover, if there are dead tuples in a
page, other live tuples in the same page are also frozen (4).
# CREATE CAST (xid AS integer) WITHOUT FUNCTION AS IMPLICIT;
[without patch]
$ ./pgbench -i -s1 (including vacuum)
# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000 (1)
# UPDATE accounts SET aid = aid WHERE aid % 20 = 0; => UPDATE 5000
# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000
# VACUUM accounts;
# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000 (2)
[with patch]
$ ./pgbench -i -s1 (including vacuum)
# SELECT count(*) FROM accounts WHERE xmin > 2; => 2135 (3)
# UPDATE accounts SET aid = aid WHERE aid % 20 = 0; => UPDATE 5000
# SELECT count(*) FROM accounts WHERE xmin > 2; => 7028
# VACUUM accounts;
# SELECT count(*) FROM accounts WHERE xmin > 2; => 0 (4)
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Attachments:
aggressive_freeze.patch (application/octet-stream)
diff -cpr HEAD/src/backend/commands/vacuumlazy.c aggressive_freeze/src/backend/commands/vacuumlazy.c
*** HEAD/src/backend/commands/vacuumlazy.c Mon Feb 26 09:46:04 2007
--- aggressive_freeze/src/backend/commands/vacuumlazy.c Mon Mar 5 19:11:09 2007
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 220,226 ****
{
BlockNumber nblocks,
blkno;
- HeapTupleData tuple;
char *relname;
BlockNumber empty_pages,
vacuumed_pages;
--- 220,225 ----
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 260,268 ****
maxoff;
bool tupgone,
hastup;
! int prev_dead_count;
! OffsetNumber frozen[MaxOffsetNumber];
! int nfrozen;
vacuum_delay_point();
--- 259,267 ----
maxoff;
bool tupgone,
hastup;
! int ndead,
! nlive;
! OffsetNumber live_tuples[MaxHeapTuplesPerPage];
vacuum_delay_point();
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 342,356 ****
continue;
}
- nfrozen = 0;
hastup = false;
! prev_dead_count = vacrelstats->num_dead_tuples;
maxoff = PageGetMaxOffsetNumber(page);
for (offnum = FirstOffsetNumber;
offnum <= maxoff;
offnum = OffsetNumberNext(offnum))
{
! ItemId itemid;
itemid = PageGetItemId(page, offnum);
--- 341,355 ----
continue;
}
hastup = false;
! ndead = nlive = 0;
maxoff = PageGetMaxOffsetNumber(page);
for (offnum = FirstOffsetNumber;
offnum <= maxoff;
offnum = OffsetNumberNext(offnum))
{
! ItemId itemid;
! HeapTupleData tuple;
itemid = PageGetItemId(page, offnum);
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 401,406 ****
--- 400,406 ----
{
lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
tups_vacuumed += 1;
+ ndead += 1;
}
else
{
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 408,458 ****
hastup = true;
/*
! * Each non-removable tuple must be checked to see if it
! * needs freezing. If we already froze anything, then
! * we've already switched the buffer lock to exclusive.
*/
! if (heap_freeze_tuple(tuple.t_data, FreezeLimit,
! (nfrozen > 0) ? InvalidBuffer : buf))
! frozen[nfrozen++] = offnum;
}
} /* scan along page */
! /*
! * If we froze any tuples, mark the buffer dirty, and write a WAL
! * record recording the changes. We must log the changes to be
! * crash-safe against future truncation of CLOG.
! */
! if (nfrozen > 0)
{
! MarkBufferDirty(buf);
! /* no XLOG for temp tables, though */
! if (!onerel->rd_istemp)
{
! XLogRecPtr recptr;
! recptr = log_heap_freeze(onerel, buf, FreezeLimit,
! frozen, nfrozen);
! PageSetLSN(page, recptr);
! PageSetTLI(page, ThisTimeLineID);
}
}
!
! /*
! * If there are no indexes then we can vacuum the page right now
! * instead of doing a second scan.
! */
! if (nindexes == 0 &&
! vacrelstats->num_dead_tuples > 0)
{
! /* Trade in buffer share lock for super-exclusive lock */
! LockBuffer(buf, BUFFER_LOCK_UNLOCK);
! LockBufferForCleanup(buf);
! /* Remove tuples from heap */
! lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats);
! /* Forget the now-vacuumed tuples, and press on */
! vacrelstats->num_dead_tuples = 0;
! vacuumed_pages++;
}
/*
--- 408,497 ----
hastup = true;
/*
! * We don't freeze tuples here. If there are some dead tuples,
! * we delay freezing until lazy_vacuum_heap in order to avoid
! * making dirty buffers only for freezing. If no dead tuples,
! * we freeze them just below.
*/
! live_tuples[nlive++] = offnum;
}
} /* scan along page */
! if (ndead > 0)
{
! /*
! * If there are no indexes then we can vacuum the page right now
! * instead of doing a second scan.
! */
! if (nindexes == 0)
{
! Assert(vacrelstats->num_dead_tuples == ndead);
! /* Trade in buffer share lock for super-exclusive lock */
! LockBuffer(buf, BUFFER_LOCK_UNLOCK);
! LockBufferForCleanup(buf);
! /* Remove tuples from heap */
! lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats);
! /* Forget the now-vacuumed tuples, and press on */
! vacrelstats->num_dead_tuples = ndead = 0;
! vacuumed_pages++;
}
}
! else if (nlive > 0)
{
! int nfrozen;
! OffsetNumber frozen[MaxHeapTuplesPerPage];
! TransactionId limit;
!
! /* If the page is already dirty, we freeze tuples aggressively. */
! limit = (BufferIsDirty(buf) ? OldestXmin : FreezeLimit);
!
! nfrozen = 0;
! for (i = 0; i < nlive; i++)
! {
! ItemId itemid;
! HeapTupleHeader tuple;
!
! itemid = PageGetItemId(page, live_tuples[i]);
! tuple = (HeapTupleHeader) PageGetItem(page, itemid);
!
! /*
! * Each non-removable tuple must be checked to see if it
! * needs freezing. If we already froze anything, then
! * we've already switched the buffer lock to exclusive.
! */
! if (heap_freeze_tuple(tuple, limit,
! nfrozen > 0 ? InvalidBuffer : buf))
! {
! /*
! * If there are any tuples to be frozen in this page,
! * we will freeze leftover tuples aggressively. It
! * requires no additional cost.
! */
! limit = OldestXmin;
! frozen[nfrozen++] = live_tuples[i];
! }
! }
!
! /*
! * If we froze any tuples, mark the buffer dirty, and write a WAL
! * record recording the changes. We must log the changes to be
! * crash-safe against future truncation of CLOG.
! */
! if (nfrozen > 0)
! {
! MarkBufferDirty(buf);
! /* no XLOG for temp tables, though */
! if (!onerel->rd_istemp)
! {
! XLogRecPtr recptr;
!
! recptr = log_heap_freeze(onerel, buf, FreezeLimit,
! frozen, nfrozen);
! PageSetLSN(page, recptr);
! PageSetTLI(page, ThisTimeLineID);
! }
! }
}
/*
*************** lazy_scan_heap(Relation onerel, LVRelSta
*** 462,468 ****
* page, so remember its free space as-is. (This path will always be
* taken if there are no indexes.)
*/
! if (vacrelstats->num_dead_tuples == prev_dead_count)
{
lazy_record_free_space(vacrelstats, blkno,
PageGetFreeSpace(page));
--- 501,507 ----
* page, so remember its free space as-is. (This path will always be
* taken if there are no indexes.)
*/
! if (ndead == 0)
{
lazy_record_free_space(vacrelstats, blkno,
PageGetFreeSpace(page));
*************** lazy_vacuum_heap(Relation onerel, LVRelS
*** 571,577 ****
}
/*
! * lazy_vacuum_page() -- free dead tuples on a page
* and repair its fragmentation.
*
* Caller must hold pin and lock on the buffer.
--- 610,616 ----
}
/*
! * lazy_vacuum_page() -- free dead tuples and freeze live tuples on a page
* and repair its fragmentation.
*
* Caller must hold pin and lock on the buffer.
*************** lazy_vacuum_page(Relation onerel, BlockN
*** 587,607 ****
OffsetNumber unused[MaxOffsetNumber];
int uncnt;
Page page = BufferGetPage(buffer);
! ItemId itemid;
START_CRIT_SECTION();
! for (; tupindex < vacrelstats->num_dead_tuples; tupindex++)
{
! BlockNumber tblk;
! OffsetNumber toff;
! tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
! if (tblk != blkno)
! break; /* past end of tuples for this block */
! toff = ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[tupindex]);
! itemid = PageGetItemId(page, toff);
! itemid->lp_flags &= ~LP_USED;
}
uncnt = PageRepairFragmentation(page, unused);
--- 626,683 ----
OffsetNumber unused[MaxOffsetNumber];
int uncnt;
Page page = BufferGetPage(buffer);
! OffsetNumber offdead;
! OffsetNumber offnum,
! maxoff;
! OffsetNumber frozen[MaxHeapTuplesPerPage];
! int nfrozen;
!
! Assert(tupindex < vacrelstats->num_dead_tuples);
! Assert(blkno == ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]));
!
! offdead = ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[tupindex]);
! maxoff = PageGetMaxOffsetNumber(page);
! nfrozen = 0;
START_CRIT_SECTION();
! for (offnum = FirstOffsetNumber;
! offnum <= maxoff;
! offnum = OffsetNumberNext(offnum))
{
! ItemId itemid = PageGetItemId(page, offnum);
! if (offnum == offdead)
! {
! itemid->lp_flags &= ~LP_USED;
!
! tupindex++;
! if (tupindex < vacrelstats->num_dead_tuples &&
! blkno == ItemPointerGetBlockNumber(
! &vacrelstats->dead_tuples[tupindex]))
! {
! offdead = ItemPointerGetOffsetNumber(
! &vacrelstats->dead_tuples[tupindex]);
! }
! else
! {
! /* past end of dead tuples for this block */
! offdead = InvalidOffsetNumber;
! }
! }
! else if (ItemIdIsUsed(itemid))
! {
! HeapTupleHeader tuple;
!
! tuple = (HeapTupleHeader) PageGetItem(page, itemid);
!
! /*
! * Do an aggressive freeze. We use OldestXmin as the freeze
! * threshold instead of FreezeLimit here.
! */
! if (heap_freeze_tuple(tuple, OldestXmin, InvalidBuffer))
! frozen[nfrozen++] = offnum;
! }
}
uncnt = PageRepairFragmentation(page, unused);
*************** lazy_vacuum_page(Relation onerel, BlockN
*** 613,618 ****
--- 689,696 ----
{
XLogRecPtr recptr;
+ if (nfrozen > 0)
+ log_heap_freeze(onerel, buffer, OldestXmin, frozen, nfrozen);
recptr = log_heap_clean(onerel, buffer, unused, uncnt);
PageSetLSN(page, recptr);
PageSetTLI(page, ThisTimeLineID);
diff -cpr HEAD/src/backend/storage/buffer/bufmgr.c aggressive_freeze/src/backend/storage/buffer/bufmgr.c
*** HEAD/src/backend/storage/buffer/bufmgr.c Mon Feb 5 10:35:59 2007
--- aggressive_freeze/src/backend/storage/buffer/bufmgr.c Mon Mar 5 19:11:09 2007
*************** buffer_write_error_callback(void *arg)
*** 2149,2151 ****
--- 2149,2163 ----
bufHdr->tag.rnode.dbNode,
bufHdr->tag.rnode.relNode);
}
+
+ /*
+ * BufferIsDirty -- retrieve dirty status of the buffer
+ */
+ bool
+ BufferIsDirty(Buffer buffer)
+ {
+ volatile BufferDesc *bufHdr;
+
+ bufHdr = &BufferDescriptors[buffer - 1];
+ return (bufHdr->flags & BM_DIRTY) != 0;
+ }
diff -cpr HEAD/src/include/storage/bufmgr.h aggressive_freeze/src/include/storage/bufmgr.h
*** HEAD/src/include/storage/bufmgr.h Thu Jan 11 14:20:57 2007
--- aggressive_freeze/src/include/storage/bufmgr.h Mon Mar 5 19:11:09 2007
*************** extern Size BufferShmemSize(void);
*** 141,146 ****
--- 141,147 ----
extern RelFileNode BufferGetFileNode(Buffer buffer);
extern void SetBufferCommitInfoNeedsSave(Buffer buffer);
+ extern bool BufferIsDirty(Buffer buffer);
extern void UnlockBuffers(void);
extern void LockBuffer(Buffer buffer, int mode);
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
This is a stand-alone patch for aggressive freezing. I'll propose
to use OldestXmin instead of FreezeLimit as the freeze threshold
in the circumstances below:
I think it's a really bad idea to freeze that aggressively under any
circumstances except being told to (ie, VACUUM FREEZE). When you
freeze, you lose history information that might be needed later --- for
forensic purposes if nothing else. You need to show a fairly amazing
performance gain to justify that, and I don't think you can.
regards, tom lane
Tom Lane wrote:
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
This is a stand-alone patch for aggressive freezing. I'll propose
to use OldestXmin instead of FreezeLimit as the freeze threshold
in the circumstances below:
I think it's a really bad idea to freeze that aggressively under any
circumstances except being told to (ie, VACUUM FREEZE). When you
freeze, you lose history information that might be needed later --- for
forensic purposes if nothing else. You need to show a fairly amazing
performance gain to justify that, and I don't think you can.
There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit
would be calculated as
GetOldestXmin() - vacuum_freeze_limit
The default for vacuum_freeze_limit would be MaxTransactionId/2, just
as it is now.
greetings, Florian Pflug
Florian G. Pflug wrote:
There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit
would be calculated as
GetOldestXmin() - vacuum_freeze_limit
We already have that. It's called vacuum_freeze_min_age, and the default
is 100 million transactions.
IIRC we added it late in the 8.2 release cycle when we changed the clog
truncation point to depend on freeze limit.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas wrote:
Florian G. Pflug wrote:
There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit
would be calculated as
GetOldestXmin() - vacuum_freeze_limit
We already have that. It's called vacuum_freeze_min_age, and the default
is 100 million transactions.
IIRC we added it late in the 8.2 release cycle when we changed the clog
truncation point to depend on freeze limit.
Ok, that explains why I didn't find it when I checked the source - I
checked the 8.1 sources by accident ;-)
Anyway, thanks for pointing that out ;-)
greetings, Florian Pflug
Tom Lane <tgl@sss.pgh.pa.us> wrote:
This is a stand-alone patch for aggressive freezing. I'll propose
to use OldestXmin instead of FreezeLimit as the freeze threshold
in the circumstances below:
I think it's a really bad idea to freeze that aggressively under any
circumstances except being told to (ie, VACUUM FREEZE). When you
freeze, you lose history information that might be needed later --- for
forensic purposes if nothing else.
I don't think we can supply such historical-database functionality here,
because we can guarantee it only for INSERTed tuples, even if we pay
attention. We've already enabled autovacuum by default, so we cannot
predict when the next vacuum starts, and recently UPDATEd and DELETEd
tuples are removed at random times. Furthermore, HOT will also accelerate
the removal of expired tuples. Instead, we'd better use WAL or something
like audit logs for keeping history information.
You need to show a fairly amazing
performance gain to justify that, and I don't think you can.
Thank you for your advice. I found that aggressive freezing of
already-dirty pages made things worse, but that it was useful for pages
that contain other tuples being frozen or dead tuples.
I did an acceleration test for XID wraparound vacuum.
I initialized the database with
$ ./pgbench -i -s100
# VACUUM FREEZE accounts;
# SET vacuum_freeze_min_age = 6;
and repeated the following queries.
CHECKPOINT;
UPDATE accounts SET aid=aid WHERE random() < 0.005;
SELECT count(*) FROM accounts WHERE xmin > 2;
VACUUM accounts;
After the freeze threshold reached vacuum_freeze_min_age (run >= 3),
VACUUM became faster with aggressive freezing. I think this came
from piggybacking multiple freezing operations -- the number of
unfrozen tuples was kept lower.
* Durations of VACUUM [sec]
run| HEAD | freeze
---+--------+--------
1 | 5.8 | 8.2
2 | 5.2 | 9.0
3 | 118.2 | 102.0
4 | 122.4 | 99.8
5 | 121.0 | 79.8
6 | 122.1 | 77.9
7 | 123.8 | 115.5
---+--------+--------
avg| 121.5 | 95.0 (runs 3-7)
* Numbers of unfrozen tuples
run| HEAD | freeze
---+--------+--------
1 | 50081 | 50434
2 | 99836 | 100072
3 | 100047 | 86484
4 | 100061 | 86524
5 | 99766 | 87046
6 | 99854 | 86824
7 | 99502 | 86595
---+--------+--------
avg| 99846 | 86695 (runs 3-7)
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"ITAGAKI Takahiro" <itagaki.takahiro@oss.ntt.co.jp> writes:
I don't think we can supply such historical-database functionality here,
because we can guarantee it only for INSERTed tuples, even if we pay
attention. We've already enabled autovacuum by default, so we cannot
predict when the next vacuum starts, and recently UPDATEd and DELETEd
tuples are removed at random times. Furthermore, HOT will also accelerate
the removal of expired tuples. Instead, we'd better use WAL or something
like audit logs for keeping history information.
Well comparing the data to WAL is precisely the kind of debugging that I think
Tom is concerned with.
The hoped-for gain here is that vacuum finds fewer pages with tuples that
exceed vacuum_freeze_min_age? That seems useful, though vacuum is still going
to have to read every page, and I suspect most of the writes pertain to dead
tuples, not freezing tuples.
This strikes me as something that will be more useful once we have the DSM
especially if it ends up including a frozen map. Once we have the DSM vacuum
will no longer be visiting every page, so it will be much easier for pages to
get quite old and only be caught by a vacuum freeze. The less i/o that vacuum
freeze has to do, the better. If we get a freeze map then aggressive freezing
would help keep pages out of that map so they never need to be vacuumed just
to freeze the tuples in them.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
Tom Lane <tgl@sss.pgh.pa.us> wrote:
I think it's a really bad idea to freeze that aggressively under any
circumstances except being told to (ie, VACUUM FREEZE). When you
freeze, you lose history information that might be needed later --- for
forensic purposes if nothing else.
I don't think we can supply such historical-database functionality here,
because we can guarantee it only for INSERTed tuples, even if we pay
attention. We've already enabled autovacuum by default, so we cannot
predict when the next vacuum starts, and recently UPDATEd and DELETEd
tuples are removed at random times.
I said nothing about expired tuples. The point of not freezing is to
preserve information about the insertion time of live tuples. And your
test case is unconvincing, because no sane DBA would run with such a
small value of vacuum_freeze_min_age.
regards, tom lane
Gregory Stark <stark@enterprisedb.com> wrote:
The hoped for gain here is that vacuum finds fewer pages with tuples that
exceed vacuum_freeze_min_age? That seems useful though vacuum is still going
to have to read every page and I suspect most of the writes pertain to dead
tuples, not freezing tuples.
Yes. In particular cases, VACUUM dirties pages only to freeze tuples that
have exceeded the threshold, and I think we can reduce those writes by
keeping the number of unfrozen tuples low.
There are three additional costs in FREEZE:
1. CPU cost for changing the xids of target tuples.
2. Write cost for the WAL entries of FREEZE (log_heap_freeze).
3. Write cost for newly created dirty pages.
I did additional freezing in the following two cases. We will already have
created dirty buffers and WAL entries for the required operations there, so
I think the additional costs of 2 and 3 are negligible, though 1 still
affects us.
| - There is another tuple to be frozen in the same page.
| - There are other dead tuples in the same page.
| Freezing is delayed until the heap vacuum phase.
This strikes me as something that will be more useful once we have the DSM
especially if it ends up including a frozen map. Once we have the DSM vacuum
will no longer be visiting every page, so it will be much easier for pages to
get quite old and only be caught by a vacuum freeze. The less i/o that vacuum
freeze has to do the better. If we get a freeze map then agressive freezing
would help keep pages out of that map so they never need to be vacuumed just
to freeze the tuples in them.
Yeah, I was planning the 2 bits/page DSM exactly for that purpose. One of
the bits means to-be-vacuumed and the other means to-be-frozen. It helps us
avoid a full scan of the pages for XID wraparound vacuums, but then the DSM
must be highly reliable and never lose any information. I made an attempt
to accomplish that in the DSM, but I understand I need to demonstrate to
you that it works as designed.
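To illustrate, here is a minimal sketch of what such a map could look like.
The names and layout are hypothetical, for explanation only -- they are not
taken from the actual DSM patch:

/*
 * Hypothetical 2 bits/page dead space map: bit 0 of each entry means
 * to-be-vacuumed, bit 1 means to-be-frozen; four pages per byte.
 */
#define DSM_VACUUM 0x01
#define DSM_FREEZE 0x02

typedef struct DeadSpaceMap
{
    unsigned char *bits;        /* 2 bits per heap page */
    unsigned int   npages;
} DeadSpaceMap;

static void
dsm_set_bits(DeadSpaceMap *dsm, unsigned int blkno, unsigned char flags)
{
    dsm->bits[blkno / 4] |= (flags & 0x03) << ((blkno % 4) * 2);
}

static unsigned char
dsm_get_bits(DeadSpaceMap *dsm, unsigned int blkno)
{
    return (dsm->bits[blkno / 4] >> ((blkno % 4) * 2)) & 0x03;
}

/*
 * A normal vacuum would visit only pages with DSM_VACUUM set; an XID
 * wraparound vacuum would visit pages with either bit set.
 */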
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane <tgl@sss.pgh.pa.us> wrote:
I said nothing about expired tuples. The point of not freezing is to
preserve information about the insertion time of live tuples.
I don't know what good it will do -- for debugging?
Why don't you use CURRENT_TIMESTAMP?
And your
test case is unconvincing, because no sane DBA would run with such a
small value of vacuum_freeze_min_age.
I intended that value for an accelerated test.
In normal use the penalties of freezing are spread out over the long term,
but we surely still suffer them bit by bit.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
Tom Lane <tgl@sss.pgh.pa.us> wrote:
I said nothing about expired tuples. The point of not freezing is to
preserve information about the insertion time of live tuples.
I don't know what good it will do -- for debugging?
Exactly. As an example, I've been chasing offline a report from Merlin
Moncure about duplicate entries in a unique index; I still don't know
what exactly is going on there, but the availability of knowledge about
which transactions inserted which entries has been really helpful. If
we had a system designed to freeze tuples as soon as possible, that info
would have been gone forever pretty soon after the problem happened.
I don't say that this behavior can never be acceptable, but you need
much more than a marginal performance improvement to convince me that
it's worth the loss of forensic information.
regards, tom lane
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.
---------------------------------------------------------------------------
ITAGAKI Takahiro wrote:
Sorry, I made a mistake in the patch I sent.
This is a fixed version.
I wrote:
I'm working on making the bgwriter write almost all dirty pages. This is
the proposal for it, using automatic adjustment of bgwriter_lru_maxpages.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
[ Attachment, skipping... ]
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
ITAGAKI Takahiro wrote:
"Jim C. Nasby" <jim@nasby.net> wrote:
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples (or how much space the dead tuples account for) when it
writes a page out and adjust the DSM at that time.
Yeah, I feel it is worth optimizing, too. One question is, how do we treat
dirty pages written by backends, not by the bgwriter? If we want to add
some work to the bgwriter, do we also need to make the bgwriter write
almost all dirty pages?
IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
I'm working on making the bgwriter write almost all dirty pages. This is
the proposal for it, using automatic adjustment of bgwriter_lru_maxpages.
The bgwriter_lru_maxpages value will be adjusted to equal the number of
calls to StrategyGetBuffer() per cycle, with some safety margin (x2 at
present). The counter is incremented per call and reset to zero at
StrategySyncStart().
This patch alone is not so useful, except for hiding hard-to-tune
parameters from users. However, it would be a first step toward allowing
the bgwriter to do some work before writing dirty buffers:
- [DSM] Pick out pages worth vacuuming and register them in the DSM.
- [HOT] Do a per-page vacuum for HOT-updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD-updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
I tested the attached patch on pgbench -s5 (80MB) with shared_buffers=32MB.
I got the expected result, shown below. Over 75% of buffers are written by
the bgwriter. In addition, the automatically adjusted bgwriter_lru_maxpages
values were much higher than the default value (5). It shows that the most
suitable values depend greatly on the workload.

benchmark   | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
default     | 300tps     | 100%      | 77.5%       | 120 pages/cycle
with sleep  | 150tps     | 50%       | 98.6%       | 70 pages/cycle

I hope that this patch will be a first step toward an intelligent bgwriter.
Comments welcome.
The general approach looks good to me. I'm queuing some benchmarks to
see how effective it is with a fairly constant workload.
This change in bgwriter.c looks fishy:
*************** BackgroundWriterMain(void)
*** 484,491 ****
*
* We absorb pending requests after each short sleep.
*/
! if ((bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0) ||
! (bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0))
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
--- 484,490 ----
*
* We absorb pending requests after each short sleep.
*/
! if (bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0)
udelay = BgWriterDelay * 1000L;
else if (XLogArchiveTimeout > 0)
udelay = 1000000L; /* One second */
Doesn't that mean that bgwriter only runs every 1 or 10 seconds,
regardless of bgwriter_delay, if bgwriter_all_* parameters are not set?
The algorithm used to update bgwriter_lru_maxpages needs some thought.
Currently, it's decreased by one when less clean pages were required by
backends than expected, and increased otherwise. Exponential smoothing
or something similar seems like the natural choice to me.
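For example, something along these lines -- an untested sketch, with made-up
names -- would follow the actual demand more smoothly than stepping by one
each round:

/*
 * Untested sketch: track the clean-buffer demand with exponential
 * smoothing instead of adjusting bgwriter_lru_maxpages by +/-1.
 * SMOOTHING_SAMPLES and the variable names are illustrative only.
 */
#define SMOOTHING_SAMPLES 16

static float smoothed_demand = 0.0;

static int
adjust_lru_maxpages(int num_to_clean)
{
    /* move a fraction of the way toward the newest observation */
    smoothed_demand +=
        ((float) num_to_clean - smoothed_demand) / SMOOTHING_SAMPLES;

    /* round up so that transient spikes are still covered */
    return (int) smoothed_demand + 1;
}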
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Attached are two patches that try to recast the ideas of Itagaki
Takahiro's auto bgwriter_lru_maxpages patch in the direction I think this
code needs to move. Epic-length commentary follows.
The original code came from before there was a pg_stat_bgwriter. The
first patch (buf-alloc-stats) takes the two most interesting pieces of
data the original patch collected, the number of buffers allocated
recently and the number that the clients wrote out, and ties all that into
the new stats structure. With this patch applied, you can get a feel for
things like churn/turnover in the buffer pool that were very hard to
quantify before. Also, it makes it easy to measure how well your
background writer is doing at writing buffers so the clients don't have
to. Applying this would complete one of my personal goals for the 8.3
release, which was having stats to track every type of buffer write.
I split this out because I think it's very useful to have regardless of
whether the automatic tuning portion is accepted, and I think these
smaller patches make the review easier. The main thing I would recommend
someone check is how am_bg_writer is (mis?)used here. I spliced some of
the debugging-only code from the original patch, and I can't tell if the
result is a robust enough approach to solving the problem of having every
client indirectly report their activity to the background writer. Other
than that, I think this code is ready for review and potentially
committing.
The second patch (limit-lru) adds on top of that a constraint of the LRU
writer so that it doesn't do any more work than it has to. Note that I
left verbose debugging code in here because I'm much less confident this
patch is complete.
It predicts upcoming buffer allocations using a 16-period weighted moving
average of recent activity, which you can think of as the last 3.2 seconds
at the default interval. After testing a few systems, that seemed a decent
compromise of smoothing in both directions. I found the 2X overallocation
fudge factor of the original patch way too aggressive, and just pick the
larger of the most recent allocation amount or the smoothed value. The
main thing that throws off the allocation estimation is when you hit a
checkpoint, which can give a big spike after the background writer returns
to BgBufferSync and notices all the buffers that were allocated during the
checkpoint write; the code then tries to find more buffers it can recycle
than it needs to. Since the checkpoint itself normally leaves a large
wake of reusable buffers behind it, I didn't find this to be a serious
problem.
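As a worked example of that smoothing rule, here is a self-contained toy
program; the workload numbers are arbitrary, but the 15/16 integer
arithmetic is the same as in the patch:

#include <stdio.h>

/*
 * Toy demonstration of the 16-period weighted moving average: a
 * one-round allocation spike decays gradually (roughly 1/16 per round)
 * while a steady rate is tracked closely.
 */
int
main(void)
{
    int smoothed_alloc = 100;   /* steady state: 100 allocations/round */
    int round;

    for (round = 1; round <= 10; round++)
    {
        int recent_alloc = (round == 1) ? 500 : 100;  /* spike, then steady */

        smoothed_alloc = smoothed_alloc * 15 / 16 + recent_alloc / 16;
        printf("round %2d: recent=%3d smoothed=%3d\n",
               round, recent_alloc, smoothed_alloc);
    }
    return 0;
}

Note that with integer division the smoothed value eventually settles a bit
below the true steady rate (96 rather than 100 here), which is one more
reason to take the larger of the smoothed and most recent values.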
There's another communication issue here, which is that SyncOneBuffer
needs to return more information about the buffer than it currently does
once it gets it locked. The background writer needs to know more than
just if it was written to tune itself. The original patch used a clever
trick for this which worked but I found confusing. I happen to have a
bunch of other background writer tuning code I'm working on, and I had to
come up with a more robust way to communicate buffer internals back via
this channel. I used that code here, it's a bitmask setup similar to how
flags like BM_DIRTY are used. It's overkill for solving this particular
problem, but I think the interface is clean and it helps support future
enhancements in intelligent background writing.
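For reference, decoding the return code looks like this -- a self-contained
sketch where the mask values are the ones defined in the patch but the
example buffer state is invented:

#include <stdio.h>

#define BUF_WRITTEN     0x80
#define BUF_CLEAN       0x40
#define BUF_REUSABLE    0x20
#define BUF_USAGE_COUNT 0x1F

int
main(void)
{
    /* e.g. a dirty, unpinned buffer with usage_count 0 that got written */
    int buffer_state = BUF_WRITTEN | BUF_REUSABLE;

    /* the >= test works because BUF_WRITTEN is the highest bit in the set */
    if (buffer_state >= BUF_WRITTEN)
        printf("written, usage_count=%d\n", buffer_state & BUF_USAGE_COUNT);
    else if (buffer_state & BUF_REUSABLE)
        printf("clean and reusable without a write\n");
    return 0;
}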
Now we get to the controversial part. The original patch removed the
bgwriter_lru_maxpages parameter and updated the documentation accordingly.
I didn't do that here. The reason is that after playing around in this
area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
to be able to handle that way. I describe this patch as enforcing a
constraint instead; it allows you to set the LRU parameters much higher
than was reasonable before without having to be as concerned about the LRU
writer wasting resources.
I already brought up some issues in this area on -hackers (
http://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my
work hasn't advanced as fast as I'd hoped. I wanted to submit what I've
finished anyway because I think any approach here is going to have to cope
with the issues addressed in these two patches, and I'm happy now with how
they're solved here. It's only a one-line delete to disable the LRU
limiting behavior of the second patch, at which point it's strictly
internals code with no expected functional impact that alternate
approaches might be built on.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
Attachments:
limit-lru.patch (text/plain)
Index: src/backend/storage/buffer/bufmgr.c
===================================================================
RCS file: /d3/pgsql/cvs/pgsql-local/src/backend/storage/buffer/bufmgr.c,v
retrieving revision 1.1.1.1
diff -c -r1.1.1.1 bufmgr.c
*** src/backend/storage/buffer/bufmgr.c 7 May 2007 01:48:49 -0000 1.1.1.1
--- src/backend/storage/buffer/bufmgr.c 12 May 2007 22:26:56 -0000
***************
*** 67,72 ****
--- 67,79 ----
/* interval for calling AbsorbFsyncRequests in BufferSync */
#define WRITES_PER_ABSORB 1000
+ /* Return codes describing what SyncOneBuffer found out and did with the
+ * buffer it processed. The way code here tests for whether a write
+ * was done depends on BUF_WRITTEN being the highest bit value in this set. */
+ #define BUF_WRITTEN 0x80
+ #define BUF_CLEAN 0x40
+ #define BUF_REUSABLE 0x20
+ #define BUF_USAGE_COUNT 0x1F
/* GUC variables */
bool zero_damaged_pages = false;
***************
*** 101,107 ****
static void PinBuffer_Locked(volatile BufferDesc *buf);
static void UnpinBuffer(volatile BufferDesc *buf,
bool fixOwner, bool normalAccess);
! static bool SyncOneBuffer(int buf_id, bool skip_pinned);
static void WaitIO(volatile BufferDesc *buf);
static bool StartBufferIO(volatile BufferDesc *buf, bool forInput);
static void TerminateBufferIO(volatile BufferDesc *buf, bool clear_dirty,
--- 108,114 ----
static void PinBuffer_Locked(volatile BufferDesc *buf);
static void UnpinBuffer(volatile BufferDesc *buf,
bool fixOwner, bool normalAccess);
! static int SyncOneBuffer(int buf_id, bool skip_recently_used);
static void WaitIO(volatile BufferDesc *buf);
static bool StartBufferIO(volatile BufferDesc *buf, bool forInput);
static void TerminateBufferIO(volatile BufferDesc *buf, bool clear_dirty,
***************
*** 1007,1013 ****
absorb_counter = WRITES_PER_ABSORB;
while (num_to_scan-- > 0)
{
! if (SyncOneBuffer(buf_id, false))
{
BgWriterStats.m_buf_written_checkpoints++;
--- 1014,1020 ----
absorb_counter = WRITES_PER_ABSORB;
while (num_to_scan-- > 0)
{
! if (SyncOneBuffer(buf_id, false)>=BUF_WRITTEN)
{
BgWriterStats.m_buf_written_checkpoints++;
***************
*** 1040,1047 ****
int buf_id2;
int num_to_scan;
int num_written;
! int recent_alloc;
int num_client_writes;
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
--- 1047,1063 ----
int buf_id2;
int num_to_scan;
int num_written;
!
! /* Statistics returned by the freelist strategy code */
int num_client_writes;
+ int recent_alloc;
+
+
+ /* Used to estimate the upcoming LRU eviction activity */
+ static int smoothed_alloc = 0;
+ int upcoming_alloc_estimate;
+ int reusable_buffers;
+ int buffer_state;
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
***************
*** 1073,1079 ****
{
if (++buf_id1 >= NBuffers)
buf_id1 = 0;
! if (SyncOneBuffer(buf_id1, false))
{
if (++num_written >= bgwriter_all_maxpages)
{
--- 1089,1095 ----
{
if (++buf_id1 >= NBuffers)
buf_id1 = 0;
! if (SyncOneBuffer(buf_id1, false)>=BUF_WRITTEN)
{
if (++num_written >= bgwriter_all_maxpages)
{
***************
*** 1092,1097 ****
--- 1108,1142 ----
BgWriterStats.m_buf_alloc+=recent_alloc;
BgWriterStats.m_buf_written_client+=num_client_writes;
+ /* Estimate number of buffers to write based on a smoothed weighted
+ * average of previous and recent buffer allocations */
+ smoothed_alloc = smoothed_alloc * 15 / 16 + recent_alloc / 16;
+
+ /* Expect we will soon need either the smoothed amount or the recent allocation amount,
+ * whichever is larger */
+ upcoming_alloc_estimate = smoothed_alloc;
+ if (recent_alloc > upcoming_alloc_estimate)
+ upcoming_alloc_estimate = recent_alloc;
+
+ /**** DEBUG show the smoothing in action ***/
+ if (1)
+ {
+ static int count = 0;
+ static int alloc[10];
+ static int smoothed[10];
+ alloc[count % 10]=recent_alloc;
+ smoothed[count % 10]=smoothed_alloc;
+ if (++count % 10 == 9)
+ {
+ elog(LOG,"alloc = %d %d %d %d %d %d %d %d %d %d",
+ alloc[0],alloc[1],alloc[2],alloc[3],alloc[4],
+ alloc[5],alloc[6],alloc[7],alloc[8],alloc[9]);
+ elog(LOG,"smoothed = %d %d %d %d %d %d %d %d %d %d",
+ smoothed[0],smoothed[1],smoothed[2],smoothed[3],smoothed[4],
+ smoothed[5],smoothed[6],smoothed[7],smoothed[8],smoothed[9]);
+ }
+ }
+
/*
* This loop considers only unpinned buffers close to the clock sweep
* point.
***************
*** 1100,1139 ****
{
num_to_scan = (int) ((NBuffers * bgwriter_lru_percent + 99) / 100);
num_written = 0;
!
while (num_to_scan-- > 0)
{
! if (SyncOneBuffer(buf_id2, true))
{
if (++num_written >= bgwriter_lru_maxpages)
{
BgWriterStats.m_maxwritten_lru++;
break;
}
}
if (++buf_id2 >= NBuffers)
buf_id2 = 0;
}
BgWriterStats.m_buf_written_lru += num_written;
}
}
/*
* SyncOneBuffer -- process a single buffer during syncing.
*
! * If skip_pinned is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns true if buffer was written, else false. (This could be in error
! * if FlushBuffers finds the buffer clean after locking it, but we don't
! * care all that much.)
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
! static bool
! SyncOneBuffer(int buf_id, bool skip_pinned)
{
volatile BufferDesc *bufHdr = &BufferDescriptors[buf_id];
/*
* Check whether buffer needs writing.
--- 1145,1207 ----
{
num_to_scan = (int) ((NBuffers * bgwriter_lru_percent + 99) / 100);
num_written = 0;
! reusable_buffers = 0;
while (num_to_scan-- > 0)
{
! buffer_state=SyncOneBuffer(buf_id2, true);
! if (buffer_state>=BUF_WRITTEN)
{
+ reusable_buffers++;
if (++num_written >= bgwriter_lru_maxpages)
{
BgWriterStats.m_maxwritten_lru++;
break;
}
}
+ else if (buffer_state & BUF_REUSABLE) reusable_buffers++;
+
if (++buf_id2 >= NBuffers)
buf_id2 = 0;
+
+ /* Exit when target for upcoming allocations reached */
+ if (reusable_buffers>=upcoming_alloc_estimate) break;
}
BgWriterStats.m_buf_written_lru += num_written;
+
+ if (1 && num_written>0) /**** DEBUG Show what happened this pass */
+ {
+ elog(LOG,"scanned=%d written=%d client write=%d alloc_est=%d reusable=%d",
+ (int) ((NBuffers * bgwriter_lru_percent + 99) / 100) - num_to_scan,
+ num_written,num_client_writes,upcoming_alloc_estimate,reusable_buffers);
+ }
+
}
}
/*
* SyncOneBuffer -- process a single buffer during syncing.
*
! * If skip_recently_used is true, we don't write currently-pinned buffers, nor
* buffers marked recently used, as these are not replacement candidates.
*
! * Returns an integer code describing both the state the buffer was
! * in when examined and what was done with it. The lower-order bits
! * are set to the usage_count of the buffer, and the following
! * bit masks are set accordingly: BUF_WRITTEN, BUF_CLEAN, BUF_REUSABLE
! *
! * (This could be in error if FlushBuffers finds the buffer clean after
! * locking it, but we don't care all that much.)
! *
! * The results are ordered such that the simple test for whether a buffer was
! * written is to check whether the return code is >=BUF_WRITTEN
*
* Note: caller must have done ResourceOwnerEnlargeBuffers.
*/
! static int
! SyncOneBuffer(int buf_id, bool skip_recently_used)
{
volatile BufferDesc *bufHdr = &BufferDescriptors[buf_id];
+ int buffer_state;
/*
* Check whether buffer needs writing.
***************
*** 1145,1160 ****
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
UnlockBufHdr(bufHdr);
! return false;
}
! if (skip_pinned &&
! (bufHdr->refcount != 0 || bufHdr->usage_count != 0))
{
UnlockBufHdr(bufHdr);
! return false;
}
/*
--- 1213,1237 ----
* upcoming changes and so we are not required to write such dirty buffer.
*/
LockBufHdr(bufHdr);
+
+ /* Starting state says this buffer is dirty, not reusable, and unwritten */
+ buffer_state = bufHdr->usage_count;
+
if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
+ buffer_state|=BUF_CLEAN;
+
+ if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
+ buffer_state|=BUF_REUSABLE;
+ else if (skip_recently_used)
{
UnlockBufHdr(bufHdr);
! return buffer_state;
}
!
! if (buffer_state & BUF_CLEAN)
{
UnlockBufHdr(bufHdr);
! return buffer_state;
}
/*
***************
*** 1169,1175 ****
LWLockRelease(bufHdr->content_lock);
UnpinBuffer(bufHdr, true, false /* don't change freelist */ );
! return true;
}
--- 1246,1252 ----
LWLockRelease(bufHdr->content_lock);
UnpinBuffer(bufHdr, true, false /* don't change freelist */ );
! return buffer_state | BUF_WRITTEN;
}
buf-alloc-stats.patch (application/octet-stream)
Index: doc/src/sgml/monitoring.sgml
===================================================================
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/monitoring.sgml,v
retrieving revision 1.50
diff -c -r1.50 monitoring.sgml
*** doc/src/sgml/monitoring.sgml 27 Apr 2007 20:08:43 -0000 1.50
--- doc/src/sgml/monitoring.sgml 7 May 2007 01:30:19 -0000
***************
*** 250,258 ****
<row>
<entry><structname>pg_stat_bgwriter</></entry>
<entry>One row only, showing cluster-wide statistics from the
! background writer: number of scheduled checkpoints, requested
! checkpoints, buffers written by checkpoints, lru-scans and all-scans,
! and the number of times the bgwriter aborted a round because it had
written too many buffers during lru-scans and all-scans.
</entry>
</row>
--- 250,259 ----
<row>
<entry><structname>pg_stat_bgwriter</></entry>
<entry>One row only, showing cluster-wide statistics from the
! background writer and shared buffer pool: number of scheduled
! checkpoints, requested checkpoints, buffers written by checkpoints,
! lru-scans, all-scans, and clients, total buffers allocated, and the
! number of times the bgwriter aborted a round because it had
written too many buffers during lru-scans and all-scans.
</entry>
</row>
***************
*** 815,820 ****
--- 816,839 ----
</row>
<row>
+ <entry><literal><function>pg_stat_get_bgwriter_buf_written_client</function>()</literal></entry>
+ <entry><type>bigint</type></entry>
+ <entry>
+ The number of buffers written by clients because they needed
+ to allocate a new buffer
+ </entry>
+ </row>
+
+ <row>
+ <entry><literal><function>pg_stat_get_bgwriter_buf_alloc</function>()</literal></entry>
+ <entry><type>bigint</type></entry>
+ <entry>
+ The total number of buffers allocated into the shared buffer
+ cache
+ </entry>
+ </row>
+
+ <row>
<entry><literal><function>pg_stat_clear_snapshot</function>()</literal></entry>
<entry><type>void</type></entry>
<entry>
Index: src/backend/catalog/system_views.sql
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/catalog/system_views.sql,v
retrieving revision 1.37
diff -c -r1.37 system_views.sql
*** src/backend/catalog/system_views.sql 30 Mar 2007 18:34:55 -0000 1.37
--- src/backend/catalog/system_views.sql 7 May 2007 01:30:20 -0000
***************
*** 373,376 ****
pg_stat_get_bgwriter_buf_written_lru() AS buffers_lru,
pg_stat_get_bgwriter_buf_written_all() AS buffers_all,
pg_stat_get_bgwriter_maxwritten_lru() AS maxwritten_lru,
! pg_stat_get_bgwriter_maxwritten_all() AS maxwritten_all;
--- 373,379 ----
pg_stat_get_bgwriter_buf_written_lru() AS buffers_lru,
pg_stat_get_bgwriter_buf_written_all() AS buffers_all,
pg_stat_get_bgwriter_maxwritten_lru() AS maxwritten_lru,
! pg_stat_get_bgwriter_maxwritten_all() AS maxwritten_all,
! pg_stat_get_bgwriter_buf_written_client() AS buffers_client,
! pg_stat_get_bgwriter_buf_alloc() AS buffers_alloc
! ;
Index: src/backend/postmaster/bgwriter.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/postmaster/bgwriter.c,v
retrieving revision 1.37
diff -c -r1.37 bgwriter.c
*** src/backend/postmaster/bgwriter.c 30 Mar 2007 18:34:55 -0000 1.37
--- src/backend/postmaster/bgwriter.c 7 May 2007 01:30:21 -0000
***************
*** 147,156 ****
static volatile sig_atomic_t shutdown_requested = false;
/*
! * Private state
*/
! static bool am_bg_writer = false;
static bool ckpt_active = false;
static time_t last_checkpoint_time;
--- 147,160 ----
static volatile sig_atomic_t shutdown_requested = false;
/*
! * Buffer and fsync activity and statistics need to work differently
! * when the current process is the background writer
*/
! bool am_bg_writer = false;
+ /*
+ * Private state
+ */
static bool ckpt_active = false;
static time_t last_checkpoint_time;
Index: src/backend/postmaster/pgstat.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/postmaster/pgstat.c,v
retrieving revision 1.155
diff -c -r1.155 pgstat.c
*** src/backend/postmaster/pgstat.c 30 Apr 2007 16:37:08 -0000 1.155
--- src/backend/postmaster/pgstat.c 7 May 2007 01:30:23 -0000
***************
*** 1736,1742 ****
BgWriterStats.m_buf_written_lru == 0 &&
BgWriterStats.m_buf_written_all == 0 &&
BgWriterStats.m_maxwritten_lru == 0 &&
! BgWriterStats.m_maxwritten_all == 0)
return;
/*
--- 1736,1744 ----
BgWriterStats.m_buf_written_lru == 0 &&
BgWriterStats.m_buf_written_all == 0 &&
BgWriterStats.m_maxwritten_lru == 0 &&
! BgWriterStats.m_maxwritten_all == 0 &&
! BgWriterStats.m_buf_written_client == 0 &&
! BgWriterStats.m_buf_alloc == 0)
return;
/*
***************
*** 2805,2808 ****
--- 2807,2812 ----
globalStats.buf_written_all += msg->m_buf_written_all;
globalStats.maxwritten_lru += msg->m_maxwritten_lru;
globalStats.maxwritten_all += msg->m_maxwritten_all;
+ globalStats.buf_written_client += msg->m_buf_written_client;
+ globalStats.buf_alloc += msg->m_buf_alloc;
}
Index: src/backend/storage/buffer/bufmgr.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/bufmgr.c,v
retrieving revision 1.218
diff -c -r1.218 bufmgr.c
*** src/backend/storage/buffer/bufmgr.c 2 May 2007 23:34:48 -0000 1.218
--- src/backend/storage/buffer/bufmgr.c 7 May 2007 01:30:24 -0000
***************
*** 987,997 ****
int buf_id;
int num_to_scan;
int absorb_counter;
/*
* Find out where to start the circular scan.
*/
! buf_id = StrategySyncStart();
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
--- 987,1001 ----
int buf_id;
int num_to_scan;
int absorb_counter;
+ int recent_alloc;
+ int num_client_writes;
/*
* Find out where to start the circular scan.
*/
!	buf_id = StrategySyncStart(&recent_alloc, &num_client_writes);
!	BgWriterStats.m_buf_alloc += recent_alloc;
!	BgWriterStats.m_buf_written_client += num_client_writes;
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
***************
*** 1036,1041 ****
--- 1040,1047 ----
int buf_id2;
int num_to_scan;
int num_written;
+ int recent_alloc;
+ int num_client_writes;
/* Make sure we can handle the pin inside SyncOneBuffer */
ResourceOwnerEnlargeBuffers(CurrentResourceOwner);
***************
*** 1080,1085 ****
--- 1086,1098 ----
}
/*
+ * Find out where to start the circular scan.
+ */
+	buf_id2 = StrategySyncStart(&recent_alloc, &num_client_writes);
+	BgWriterStats.m_buf_alloc += recent_alloc;
+	BgWriterStats.m_buf_written_client += num_client_writes;
+
+ /*
* This loop considers only unpinned buffers close to the clock sweep
* point.
*/
***************
*** 1088,1095 ****
num_to_scan = (int) ((NBuffers * bgwriter_lru_percent + 99) / 100);
num_written = 0;
- buf_id2 = StrategySyncStart();
-
while (num_to_scan-- > 0)
{
if (SyncOneBuffer(buf_id2, true))
--- 1101,1106 ----
***************
*** 1451,1456 ****
--- 1462,1468 ----
false);
BufferFlushCount++;
+ StrategyReportWrite();
/*
* Mark the buffer as clean (unless BM_JUST_DIRTIED has become set) and
Index: src/backend/storage/buffer/freelist.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/freelist.c,v
retrieving revision 1.58
diff -c -r1.58 freelist.c
*** src/backend/storage/buffer/freelist.c 5 Jan 2007 22:19:37 -0000 1.58
--- src/backend/storage/buffer/freelist.c 7 May 2007 01:30:24 -0000
***************
*** 29,34 ****
--- 29,36 ----
int firstFreeBuffer; /* Head of list of unused buffers */
int lastFreeBuffer; /* Tail of list of unused buffers */
+ int numGetBuffer; /* Calls to BufferAlloc since last reset */
+ int numClientWrites; /* Buffers written by clients since last reset */
/*
* NOTE: lastFreeBuffer is undefined when firstFreeBuffer is -1 (that is,
***************
*** 42,47 ****
--- 44,51 ----
/* Backend-local state about whether currently vacuuming */
bool strategy_hint_vacuum = false;
+ /* Used to determine which type of process we're running as */
+ extern bool am_bg_writer;
/*
* StrategyGetBuffer
***************
*** 62,67 ****
--- 66,72 ----
int trycounter;
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+ StrategyControl->numGetBuffer++;
/*
* Try to get a buffer from the freelist. Note that the freeNext fields
***************
*** 176,182 ****
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(void)
{
int result;
--- 181,187 ----
* BufferSync() will proceed circularly around the buffer array from there.
*/
int
! StrategySyncStart(int *num_buf_alloc, int *num_client_writes)
{
int result;
***************
*** 186,196 ****
--- 191,220 ----
*/
LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
result = StrategyControl->nextVictimBuffer;
+
+ /* Return and reset statistics for activity since last call */
+	if (num_buf_alloc != NULL) *num_buf_alloc = StrategyControl->numGetBuffer;
+	if (num_client_writes != NULL) *num_client_writes = StrategyControl->numClientWrites;
+ StrategyControl->numGetBuffer = 0;
+ StrategyControl->numClientWrites = 0;
+
LWLockRelease(BufFreelistLock);
return result;
}
/*
+ * StrategyReportWrite -- After a buffer is written out, update
+ * local statistics based on who did the writing
+ */
+ void
+ StrategyReportWrite(void)
+ {
+	/* The background writer already counts the buffers it writes itself,
+	 * so only count writes made by other backends */
+ if (!am_bg_writer) StrategyControl->numClientWrites++;
+ }
+
+ /*
* StrategyHintVacuum -- tell us whether VACUUM is active
*/
void
***************
*** 270,275 ****
--- 294,301 ----
/* Initialize the clock sweep pointer */
StrategyControl->nextVictimBuffer = 0;
+ StrategyControl->numGetBuffer = 0;
+ StrategyControl->numClientWrites = 0;
}
else
Assert(!init);
Index: src/backend/utils/adt/pgstatfuncs.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/adt/pgstatfuncs.c,v
retrieving revision 1.41
diff -c -r1.41 pgstatfuncs.c
*** src/backend/utils/adt/pgstatfuncs.c 30 Mar 2007 18:34:55 -0000 1.41
--- src/backend/utils/adt/pgstatfuncs.c 7 May 2007 01:30:25 -0000
***************
*** 68,73 ****
--- 68,75 ----
extern Datum pg_stat_get_bgwriter_buf_written_all(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_bgwriter_maxwritten_lru(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_bgwriter_maxwritten_all(PG_FUNCTION_ARGS);
+ extern Datum pg_stat_get_bgwriter_buf_written_client(PG_FUNCTION_ARGS);
+ extern Datum pg_stat_get_bgwriter_buf_alloc(PG_FUNCTION_ARGS);
extern Datum pg_stat_clear_snapshot(PG_FUNCTION_ARGS);
extern Datum pg_stat_reset(PG_FUNCTION_ARGS);
***************
*** 808,813 ****
--- 810,826 ----
PG_RETURN_INT64(pgstat_fetch_global()->maxwritten_all);
}
+ Datum
+ pg_stat_get_bgwriter_buf_written_client(PG_FUNCTION_ARGS)
+ {
+ PG_RETURN_INT64(pgstat_fetch_global()->buf_written_client);
+ }
+
+ Datum
+ pg_stat_get_bgwriter_buf_alloc(PG_FUNCTION_ARGS)
+ {
+ PG_RETURN_INT64(pgstat_fetch_global()->buf_alloc);
+ }
/* Discard the active statistics snapshot */
Datum
Index: src/include/pgstat.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/include/pgstat.h,v
retrieving revision 1.58
diff -c -r1.58 pgstat.h
*** src/include/pgstat.h 30 Apr 2007 16:37:08 -0000 1.58
--- src/include/pgstat.h 7 May 2007 01:30:26 -0000
***************
*** 228,233 ****
--- 228,235 ----
PgStat_Counter m_buf_written_all;
PgStat_Counter m_maxwritten_lru;
PgStat_Counter m_maxwritten_all;
+ PgStat_Counter m_buf_written_client;
+ PgStat_Counter m_buf_alloc;
} PgStat_MsgBgWriter;
***************
*** 329,334 ****
--- 331,338 ----
PgStat_Counter buf_written_all;
PgStat_Counter maxwritten_lru;
PgStat_Counter maxwritten_all;
+ PgStat_Counter buf_written_client;
+ PgStat_Counter buf_alloc;
} PgStat_GlobalStats;
Index: src/include/catalog/pg_proc.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/include/catalog/pg_proc.h,v
retrieving revision 1.454
diff -c -r1.454 pg_proc.h
*** src/include/catalog/pg_proc.h 2 Apr 2007 03:49:40 -0000 1.454
--- src/include/catalog/pg_proc.h 7 May 2007 01:30:33 -0000
***************
*** 3001,3006 ****
--- 3001,3010 ----
DESCR("Statistics: Number of times the bgwriter stopped processing when it had written too many buffers during LRU scans");
DATA(insert OID = 2775 ( pg_stat_get_bgwriter_maxwritten_all PGNSP PGUID 12 1 0 f f t f s 0 20 "" _null_ _null_ _null_ pg_stat_get_bgwriter_maxwritten_all - _null_ ));
DESCR("Statistics: Number of times the bgwriter stopped processing when it had written too many buffers during all-buffer scans");
+ DATA(insert OID = 2776 ( pg_stat_get_bgwriter_buf_written_client PGNSP PGUID 12 1 0 f f t f s 0 20 "" _null_ _null_ _null_ pg_stat_get_bgwriter_buf_written_client - _null_ ));
+ DESCR("Statistics: Number of buffers written by client backends");
+ DATA(insert OID = 2777 ( pg_stat_get_bgwriter_buf_alloc PGNSP PGUID 12 1 0 f f t f s 0 20 "" _null_ _null_ _null_ pg_stat_get_bgwriter_buf_alloc - _null_ ));
+ DESCR("Statistics: Number of buffers allocated for the shared buffer cache");
DATA(insert OID = 2230 ( pg_stat_clear_snapshot PGNSP PGUID 12 1 0 f f f f v 0 2278 "" _null_ _null_ _null_ pg_stat_clear_snapshot - _null_ ));
DESCR("Statistics: Discard current transaction's statistics snapshot");
DATA(insert OID = 2274 ( pg_stat_reset PGNSP PGUID 12 1 0 f f f f v 0 2278 "" _null_ _null_ _null_ pg_stat_reset - _null_ ));
Index: src/include/storage/buf_internals.h
===================================================================
RCS file: /projects/cvsroot/pgsql/src/include/storage/buf_internals.h,v
retrieving revision 1.89
diff -c -r1.89 buf_internals.h
*** src/include/storage/buf_internals.h 5 Jan 2007 22:19:57 -0000 1.89
--- src/include/storage/buf_internals.h 7 May 2007 01:30:33 -0000
***************
*** 186,192 ****
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(void);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
--- 186,193 ----
/* freelist.c */
extern volatile BufferDesc *StrategyGetBuffer(void);
extern void StrategyFreeBuffer(volatile BufferDesc *buf, bool at_head);
! extern int StrategySyncStart(int *num_buf_alloc, int *num_client_writes);
! extern void StrategyReportWrite(void);
extern Size StrategyShmemSize(void);
extern void StrategyInitialize(bool init);
Index: src/test/regress/expected/rules.out
===================================================================
RCS file: /projects/cvsroot/pgsql/src/test/regress/expected/rules.out,v
retrieving revision 1.127
diff -c -r1.127 rules.out
*** src/test/regress/expected/rules.out 30 Mar 2007 18:34:56 -0000 1.127
--- src/test/regress/expected/rules.out 7 May 2007 01:30:36 -0000
***************
*** 1292,1298 ****
pg_stat_activity | SELECT d.oid AS datid, d.datname, pg_stat_get_backend_pid(s.backendid) AS procpid, pg_stat_get_backend_userid(s.backendid) AS usesysid, u.rolname AS usename, pg_stat_get_backend_activity(s.backendid) AS current_query, pg_stat_get_backend_waiting(s.backendid) AS waiting, pg_stat_get_backend_txn_start(s.backendid) AS txn_start, pg_stat_get_backend_activity_start(s.backendid) AS query_start, pg_stat_get_backend_start(s.backendid) AS backend_start, pg_stat_get_backend_client_addr(s.backendid) AS client_addr, pg_stat_get_backend_client_port(s.backendid) AS client_port FROM pg_database d, (SELECT pg_stat_get_backend_idset() AS backendid) s, pg_authid u WHERE ((pg_stat_get_backend_dbid(s.backendid) = d.oid) AND (pg_stat_get_backend_userid(s.backendid) = u.oid));
pg_stat_all_indexes | SELECT c.oid AS relid, i.oid AS indexrelid, n.nspname AS schemaname, c.relname, i.relname AS indexrelname, pg_stat_get_numscans(i.oid) AS idx_scan, pg_stat_get_tuples_returned(i.oid) AS idx_tup_read, pg_stat_get_tuples_fetched(i.oid) AS idx_tup_fetch FROM (((pg_class c JOIN pg_index x ON ((c.oid = x.indrelid))) JOIN pg_class i ON ((i.oid = x.indexrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = ANY (ARRAY['r'::"char", 't'::"char"]));
pg_stat_all_tables | SELECT c.oid AS relid, n.nspname AS schemaname, c.relname, pg_stat_get_numscans(c.oid) AS seq_scan, pg_stat_get_tuples_returned(c.oid) AS seq_tup_read, (sum(pg_stat_get_numscans(i.indexrelid)))::bigint AS idx_scan, ((sum(pg_stat_get_tuples_fetched(i.indexrelid)))::bigint + pg_stat_get_tuples_fetched(c.oid)) AS idx_tup_fetch, pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins, pg_stat_get_tuples_updated(c.oid) AS n_tup_upd, pg_stat_get_tuples_deleted(c.oid) AS n_tup_del, pg_stat_get_live_tuples(c.oid) AS n_live_tup, pg_stat_get_dead_tuples(c.oid) AS n_dead_tup, pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum, pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum, pg_stat_get_last_analyze_time(c.oid) AS last_analyze, pg_stat_get_last_autoanalyze_time(c.oid) AS last_autoanalyze FROM ((pg_class c LEFT JOIN pg_index i ON ((c.oid = i.indrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = ANY (ARRAY['r'::"char", 't'::"char"])) GROUP BY c.oid, n.nspname, c.relname;
! pg_stat_bgwriter | SELECT pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed, pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req, pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint, pg_stat_get_bgwriter_buf_written_lru() AS buffers_lru, pg_stat_get_bgwriter_buf_written_all() AS buffers_all, pg_stat_get_bgwriter_maxwritten_lru() AS maxwritten_lru, pg_stat_get_bgwriter_maxwritten_all() AS maxwritten_all;
pg_stat_database | SELECT d.oid AS datid, d.datname, pg_stat_get_db_numbackends(d.oid) AS numbackends, pg_stat_get_db_xact_commit(d.oid) AS xact_commit, pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback, (pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid)) AS blks_read, pg_stat_get_db_blocks_hit(d.oid) AS blks_hit, pg_stat_get_db_tuples_returned(d.oid) AS tup_returned, pg_stat_get_db_tuples_fetched(d.oid) AS tup_fetched, pg_stat_get_db_tuples_inserted(d.oid) AS tup_inserted, pg_stat_get_db_tuples_updated(d.oid) AS tup_updated, pg_stat_get_db_tuples_deleted(d.oid) AS tup_deleted FROM pg_database d;
pg_stat_sys_indexes | SELECT pg_stat_all_indexes.relid, pg_stat_all_indexes.indexrelid, pg_stat_all_indexes.schemaname, pg_stat_all_indexes.relname, pg_stat_all_indexes.indexrelname, pg_stat_all_indexes.idx_scan, pg_stat_all_indexes.idx_tup_read, pg_stat_all_indexes.idx_tup_fetch FROM pg_stat_all_indexes WHERE (pg_stat_all_indexes.schemaname = ANY (ARRAY['pg_catalog'::"name", 'pg_toast'::"name", 'information_schema'::"name"]));
pg_stat_sys_tables | SELECT pg_stat_all_tables.relid, pg_stat_all_tables.schemaname, pg_stat_all_tables.relname, pg_stat_all_tables.seq_scan, pg_stat_all_tables.seq_tup_read, pg_stat_all_tables.idx_scan, pg_stat_all_tables.idx_tup_fetch, pg_stat_all_tables.n_tup_ins, pg_stat_all_tables.n_tup_upd, pg_stat_all_tables.n_tup_del, pg_stat_all_tables.n_live_tup, pg_stat_all_tables.n_dead_tup, pg_stat_all_tables.last_vacuum, pg_stat_all_tables.last_autovacuum, pg_stat_all_tables.last_analyze, pg_stat_all_tables.last_autoanalyze FROM pg_stat_all_tables WHERE (pg_stat_all_tables.schemaname = ANY (ARRAY['pg_catalog'::"name", 'pg_toast'::"name", 'information_schema'::"name"]));
--- 1292,1298 ----
pg_stat_activity | SELECT d.oid AS datid, d.datname, pg_stat_get_backend_pid(s.backendid) AS procpid, pg_stat_get_backend_userid(s.backendid) AS usesysid, u.rolname AS usename, pg_stat_get_backend_activity(s.backendid) AS current_query, pg_stat_get_backend_waiting(s.backendid) AS waiting, pg_stat_get_backend_txn_start(s.backendid) AS txn_start, pg_stat_get_backend_activity_start(s.backendid) AS query_start, pg_stat_get_backend_start(s.backendid) AS backend_start, pg_stat_get_backend_client_addr(s.backendid) AS client_addr, pg_stat_get_backend_client_port(s.backendid) AS client_port FROM pg_database d, (SELECT pg_stat_get_backend_idset() AS backendid) s, pg_authid u WHERE ((pg_stat_get_backend_dbid(s.backendid) = d.oid) AND (pg_stat_get_backend_userid(s.backendid) = u.oid));
pg_stat_all_indexes | SELECT c.oid AS relid, i.oid AS indexrelid, n.nspname AS schemaname, c.relname, i.relname AS indexrelname, pg_stat_get_numscans(i.oid) AS idx_scan, pg_stat_get_tuples_returned(i.oid) AS idx_tup_read, pg_stat_get_tuples_fetched(i.oid) AS idx_tup_fetch FROM (((pg_class c JOIN pg_index x ON ((c.oid = x.indrelid))) JOIN pg_class i ON ((i.oid = x.indexrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = ANY (ARRAY['r'::"char", 't'::"char"]));
pg_stat_all_tables | SELECT c.oid AS relid, n.nspname AS schemaname, c.relname, pg_stat_get_numscans(c.oid) AS seq_scan, pg_stat_get_tuples_returned(c.oid) AS seq_tup_read, (sum(pg_stat_get_numscans(i.indexrelid)))::bigint AS idx_scan, ((sum(pg_stat_get_tuples_fetched(i.indexrelid)))::bigint + pg_stat_get_tuples_fetched(c.oid)) AS idx_tup_fetch, pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins, pg_stat_get_tuples_updated(c.oid) AS n_tup_upd, pg_stat_get_tuples_deleted(c.oid) AS n_tup_del, pg_stat_get_live_tuples(c.oid) AS n_live_tup, pg_stat_get_dead_tuples(c.oid) AS n_dead_tup, pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum, pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum, pg_stat_get_last_analyze_time(c.oid) AS last_analyze, pg_stat_get_last_autoanalyze_time(c.oid) AS last_autoanalyze FROM ((pg_class c LEFT JOIN pg_index i ON ((c.oid = i.indrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = ANY (ARRAY['r'::"char", 't'::"char"])) GROUP BY c.oid, n.nspname, c.relname;
! pg_stat_bgwriter | SELECT pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed, pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req, pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint, pg_stat_get_bgwriter_buf_written_lru() AS buffers_lru, pg_stat_get_bgwriter_buf_written_all() AS buffers_all, pg_stat_get_bgwriter_maxwritten_lru() AS maxwritten_lru, pg_stat_get_bgwriter_maxwritten_all() AS maxwritten_all, pg_stat_get_bgwriter_buf_written_client() AS buffers_client, pg_stat_get_bgwriter_buf_alloc() AS buffers_alloc;
pg_stat_database | SELECT d.oid AS datid, d.datname, pg_stat_get_db_numbackends(d.oid) AS numbackends, pg_stat_get_db_xact_commit(d.oid) AS xact_commit, pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback, (pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid)) AS blks_read, pg_stat_get_db_blocks_hit(d.oid) AS blks_hit, pg_stat_get_db_tuples_returned(d.oid) AS tup_returned, pg_stat_get_db_tuples_fetched(d.oid) AS tup_fetched, pg_stat_get_db_tuples_inserted(d.oid) AS tup_inserted, pg_stat_get_db_tuples_updated(d.oid) AS tup_updated, pg_stat_get_db_tuples_deleted(d.oid) AS tup_deleted FROM pg_database d;
pg_stat_sys_indexes | SELECT pg_stat_all_indexes.relid, pg_stat_all_indexes.indexrelid, pg_stat_all_indexes.schemaname, pg_stat_all_indexes.relname, pg_stat_all_indexes.indexrelname, pg_stat_all_indexes.idx_scan, pg_stat_all_indexes.idx_tup_read, pg_stat_all_indexes.idx_tup_fetch FROM pg_stat_all_indexes WHERE (pg_stat_all_indexes.schemaname = ANY (ARRAY['pg_catalog'::"name", 'pg_toast'::"name", 'information_schema'::"name"]));
pg_stat_sys_tables | SELECT pg_stat_all_tables.relid, pg_stat_all_tables.schemaname, pg_stat_all_tables.relname, pg_stat_all_tables.seq_scan, pg_stat_all_tables.seq_tup_read, pg_stat_all_tables.idx_scan, pg_stat_all_tables.idx_tup_fetch, pg_stat_all_tables.n_tup_ins, pg_stat_all_tables.n_tup_upd, pg_stat_all_tables.n_tup_del, pg_stat_all_tables.n_live_tup, pg_stat_all_tables.n_dead_tup, pg_stat_all_tables.last_vacuum, pg_stat_all_tables.last_autovacuum, pg_stat_all_tables.last_analyze, pg_stat_all_tables.last_autoanalyze FROM pg_stat_all_tables WHERE (pg_stat_all_tables.schemaname = ANY (ARRAY['pg_catalog'::"name", 'pg_toast'::"name", 'information_schema'::"name"]));
Greg Smith wrote:
The original code came from before there was a pg_stat_bgwriter. The
first patch (buf-alloc-stats) takes the two most interesting pieces of
data the original patch collected, the number of buffers allocated
recently and the number that the clients wrote out, and ties all that
into the new stats structure. With this patch applied, you can get a
feel for things like churn/turnover in the buffer pool that were very
hard to quantify before. Also, it makes it easy to measure how well
your background writer is doing at writing buffers so the clients don't
have to. Applying this would complete one of my personal goals for the
8.3 release, which was having stats to track every type of buffer write.

I split this out because I think it's very useful to have regardless of
whether the automatic tuning portion is accepted, and I think these
smaller patches make the review easier. The main thing I would
recommend someone check is how am_bg_writer is (mis?)used here. I
spliced some of the debugging-only code from the original patch, and I
can't tell if the result is a robust enough approach to solving the
problem of having every client indirectly report their activity to the
background writer. Other than that, I think this code is ready for
review and potentially committing.
This looks good to me in principle. StrategyReportWrite increments
numClientWrites without holding the BufFreelistLock; that's a race
condition. The terminology needs some adjustment; clients don't write
buffers, backends do.
Splitting the patch to two is a good idea.
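One way to close that race, sketched here as an assumption rather than
anything in the posted patch, is to take the same lock that already
guards the rest of StrategyControl:

	void
	StrategyReportWrite(void)
	{
		/* the background writer already counts its own writes */
		if (am_bg_writer)
			return;

		LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
		StrategyControl->numClientWrites++;
		LWLockRelease(BufFreelistLock);
	}

An exclusive LWLock per backend write is not free, so in practice a
cheaper scheme (per-backend counters, say) might be preferable.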
The second patch (limit-lru) adds on top of that a constraint of the LRU
writer so that it doesn't do any more work than it has to. Note that I
left verbose debugging code in here because I'm much less confident this
patch is complete.

It predicts upcoming buffer allocations using a 16-period weighted
moving average of recent activity, which you can think of as the last
3.2 seconds at the default interval. After testing a few systems, that
seemed a decent compromise of smoothing in both directions. I found the
2X overallocation fudge factor of the original patch way too aggressive,
so I just pick the larger of the most recent allocation amount or the
smoothed value. The main thing that throws off the allocation
estimation is when you hit a checkpoint, which can give a big spike
after the background writer returns to BgBufferSync and notices all the
buffers that were allocated during the checkpoint write; the code then
tries to find more buffers it can recycle than it needs to. Since the
checkpoint itself normally leaves a large wake of reusable buffers
behind it, I didn't find this to be a serious problem.
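A minimal sketch of the smoothing described above, with illustrative
names rather than the patch's own:

	/* 16-period weighted moving average of recent buffer allocations */
	static int	smoothed_alloc = 0;

	static int
	predicted_allocs(int recent_alloc)
	{
		/* the newest sample gets a 1/16 weight */
		smoothed_alloc += (recent_alloc - smoothed_alloc) / 16;

		/* trust whichever estimate is larger */
		return Max(recent_alloc, smoothed_alloc);
	}

At the default 200ms bgwriter_delay, 16 periods is roughly the last 3.2
seconds of activity, matching the figure quoted above.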
Can you tell more about the tests you performed? That algorithm seems
decent, but I wonder why the simple fudge factor wasn't good enough? I
would've thought that a 2x or even bigger fudge factor would still be
only a tiny fraction of shared_buffers, and wouldn't really affect
performance.
The load distributed checkpoint patch should mitigate the checkpoint
spike problem by continuing the LRU scan throughout the checkpoint.
There's another communication issue here, which is that SyncOneBuffer
needs to return more information about the buffer than it currently does
once it gets it locked. The background writer needs to know more than
just if it was written to tune itself. The original patch used a clever
trick for this which worked but I found confusing. I happen to have a
bunch of other background writer tuning code I'm working on, and I had
to come up with a more robust way to communicate buffer internals back
via this channel. I used that code here, it's a bitmask setup similar
to how flags like BM_DIRTY are used. It's overkill for solving this
particular problem, but I think the interface is clean and it helps
support future enhancements in intelligent background writing.
Uh, that looks pretty ugly to me. The normal way to return multiple
values is to pass a pointer as an argument, though that can get ugly as
well if there's a lot of return values. What combinations of the flags
are valid? Would an enum be better? Or how about moving the checks for
dirty and pinned buffers from SyncOneBuffer to the callers?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Sun, 13 May 2007, Heikki Linnakangas wrote:
StrategyReportWrite increments numClientWrites without holding the
BufFreelistLock; that's a race condition. The terminology needs some
adjustment; clients don't write buffers, backends do.
That was another piece of debugging code I moved into the main path
without thinking too hard about it, good catch. I have a
documentation/naming patch I've started on that revises a lot of the
pg_stat_bgwriter names to be more consistent and easier to understand (as
well as re-ordering the view); the underlying code is still fluid enough
that I was trying to nail that down first.
That algorithm seems decent, but I wonder why the simple fudge factor
wasn't good enough? I would've thought that a 2x or even bigger fudge
factor would still be only a tiny fraction of shared_buffers, and
wouldn't really affect performance.
I like the way the smoothing evens out the I/O rates. I saw occasional
spots where the buffer allocations drop to 0 for a few intervals while
other work everybody is waiting for goes on, and I didn't want all
LRU cleanup to come to a halt just because there's a fraction of a second
where nothing happened in the middle of a very busy period.
As for why not overestimate, if you get into a situation where the buffer
cache is very dirty with much of the data being recently used (I normally
see this with bulk UPDATEs on indexed tables), you can end up scanning
many buffers for each one you find that can be written out. In this kind
of situation, deciding that you actually need to write out twice as many
just because you don't trust your estimate is very inefficient.
I was able to simulate most of the bad behavior I look for with the
pgbench schema using "update accounts set abalance=abalance+1;". To throw
some sample numbers out, on my test server I was just doing final work on
last night, I was seeing peaks of about 600-1200 buffers allocated per
200ms interval doing that simple UPDATE with shared_buffers=32768.
Let's call it 2% of the pool. If 50% of the pool is either dirty or can't
be reused yet, that means I'll average having to scan 2%/50%=4% of the
pool to find enough buffers to reuse per interval. I wouldn't describe
that as a tiny fraction, and doubling it is not an insignificant load
increase. I'd like to be able to increase the LRU percentage scanned
without being concerned that I'm wasting resources because of this
situation.
The fact that this problem exists is what got me digging into the
background writer code in the first place, because it's way worse on my
production server than this example suggests. The buffer cache is bigger,
but the ability of the server to dirty it under heavy load is far better.
Returning to the theme discussed in the -hackers thread I referenced:
you can't try to make the background writer LRU do all the writes without
exposing yourself to issues like this, because it doesn't touch the usage
counts. Therefore it's vulnerable to breakdowns if your buffer pool
shifts toward dirty and non-reusable.
Having the background writer run amok when reusable buffers are rare can
really pull down the performance of the other backends (as well as delay
checkpoints), both in terms of CPU usage and locking issues. I don't feel
it's a good idea to try and push it too hard unless some of these
underlying issues are fixed first; I'd rather err on the side of letting
it do less rather than more than it has to.
The normal way to return multiple values is to pass a pointer as an
argument, though that can get ugly as well if there's a lot of return
values.
I'm open to better suggestions, but after tinkering with this interface
for over a month now--including pointers and enums--this is the first
implementation I was happy with.
There are four things I eventually need returned here, to support the
fully automatic BGW tuning. My 1st implementation passed in pointers, and
in addition to being ugly I found consistently checking for null pointers
and data consistency a drag, both from the coding and the overhead
perspective.
What combinations of the flags are valid? Would an enum be better?
And my 2nd generation code used an enum. There are five possible return
code states:
CLEAN + REUSABLE + !WRITTEN
CLEAN + !REUSABLE + !WRITTEN
!CLEAN + !REUSABLE + WRITTEN (all-scan only)
!CLEAN + !REUSABLE + !WRITTEN (rejected by skip)
!CLEAN + REUSABLE + WRITTEN
!CLEAN + REUSABLE + !WRITTEN isn't possible (all paths will write dirty
reusable buffers)
I found the enum-based code more confusing, both reading it and making
sure it was correct when writing it, than the current form. Right now I
have lines like:
if (buffer_state & BUF_REUSABLE)
With an enum this has to be something like
if (buffer_state == BUF_CLEAN_REUSABLE || buffer_state ==
BUF_REUSABLE_WRITTEN)
And that was a pain all around; I kept having to stare at the table above
to make sure the code was correct. Also, in order to pass back full
usage_count information I was back to either pointers or bitshifting
anyway. While this particular patch doesn't need the usage count, the
later ones I'm working on do, and I'd like to get this interface complete
while it's being tinkered with anyway.
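To make the tradeoff concrete, a sketch of how the bitmask reads at a
call site; BUF_REUSABLE is named upthread, while the other flag name,
the values, and the modified SyncOneBuffer return type are illustrative
assumptions:

	/* hypothetical result flags for SyncOneBuffer */
	#define BUF_WRITTEN		0x01	/* buffer was written out */
	#define BUF_REUSABLE	0x02	/* unpinned with usage_count == 0 */

	int		buffer_state = SyncOneBuffer(buf_id, skip_recently_used);

	if (buffer_state & BUF_WRITTEN)
		num_written++;
	if (buffer_state & BUF_REUSABLE)
		num_reusable++;

Each property tests independently, so none of the five valid
combinations needs its own enum value or compound comparison.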
Or how about moving the checks for dirty and pinned buffers from
SyncOneBuffer to the callers?
There are 3 callers to SyncOneBuffer, and almost all the code is shared
between them. Trying to push even just the dirty/pinned stuff back into
the callers would end up being a cut and paste job that would duplicate
many lines. That's on top of the fact that the buffer is cleanly
locked/unlocked all in one section of code right now, and I didn't see how
to move any parts of that to the callers without disrupting that clean
interface.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
Greg Smith <gsmith@gregsmith.com> wrote:
The first patch (buf-alloc-stats) takes the two most interesting pieces of
data the original patch collected, the number of buffers allocated
recently and the number that the clients wrote out, and ties all that into
the new stats structure.
The second patch (limit-lru) adds on top of that a constraint of the LRU
writer so that it doesn't do any more work than it has to.
Both patches look good.
Now we get to the controversial part. The original patch removed the
bgwriter_lru_maxpages parameter and updated the documentation accordingly.
I didn't do that here. The reason is that after playing around in this
area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
to be able to handle that way. I describe this patch as enforcing a
constraint instead; it allows you to set the LRU parameters much higher
than was reasonable before without having to be as concerned about the LRU
writer wasting resources.
I'm agreeable to the limits on resource usage by the bgwriter.
BTW, your patch will cut LRU writes short, but will not encourage it to
do more work. So should we set more aggressive values for bgwriter_lru_percent
and bgwriter_lru_maxpages as defaults? My original motivation was to enlarge
bgwriter_lru_maxpages automatically; the default bgwriter_lru_maxpages (=5)
seemed to be too small.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
On Mon, 14 May 2007, ITAGAKI Takahiro wrote:
BTW, your patch will cut LRU writes short, but will not encourage it to
do more work. So should we set more aggressive values for bgwriter_lru_percent
and bgwriter_lru_maxpages as defaults?
Setting a bigger default maximum is one possibility I was thinking about.
Since the whole background writer setup is kind of complicated, the other
thing I was working on is writing a guide on how to use the new
pg_stat_bgwriter information to figure out if you need to increase
bgwriter_[all|lru]_pages (and the other parameters too). It makes it much
easier to write that if you can say "You can safely set
bgwriter_lru_maxpages high because it only writes what it needs to based
on your usage".
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
Greg Smith wrote:
On Mon, 14 May 2007, ITAGAKI Takahiro wrote:
BTW, your patch will cut LRU writes short, but will not encourage it to
do more work. So should we set more aggressive values for
bgwriter_lru_percent and bgwriter_lru_maxpages as defaults?

Setting a bigger default maximum is one possibility I was thinking
about. Since the whole background writer setup is kind of complicated,
the other thing I was working on is writing a guide on how to use the
new pg_stat_bgwriter information to figure out if you need to increase
bgwriter_[all|lru]_pages (and the other parameters too). It makes it
much easier to write that if you can say "You can safely set
bgwriter_lru_maxpages high because it only writes what it needs to based
on your usage".
If it's safe to set it high, let's default it to infinity.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Greg Smith <gsmith@gregsmith.com> writes:
Since the whole background writer setup is kind of complicated, the other
thing I was working on is writing a guide on how to use the new
pg_stat_bgwriter information to figure out if you need to increase
bgwriter_[all|lru]_pages (and the other parameters too). It makes it much
easier to write that if you can say "You can safely set
bgwriter_lru_maxpages high because it only writes what it needs to based
on your usage".
If you can write something like that, why do we need the parameter at all?
regards, tom lane
On Mon, 14 May 2007, Heikki Linnakangas wrote:
If it's safe to set it high, let's default it to infinity.
The maximum right now is 1000, and that would be a reasonable new default.
You really don't want to write more than 1000 per interval anyway without
taking a break for checkpoints; the more writes you do at once, the higher
the chances are you'll have the whole thing stall because the OS makes you
wait for a write (this is not a theoretical comment; I've watched it
happen when I try to get the BGW doing too much).
If someone has so much activity that they're allocating more than that
during a period, they should shrink the delay instead. The kinds of
systems where 1000 isn't high enough for bgwriter_lru_maxpages are going
to be compelled to adjust these parameters anyway for good performance.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Mon, 14 May 2007, Tom Lane wrote:
If you can write something like that, why do we need the parameter at all?
Couple of reasons:
-As I already mentioned in my last message, I think it's unwise to let the
LRU writes go completely unbounded. I still think there should be a
maximum, and if there is one it should be tunable. You can get into
situations where the only way to get the LRU writer to work at all is to
set the % to scan fairly high, but that exposes you to way more writes
than you might want per interval in situations where buffers to write are
easy to find.
-There is considerable coupling between how the LRU and the all background
writers work. There are workloads where the LRU writer is relatively
ineffective, and only the all one really works well. If there is a
limiter on the writes from the all writer, but not on the LRU, admins may
not be able to get the balance between the two they want. I know I
wouldn't.
-Just because I can advise what is generally the right move, that doesn't
mean it's always the right one. Someone may notice that the maximum pages
written limit is being nailed and not care.
The last system I really got deep into the background writer mechanics on,
it could be very effective at improving performance and reducing
checkpoint spikes under low to medium loads. But under heavy load, it
just got in the way of the individual backends running, which was
absolutely necessary in order to execute the LRU mechanics (usage_count--)
so less important buffers could be kicked out. I would like people to
still be able to set a tuning such that the background writers were useful
under average loads, but didn't ever try to do too much. It's much more
difficult to do that if bgwriter_lru_maxpages goes away.
I realized recently the task I should take on here is to run some more
experiments with the latest code and pass along suggested techniques for
producing/identifying the kind of problem conditions I've run into in the
past; then we can see if other people can reproduce them. I got a new
8-core server I need to thrash anyway and will try and do just that
starting tomorrow.
For all I know my concerns are strictly a rare edge case. But since the
final adjustments to things like whether there is an upper limit or not
are very small patches compared to what's already been done here, I sent
in what I thought was ready to go because I didn't want to hold up
reviewing the bulk of the code over some of these fine details.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Mon, May 14, 2007 at 11:19:23PM -0400, Greg Smith wrote:
[ Greg Smith's message of Mon, 14 May 2007 quoted in full; trimmed here. ]
Apologies for asking this on the wrong list, but it is at least the right
thread.
What is the current thinking on bg_writer settings for systems such as
4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?
-dg
--
David Gould daveg@sonic.net
If simplicity worked, the world would be overrun with insects.
Greg Smith wrote:
I realized recently the task I should take on here is to run some more
experiments with the latest code and pass along suggested techniques for
producing/identifying the kind of problem conditions I've run into in
the past; then we can see if other people can reproduce them. I got a
new 8-core server I need to thrash anyway and will try and do just that
starting tomorrow.
Yes, please do that. I can't imagine a situation where a tunable maximum
would help, but you've clearly spent a lot more time experimenting with
it than me.
I have noticed that on a heavily (over)loaded system with fully
saturated I/O, bgwriter doesn't make any difference because all the
backends need to wait for writes anyway. But it doesn't hurt either.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Moving to -performance.
On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:
Apologies for asking this on the wrong list, but it is at least the right
thread.

What is the current thinking on bg_writer settings for systems such as
4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?
It depends greatly on how much of your data tends to stay 'pinned' in
shared_buffers between checkpoints. In a case where the same data tends
to stay resident you're going to need to depend on the 'all' scan to
decrease the impact of checkpoints (though the load distributed
checkpoint patch will change that greatly).
Other than that tuning bgwriter boils down to your IO capability as well
as how often you're checkpointing.
--
Jim Nasby decibel@decibel.org
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, 15 May 2007, Jim C. Nasby wrote:
Moving to -performance.
No, really, moved to performance now.
On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:
What is the current thinking on bg_writer settings for systems such as
4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?
First off, the primary purpose of both background writers is to keep the
individual client backends from stalling to wait for disk I/O. If you're
running a batch workload, and there isn't a specific person waiting for a
response, the background writer isn't as critical to worry about.
As Jim already said, tuning the background writer well really requires a
look at the usage profile of your buffer pool and some thinking about your
I/O capacity just as much as it does your CPU/memory situation.
For the first part, I submitted a patch that updates the
contrib/pg_buffercache module to show the usage count information of your
buffer cache. The LRU writer only writes things with a usage_count of 0,
so taking some snapshots of that data regularly will give you an idea
whether you can useful use it or whether you'd be better off making the
all scan more aggressive. It's a simple patch that only effects a contrib
module you can add and remove easily, I would characterize it as pretty
safe to apply even to a production system as long as you're doing the
initial tests off-hours. The patch is at
http://archives.postgresql.org/pgsql-patches/2007-03/msg00555.php
And the usual summary query I run after installing it in a database is:
select usagecount,count(*),isdirty from pg_buffercache group by
isdirty,usagecount order by isdirty,usagecount;
As for the I/O side of things, I'd suggest you compute a worst-case
scenario for how many disk writes will happen if every buffer the
background writer comes across is dirty and base your settings on what
you're comfortable with there. Say you kept the default interval of 200ms
but increased the maximum pages value to 1000; each writer could
theoretically push 1000 x 8KB x 5/second = 40MB/s worth of data to disk.
Since these are database writes that have to be interleaved with reads,
the sustainable rate here is not as high as you might think. You might
get a useful performance boost just pushing the max numbers from the
defaults to up into the couple of hundred range--with the amount of RAM
you probably have decided to the buffer cache even the default small
percentages will cover a lot of ground and might need to be increased. I
like 250 as a round number because it makes for at most an even 10MB a
second flow out per writer. I wouldn't go too high on the max writes per
pass unless you're in a position to run some good tests to confirm you're
not actually making things worse.
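To generalize the arithmetic, a throwaway helper using the same loose
1MB = 1000KB math as the examples above:

	/* worst-case LRU write rate, assuming every scanned buffer is dirty */
	static double
	worst_case_mb_per_sec(int lru_maxpages, int delay_ms)
	{
		double	passes_per_sec = 1000.0 / delay_ms;

		/* 8KB pages: 1000 x 8KB x 5/second = 40MB/s at a 200ms delay */
		return lru_maxpages * passes_per_sec * 8.0 / 1000.0;
	}

worst_case_mb_per_sec(1000, 200) gives 40.0, and worst_case_mb_per_sec(250,
200) gives the even 10MB/s flow mentioned above.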
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.
---------------------------------------------------------------------------
Greg Smith wrote:
Attached are two patches that try to recast the ideas of Itagaki
Takahiro's auto bgwriter_lru_maxpages patch in the direction I think this
code needs to move. Epic-length commentary follows.

The original code came from before there was a pg_stat_bgwriter. The
first patch (buf-alloc-stats) takes the two most interesting pieces of
data the original patch collected, the number of buffers allocated
recently and the number that the clients wrote out, and ties all that into
the new stats structure. With this patch applied, you can get a feel for
things like churn/turnover in the buffer pool that were very hard to
quantify before. Also, it makes it easy to measure how well your
background writer is doing at writing buffers so the clients don't have
to. Applying this would complete one of my personal goals for the 8.3
release, which was having stats to track every type of buffer write.

I split this out because I think it's very useful to have regardless of
whether the automatic tuning portion is accepted, and I think these
smaller patches make the review easier. The main thing I would recommend
someone check is how am_bg_writer is (mis?)used here. I spliced some of
the debugging-only code from the original patch, and I can't tell if the
result is a robust enough approach to solving the problem of having every
client indirectly report their activity to the background writer. Other
than that, I think this code is ready for review and potentially
committing.

The second patch (limit-lru) adds on top of that a constraint of the LRU
writer so that it doesn't do any more work than it has to. Note that I
left verbose debugging code in here because I'm much less confident this
patch is complete.

It predicts upcoming buffer allocations using a 16-period weighted moving
average of recent activity, which you can think of as the last 3.2 seconds
at the default interval. After testing a few systems that seemed a decent
compromise of smoothing in both directions. I found the 2X overallocation
fudge factor of the original patch way too aggressive, and just pick the
larger of the most recent allocation amount or the smoothed value. The
main thing that throws off the allocation estimation is when you hit a
checkpoint, which can give a big spike after the background writer returns
to BgBufferSync and notices all the buffers that were allocated during the
checkpoint write; the code then tries to find more buffers it can recycle
than it needs to. Since the checkpoint itself normally leaves a large
wake of reusable buffers behind it, I didn't find this to be a serious
problem.

There's another communication issue here, which is that SyncOneBuffer
needs to return more information about the buffer than it currently does
once it gets it locked. The background writer needs to know more than
just if it was written to tune itself. The original patch used a clever
trick for this which worked but I found confusing. I happen to have a
bunch of other background writer tuning code I'm working on, and I had to
come up with a more robust way to communicate buffer internals back via
this channel. I used that code here, it's a bitmask setup similar to how
flags like BM_DIRTY are used. It's overkill for solving this particular
problem, but I think the interface is clean and it helps support future
enhancements in intelligent background writing.

Now we get to the controversial part. The original patch removed the
bgwriter_lru_maxpages parameter and updated the documentation accordingly.
I didn't do that here. The reason is that after playing around in this
area I'm not convinced yet I can satisfy all the tuning scenarios I'd like
to be able to handle that way. I describe this patch as enforcing a
constraint instead; it allows you to set the LRU parameters much higher
than was reasonable before without having to be as concerned about the LRU
writer wasting resources.

I already brought up some issues in this area on -hackers (
http://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my
work hasn't advanced as fast as I'd hoped. I wanted to submit what I've
finished anyway because I think any approach here is going to have cope
with the issues addressed in these two patches, and I'm happy now with how
they're solved here. It's only a one-line delete to disable the LRU
limiting behavior of the second patch, at which point it's strictly
internals code with no expected functional impact that alternate
approaches might be built on.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
[ Attachment, skipping... ]
[ Attachment, skipping... ]
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +