Dead Space Map version 2
This is the second proposal for Dead Space Map (DSM).
Here is the previous discussion:
http://archives.postgresql.org/pgsql-hackers/2006-12/msg01188.php
I'll post the next version of the Dead Space Map patch to -patches.
I've implemented 2bits/page bitmap and new vacuum commands.
Memory management and recovery features are not done yet.
I think it's better to get DSM and HOT in together. DSM is good for complex
update patterns but not for heavily updated cases. HOT has the opposite
characteristics, as far as I can see. I think they can cover for each other.
2bits/page bitmap
-----------------
Each heap page has one of 4 states in the dead space map: HIGH, LOW, UNFROZEN and
FROZEN. VACUUM uses the states to reduce the number of target pages.
(A minimal sketch of one possible encoding follows the list.)
- HIGH : High priority to vacuum. Maybe many dead tuples in the page.
- LOW : Low priority to vacuum. Maybe few dead tuples in the page.
- UNFROZEN : No dead tuples, but some unfrozen tuples in the page.
- FROZEN : No dead nor unfrozen tuples in the page.
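As a rough illustration of the 2 bits/page encoding, here is a minimal C sketch;
the names (DSMState, dsm_get_state, dsm_set_state) and the numeric values are
hypothetical, not taken from the patch:

#include <stdint.h>

/* Hypothetical 2-bit-per-page encoding: 4 heap pages per DSM byte. */
typedef enum
{
    DSM_FROZEN   = 0,    /* no dead and no unfrozen tuples        */
    DSM_UNFROZEN = 1,    /* no dead tuples, some unfrozen ones    */
    DSM_LOW      = 2,    /* probably few dead tuples              */
    DSM_HIGH     = 3     /* probably many dead tuples             */
} DSMState;

static DSMState
dsm_get_state(const uint8_t *map, uint32_t blkno)
{
    return (DSMState) ((map[blkno / 4] >> ((blkno % 4) * 2)) & 0x3);
}

static void
dsm_set_state(uint8_t *map, uint32_t blkno, DSMState state)
{
    int     shift = (blkno % 4) * 2;

    map[blkno / 4] = (uint8_t) ((map[blkno / 4] & ~(0x3 << shift)) |
                                ((int) state << shift));
}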
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
When we commit the transaction, the updated tuples need only FREEZE;
that's why the after-page is marked as LOW. However, if we roll back, the
after-page should be vacuumed, so we should mark the page LOW, not UNFROZEN;
at UPDATE time we don't know whether the transaction will commit or roll back.
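Continuing the sketch above, the transition on UPDATE could look like this
(dsm_note_update is a hypothetical helper; the real patch hooks this into the
heap access code):

/*
 * Sketch: the page holding the old tuple version becomes HIGH; the page
 * receiving the new version is raised to at least LOW, because the new
 * tuple must be vacuumed if the transaction aborts and frozen if it
 * commits.
 */
static void
dsm_note_update(uint8_t *map, uint32_t old_blkno, uint32_t new_blkno)
{
    dsm_set_state(map, old_blkno, DSM_HIGH);
    if (dsm_get_state(map, new_blkno) < DSM_LOW)
        dsm_set_state(map, new_blkno, DSM_LOW);
}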
If we combine this with the HOT patch, pages with HOT tuples are probably
marked as UNFROZEN, because we don't bother vacuuming HOT tuples; they can
be removed incrementally and don't require explicit vacuums.
As future work, we can do index-only scans for tuples that are in UNFROZEN or
FROZEN pages. (Currently not implemented.)
VACUUM commands
---------------
VACUUM now only scans the pages that possibly have dead tuples. VACUUM ALL,
a new syntax, behaves the same as before.
- VACUUM FULL : Not changed. Scans all pages and compacts them.
- VACUUM ALL : Scans all pages; behaves the same as the previous VACUUM.
- VACUUM : Usually scans only HIGH pages, but also LOW and UNFROZEN
pages on vacuums for preventing XID wraparound. (A sketch follows the list.)
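Building on the earlier sketch, the page-skipping described above could be
expressed roughly as follows (dsm_page_needs_visit is a hypothetical helper;
the real logic would live in the lazy vacuum code):

/*
 * Sketch: a plain VACUUM visits only HIGH pages; a wraparound-preventing
 * scan also visits LOW and UNFROZEN pages.  FROZEN pages are skipped in
 * both cases.
 */
static bool
dsm_page_needs_visit(const uint8_t *map, uint32_t blkno, bool freeze_scan)
{
    DSMState    state = dsm_get_state(map, blkno);

    if (state == DSM_HIGH)
        return true;
    if (freeze_scan && (state == DSM_LOW || state == DSM_UNFROZEN))
        return true;
    return false;
}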
The guarantee about the oldest XID after VACUUM is not changed: there should be
no tuples whose XIDs are older than (current XID - vacuum_freeze_min_age) after a
VACUUM. If VACUUM can keep that guarantee, it scans only HIGH pages;
otherwise, it scans HIGH, LOW and UNFROZEN pages and freezes them.
Performance issues
------------------
* Enable/Disable DSM tracking per table
DSM imposes some additional work. If we know specific tables
where DSM does not work well, e.g. heavily updated small tables, we can
disable DSM for them. The syntax is:
ALTER TABLE name SET (dsm=true/false);
* Dead Space State Cache
The DSM management module is guarded by one LWLock, DeadSpaceLock.
Almost all accesses to the DSM require only a shared lock, but the frequency
of shared locking was very high (tied with BufMappingLock) in my research.
To avoid the lock contention, I added a cache of the dead space state in the
BufferDesc flags. Backends check the flags first, and avoid locking when
there is no need to.
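A minimal sketch of that fast path, assuming a hypothetical BM_DSM_HIGH flag bit
and a hypothetical shared_dsm_map array; for simplicity the sketch takes
DeadSpaceLock exclusively and ignores buffer-header locking details:

/*
 * Sketch: consult the cached per-buffer flag first; only touch the shared
 * DSM (and DeadSpaceLock) when the cached state says the page is not yet
 * recorded as HIGH.
 */
static void
dsm_mark_page_high(BufferDesc *buf, uint32_t blkno)
{
    if (buf->flags & BM_DSM_HIGH)
        return;                 /* already recorded; no lock needed */

    LWLockAcquire(DeadSpaceLock, LW_EXCLUSIVE);
    dsm_set_state(shared_dsm_map, blkno, DSM_HIGH);
    buf->flags |= BM_DSM_HIGH;
    LWLockRelease(DeadSpaceLock);
}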
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
This is to make FROZEN pages rather than UNFROZEN pages as far as possible,
in order to reduce the work of XID-wraparound vacuums.
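A sketch of that threshold choice (choose_freeze_cutoff is a hypothetical
helper, and the page_is_dirty test merely stands in for whatever condition the
patch actually uses):

/*
 * Sketch: pages we are dirtying anyway get frozen aggressively up to
 * OldestXmin; other pages only up to the normal FreezeLimit, so that as
 * many pages as possible end up FROZEN rather than UNFROZEN.
 */
static TransactionId
choose_freeze_cutoff(bool page_is_dirty,
                     TransactionId OldestXmin,
                     TransactionId FreezeLimit)
{
    return page_is_dirty ? OldestXmin : FreezeLimit;
}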
Memory management
-----------------
In the current implementation, DSM allocates a block of memory at startup and
we cannot resize it while running. That is probably enough because DSM consumes
very little memory -- 32MB per 1TB of database.
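That figure follows directly from the 2 bits/page encoding, assuming the
default 8KB block size:

  1TB / 8KB per page           = 128M heap pages
  128M pages * 2 bits per page = 256 Mbits = 32MB of DSM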
There are 3 parameters for FSM and DSM.
- max_fsm_pages = 204800
- max_fsm_relations = 1000 (= max_dsm_relations)
- max_dsm_pages = 4096000
I'm thinking of changing them into 2 new parameters. We would allocate memory
for DSM that can hold all of estimated_database_size, and for FSM 50% or
so of that size. Is this reasonable?
- estimated_max_relations = 1000
- estimated_database_size = 4GB (= about max_fsm_pages * 8KB * 2)
Recovery
--------
I already have a recovery extension. However, it can recover the DSM
but not the FSM. Do we also need to restore the FSM? If we don't, unreusable
pages might be left in heaps. Of course they could be reused if another
tuple in the page is updated, but VACUUM will not find those pages.
Comments and suggestions are really appreciated.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
On Tue, Feb 27, 2007 at 12:05:57PM +0900, ITAGAKI Takahiro wrote:
Each heap page has one of 4 states in the dead space map: HIGH, LOW, UNFROZEN and
FROZEN. VACUUM uses the states to reduce the number of target pages.
- HIGH : High priority to vacuum. Maybe many dead tuples in the page.
- LOW : Low priority to vacuum. Maybe few dead tuples in the page.
- UNFROZEN : No dead tuples, but some unfrozen tuples in the page.
- FROZEN : No dead nor unfrozen tuples in the page.
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
When we commit the transaction, the updated tuples need only FREEZE;
that's why the after-page is marked as LOW. However, if we roll back, the
after-page should be vacuumed, so we should mark the page LOW, not UNFROZEN;
at UPDATE time we don't know whether the transaction will commit or roll back.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for) when it
writes a page out, and adjust the DSM at that time.
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
This is to make FROZEN pages rather than UNFROZEN pages as far as possible,
in order to reduce the work of XID-wraparound vacuums.
Do you mean using OldestXmin instead of FreezeLimit?
Perhaps it might be better to save that optimization for later...
In the current implementation, DSM allocates a block of memory at startup and
we cannot resize it while running. That is probably enough because DSM consumes
very little memory -- 32MB per 1TB of database.
There are 3 parameters for FSM and DSM.
- max_fsm_pages = 204800
- max_fsm_relations = 1000 (= max_dsm_relations)
- max_dsm_pages = 4096000
I'm thinking of changing them into 2 new parameters. We would allocate memory
for DSM that can hold all of estimated_database_size, and for FSM 50% or
so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I already have a recovery extension. However, it can recover the DSM
but not the FSM. Do we also need to restore the FSM? If we don't, unreusable
pages might be left in heaps. Of course they could be reused if another
tuple in the page is updated, but VACUUM will not find those pages.
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
I suspect we don't need perfect recoverability... theoretically we could
just commit the FSM after vacuum frees pages and leave it at that; if we
revert to that after a crash, backends will grab pages from the FSM only
to find there's no more free space, at which point they could pull the
page from the FSM and find another one. This would lead to degraded
performance for a while after a crash, but that might be a good
trade-off.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
I suspect we don't need perfect recoverability...
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit. I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound. So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
regards, tom lane
On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:
I think it's better to get DSM and HOT in together. DSM is good for
complex update patterns but not for heavily updated cases. HOT has
the opposite characteristics, as far as I can see. I think they can cover
for each other.
Very much agreed.
I'll be attempting to watch for any conflicting low-level assumptions as
we progress towards deadline.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:
If we combine this with the HOT patch, pages with HOT tuples are probably
marked as UNFROZEN, because we don't bother vacuuming HOT tuples; they can
be removed incrementally and don't require explicit vacuums.
Perhaps avoid DSM entries for HOT updates completely?
VACUUM commands
---------------
VACUUM now only scans the pages that possibly have dead tuples. VACUUM ALL,
a new syntax, behaves the same as before.
- VACUUM FULL : Not changed. Scans all pages and compacts them.
- VACUUM ALL : Scans all pages; behaves the same as the previous VACUUM.
- VACUUM : Usually scans only HIGH pages, but also LOW and UNFROZEN
pages on vacuums for preventing XID wraparound.
Sounds good.
Performance issues
------------------
* Enable/Disable DSM tracking per table
DSM imposes some additional work. If we know specific tables
where DSM does not work well, e.g. heavily updated small tables, we can
disable DSM for them. The syntax is:
ALTER TABLE name SET (dsm=true/false);
How about a dsm_tracking_limit GUC? (Better name please)
The number of pages in a table before we start tracking DSM entries for
it. DSM only gives worthwhile benefits for larger tables anyway, so let
the user define what large means for them.
dsm_tracking_limit = 1000 by default.
* Dead Space State Cache
The DSM management module is guarded by one LWLock, DeadSpaceLock.
Almost all accesses to the DSM require only a shared lock, but the frequency
of shared locking was very high (tied with BufMappingLock) in my research.
To avoid the lock contention, I added a cache of the dead space state in the
BufferDesc flags. Backends check the flags first, and avoid locking when
there is no need to.
ISTM there should be a point at which the DSM is so full that we don't bother to
keep track any longer, so we can drop that information. For example if a
user runs UPDATE without a WHERE clause, there's no point in tracking the
whole relation.
Memory management
-----------------
In the current implementation, DSM allocates a block of memory at startup and
we cannot resize it while running. That is probably enough because DSM consumes
very little memory -- 32MB per 1TB of database.
That sounds fine.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Tue, 2007-02-27 at 00:55 -0500, Tom Lane wrote:
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
I suspect we don't need perfect recoverability...
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit. I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound.
Agreed.
So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
I'm inclined to think that this close to the deadline it would be more robust to
go with the simpler option. So, agreed on one bit per page.
We can revisit the 2 bits/page idea easily for later releases. If the
DSM is non-transactional, upgrading to a new format in the future should
be very easy.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
"Jim C. Nasby" <jim@nasby.net> wrote:
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
No, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN
pages to mean "all tuples in the page are visible to all transactions" for
index-only scans in the future.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for) when it
writes a page out, and adjust the DSM at that time.
Yeah, I feel it is worth optimizing, too. One question is how we should treat
dirty pages written by backends rather than by the bgwriter. If we want to add
some work to the bgwriter, do we also need to make the bgwriter write almost
all dirty pages?
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
Do you mean using OldestXmin instead of FreezeLimit?
Yes, we will use OldestXmin as the threshold to freeze tuples in
dirty pages or pages that have some dead tuples. Otherwise, many UNFROZEN
pages will remain after vacuum and they will cost us in the next
vacuum for preventing XID wraparound.
I'm thinking of changing them into 2 new parameters. We would allocate memory
for DSM that can hold all of estimated_database_size, and for FSM 50% or
so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages. For DSM, max_dsm_pages should represent the
whole database size. On the other hand, what meaning does
max_fsm_pages have? (estimated_updatable_size?)
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane <tgl@sss.pgh.pa.us> wrote:
Vacuum for XID wraparound would have to hit every page regardless.
There is one problem at this point. If we want to guarantee that there
are no tuples whose XIDs are older than pg_class.relfrozenxid, we must scan
all pages for XID wraparound on every vacuum. So I used two thresholds
for treating XIDs, commented as follows. Do you have better ideas
on this point?
/*
 * We use vacuum_freeze_min_age to determine whether a freeze scan is
 * needed, but half of vacuum_freeze_min_age for the actual freeze limit,
 * so that vacuums to prevent XID wraparound do not occur too frequently.
 */
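A sketch of how those two thresholds could be derived (compute_freeze_thresholds
is a hypothetical name, and XID wraparound arithmetic is ignored for clarity):

static void
compute_freeze_thresholds(TransactionId *scan_limit,
                          TransactionId *freeze_limit)
{
    TransactionId current = ReadNewTransactionId();

    *scan_limit   = current - vacuum_freeze_min_age;      /* do we need a freeze scan? */
    *freeze_limit = current - vacuum_freeze_min_age / 2;  /* what actually gets frozen */
}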
Also, normal vacuums use the DSM and freeze vacuums do not, so a vacuum
sometimes takes much longer than usual. Won't that surprise bother users?
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Simon Riggs" <simon@2ndquadrant.com> wrote:
If we combine this with the HOT patch, pages with HOT tuples are probably
marked as UNFROZEN, because we don't bother vacuuming HOT tuples; they can
be removed incrementally and don't require explicit vacuums.
Perhaps avoid DSM entries for HOT updates completely?
Yes, if we employ 1 bit/page (worth vacuuming or not).
Or no, if 2 bits/page, because HOT updates change page states to UNFROZEN.
* Enable/Disable DSM tracking per tables
How about a dsm_tracking_limit GUC? (Better name please)
The number of pages in a table before we start tracking DSM entries for
it. DSM only gives worthwhile benefits for larger tables anyway, so let
the user define what large means for them.
dsm_tracking_limit = 1000 by default.
Sounds good. How about small_table_size = 8MB for the variable?
I found that we already have a value used as the truncation
threshold for vacuum (REL_TRUNCATE_MINIMUM = 1000 in vacuumlazy.c).
I think they have the same purpose in the treatment of small tables,
and we can use the same variable in both places.
* Dead Space State Cache
ISTM there should be a point at which the DSM is so full that we don't bother to
keep track any longer, so we can drop that information. For example if a
user runs UPDATE without a WHERE clause, there's no point in tracking the
whole relation.
It's a bit difficult. We have to lock the DSM *before* we see whether
the table is tracked or not. So we need to cache the tracked state
in the relcache entry, but it requires some work to keep the cached
state coherent with the shared state.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane wrote:
The main problem with the levels proposed by Takahiro-san is that any
transition from FROZEN to not-FROZEN *must* be exactly recovered,
because vacuum will never visit an allegedly frozen page at all. This
appears to require WAL-logging DSM state changes, which is a pretty
serious performance hit.
I doubt it would be a big performance hit. AFAICS, all the information
needed to recover the DSM is already written to WAL, so it wouldn't need
any new WAL records.
I'd be happier if the DSM content could be
treated as just a hint. I think that means not trusting it for whether
a page is frozen to the extent of not needing vacuum even for
wraparound. So I'm inclined to propose that there be only two states
(hence only one DSM bit per page): page needs vacuum for space recovery,
or not. Vacuum for XID wraparound would have to hit every page
regardless.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans. Index-only scans will obviously require a lot more
work than just the DSM, but I'd like to have a solution that enables it
in the future.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Tue, Feb 27, 2007 at 12:55:21AM -0500, Tom Lane wrote:
"Jim C. Nasby" <jim@nasby.net> writes:
Yes, DSM would make FSM recovery more important, but I thought it was
recoverable now? Or is that only on a clean shutdown?
Currently we throw away FSM during any non-clean restart. This is
probably overkill but I'm quite unclear what would be a safe
alternative.
My thought would be to revert to an FSM that has pages marked as free
that no longer are. Could be done by writing the FSM out every time we
add pages to it. After an unclean restart backends would be getting
pages from the FSM that didn't have free space, in which case they'd
need to yank that page out of the FSM and request a new one. Granted,
this means extra IO until the FSM gets back to a realistic state, but I
suspect that's better than bloating tables out until the next vacuum.
And it's ultimately less IO than re-vacuuming every table to rebuild the
FSM.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, Feb 27, 2007 at 05:38:39PM +0900, ITAGAKI Takahiro wrote:
"Jim C. Nasby" <jim@nasby.net> wrote:
If we UPDATE a tuple, the original page containing the tuple is marked
as HIGH and the new page where the updated tuple is placed is marked as LOW.
Don't you mean UNFROZEN?
No, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN
pages to mean "all tuples in the page are visible to all transactions" for
index-only scans in the future.
Ahh, ok. Makes sense, though I tend to agree with others that it's
better to leave that off for now, or at least do the initial patch
without it.
What makes it more important to mark the original page as HIGH instead
of LOW, like the page with the new tuple? The description of the states
indicates that there would likely be a lot more dead tuples in a HIGH
page than in a LOW page.
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples there are (or how much space the dead tuples account for) when it
writes a page out, and adjust the DSM at that time.
Yeah, I feel it is worth optimizing, too. One question is how we should treat
dirty pages written by backends rather than by the bgwriter. If we want to add
some work to the bgwriter, do we also need to make the bgwriter write almost
all dirty pages?
IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
* Aggressive freezing
We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.
Do you mean using OldestXmin instead of FreezeLimit?
Yes, we will use OldestXmin as the threshold to freeze tuples in
dirty pages or pages that have some dead tuples. Otherwise, many UNFROZEN
pages will remain after vacuum and they will cost us in the next
vacuum for preventing XID wraparound.
Another good idea. If it's not too invasive I'd love to see that as a
stand-alone patch so that we know it can get in.
I'm thinking of changing them into 2 new parameters. We would allocate memory
for DSM that can hold all of estimated_database_size, and for FSM 50% or
so of that size. Is this reasonable?
I don't think so, at least not until we get data from the field about
what's typical. If the DSM is tracking every page in the cluster then
I'd expect the FSM to be closer to 10% or 20% of that, anyway.
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages. For DSM, max_dsm_pages should represent the
whole database size. On the other hand, what meaning does
max_fsm_pages have? (estimated_updatable_size?)
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there's also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
That raises another question... what happens when we run out of DSM
space?
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
I'd be happier if the DSM content could be
treated as just a hint.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans.
To implement index-only scans, the DSM would have to be expected to
provide 100% reliable coverage, which will increase its cost and
complexity by orders of magnitude. If you insist on that, I will bet
you lunch at a fine restaurant that it doesn't make it into 8.3.
regards, tom lane
"Jim C. Nasby" <jim@nasby.net> wrote:
I'd like to add some kind of logical flavor to max_fsm_pages
and max_dsm_pages.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there's also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
I see. We need separate variables for FSM and DSM.
Here is a new proposal to replace the variables in the Free Space Map
section of postgresql.conf. Are these changes acceptable? If so, I'd
like to rewrite the code to use them.
# - Space Management -
managed_relations = 1000 # min 100, ~120 bytes each
managed_freespaces = 2GB # 6 bytes of shared memory per 8KB
managed_deadspaces = 8GB # 4KB of shared memory per 32MB
managed_relations:
Replacement for max_fsm_relations. It is also used by DSM.
managed_freespaces:
Replacement for max_fsm_pages. The meaning is unchanged,
but it can be set in bytes.
managed_deadspaces:
A new parameter for DSM. It might be better to scale it
with the whole database size.
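For reference, the shared-memory cost implied by those example settings and the
per-unit figures in the comments:

  managed_freespaces = 2GB : 2GB / 8KB  = 256K pages,  * 6 bytes = 1.5MB
  managed_deadspaces = 8GB : 8GB / 32MB = 256 chunks,  * 4KB     = 1MB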
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Jim C. Nasby" <jim@nasby.net> wrote:
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.
In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there's also a lot of cases where FSM size will not scale
the same way DSM size will, such as when there's historical data in the
database.
A bitmapped FSM is interesting. Maybe strict accuracy is not needed for the FSM.
If we changed the FSM to use 2 bits/page bitmaps, it would require only 1/48 of
the shared memory it needs now. However, 6 bytes/page is small enough for normal
use. We would need to reconsider if we go to TB-class, heavily updated databases.
That raises another question... what happens when we run out of DSM
space?
First, discard completely clean memory chunks in the DSM. 'Clean' means all of
the tuples managed by the chunk are frozen. This is a lossless transition.
Second, discard tracked tables, and their chunks, that have been least recently
vacuumed. We can assume those tables have many dead tuples and an almost-full
scan will be required anyway, so we don't bother to keep tracking such tables.
Many optimizations should still remain possible at this point, but I'll stick
to a not-so-complex suggestion in the meantime.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
I'd be happier if the DSM content could be
treated as just a hint.
If we don't have a frozen state, we can't use the DSM to implement
index-only scans.
To implement index-only scans, the DSM would have to be expected to
provide 100% reliable coverage, which will increase its cost and
complexity by orders of magnitude. If you insist on that, I will bet
you lunch at a fine restaurant that it doesn't make it into 8.3.
:)
While I understand that 100% reliable coverage is a significantly
stronger guarantee, I don't see any particular problems in implementing
that. WAL logging isn't that hard.
I won't insist, I'm not the one doing the programming after all.
Anything is better than what we have now. However, I do hope that
whatever is implemented doesn't need a complete rewrite to make it 100%
reliable in the future.
The basic wish I have is to not use a fixed size shared memory area like
FSM for the DSM. I'd like it to use the shared buffers instead, which
makes the memory management and tuning easier. And it also makes it
easier to get the WAL logging right, even if it's not done for 8.3 but
added later.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Hello, long time no see.
This topic looks interesting. I'm envious of Itagaki-san and the others.
I can't do what I want now, due to other work that I don't want to do
(isn't my boss seeing this?). I wish I could join the community some
day and contribute to the development like the great experts here.
# I can't wait to try Itagaki-san's latest patch for load distributed
checkpoint in my environment and report the result.
# But I may not have enough time...
Let me give some comments below.
From: "Heikki Linnakangas" <heikki@enterprisedb.com>
While I understand that 100% reliable coverage is a significantly
stronger guarantee, I don't see any particular problems in
implementing
that. WAL logging isn't that hard.
I won't insist, I'm not the one doing the programming after all.
Anything is better than what we have now. However, I do hope that
whatever is implemented doesn't need a complete rewrite to make it
100%
reliable in the future.
The basic wish I have is to not use a fixed size shared memory area
like
FSM for the DSM. I'd like it to use the shared buffers instead,
which
makes the memory management and tuning easier. And it also makes it
easier to get the WAL logging right, even if it's not done for 8.3
but
added later.
I hope for the same thing as Heikki-san. Though I'm relatively new to
the PostgreSQL source code, I don't think it is very difficult (at least
for the experts here) to implement a reliable space management scheme,
so I proposed the following before -- not a separate memory area for
the FSM, but treating it the same way as data files in the shared buffers.
Though Tom-san is worried about performance, what would make the
performance degrade greatly? Additional WAL records for updating
space management structures are written sequentially in batches.
Additional dirty shared buffers are written efficiently by the kernel
(at least for now), and PostgreSQL would be released from the giant
lwlock for the FSM.
Some performance degradation would surely result. However,
reliability is more important, because "vacuum" is almost the greatest
concern for real, serious users (not for hobbyists who enjoy
performance). Can anybody say to users "we are working hard, but our
work may not be reliable and sometimes fails. Can you check whether our
vacuuming effort failed and try this..."?
And I'm afraid that increasing the number of configuration parameters
is unacceptable for users. It is merely an excuse of the developers.
PostgreSQL already has more than 100 parameters. Some of them, such
as bgwriter_*, are difficult for normal users to understand. It's
best to use the shared_buffers parameter and show how to set it in the
documentation.
Addressing the vacuum problem correctly is very important. I hope you
don't introduce new parameters for unfinished work and force users to
check the manual to change the parameters in later versions, i.e.
"managed_* parameters are not supported from this release. Please use
shared_buffers...". Is it a "must" to release 8.3 by this summer? I
think that delaying the release a bit for a correct (reliable) vacuum
solution is worth it.
From: "Takayuki Tsunakawa" <tsunakawa.takay@jp.fujitsu.com>
Yes! I'm completely in favor of Itagaki-san. Separating the cache
for
FSM may produce a new configuration parameter like fsm_cache_size,
which the normal users would not desire (unless they like enjoying
difficult DBMS.)
I think that integrating the treatment of space management structure
and data area is good. That means, for example, implementing "Free
Space Table" described in section 14.2.2.1 of Jim Gray's book
"Transaction Processing: Concepts and Techniques", though it may
have
been discussed in PostgreSQL community far long ago (really?). Of
course, some refinements may be necessary to tune to PostgreSQL's
concept, say, creating one free space table file for each data file
to
make the implementation easy. It would reduce the source code
solely
for FSM.
In addition, it would provide the transactional space management.
If
I understand correctly, in the current implementation, updates to
FSM
are lost when the server crashes, aren't they? The idea assumes
that
FSM will be rebuilt by vacuum because vacuum is inevitable. If
updates to the space management area were made transactional, it might
provide the infrastructure for a "vacuumless PostgreSQL."
On Wed, Feb 28, 2007 at 04:10:09PM +0900, ITAGAKI Takahiro wrote:
"Jim C. Nasby" <jim@nasby.net> wrote:
At some point it might make sense to convert the FSM into a bitmap; that
way everything just scales with database size.In the meantime, I'm not sure if it makes sense to tie the FSM size to
the DSM size, since each FSM page requires 48x the storage of a DSM
page. I think there's also a lot of cases where FSM size will not scale
the same was DSM size will, such as when there's historical data in the
database.Bitmapped FSM is interesting. Maybe strict accuracy is not needed for FSM.
If we change FSM to use 2 bits/page bitmaps, it requires only 1/48 shared
memory by now. However, 6 bytes/page is small enough for normal use. We need
to reconsider it if we would go into TB class heavily updated databases.That raises another question... what happens when we run out of DSM
space?First, discard completely clean memory chunks in DSM. 'Clean' means all of
the tuples managed by the chunk are frozen. This is a lossless transition.Second, discard tracked tables and its chunks that is least recently
vacuumed. We can assume those tables have many dead tuples and almost
fullscan will be required. We don't bother to keep tracking to such tables.Many optimizations should still remain at this point, but I'll make
a not-so-complex suggestions in the meantime.
Actually, I have to agree with Heikki and Takayuki-san... I really like
the idea of managing DSM (and FSM for that matter) using shared_buffers.
If we do that, that means that we could probably back them to disk very
easily.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
"Jim C. Nasby" <jim@nasby.net> wrote:
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples (or how much space the dead tuples account for) when it
writes a page out and adjust the DSM at that time.Yeah, I feel it is worth optimizable, too. One question is, how we treat
dirty pages written by backends not by bgwriter? If we want to add some
works in bgwriter, do we also need to make bgwriter to write almost of
dirty pages?IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
I'm working on making the bgwriter write almost all dirty pages. This is
a proposal for it, using automatic adjustment of bgwriter_lru_maxpages.
The bgwriter_lru_maxpages value will be adjusted to equal the number of calls
of StrategyGetBuffer() per cycle, with some safety margin (x2 at present).
The counter is incremented per call and reset to zero at StrategySyncStart().
(A sketch of the idea follows.)
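In outline, the adjustment could look like the following sketch (the function
names are hypothetical; the real counters would live in the buffer manager):

/*
 * Sketch: count StrategyGetBuffer() calls during one bgwriter cycle and
 * set bgwriter_lru_maxpages to twice that count (the x2 safety margin
 * mentioned above).  The counter is reset each cycle, as at
 * StrategySyncStart().
 */
static int  strategy_calls_this_cycle = 0;

void
note_strategy_get_buffer(void)
{
    strategy_calls_this_cycle++;    /* would be called from StrategyGetBuffer() */
}

int
compute_lru_maxpages(void)
{
    int     new_maxpages = strategy_calls_this_cycle * 2;

    strategy_calls_this_cycle = 0;  /* reset for the next cycle */
    return new_maxpages;
}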
This patch alone is not very useful, except for hiding hard-to-tune parameters
from users. However, it would be a first step toward allowing the bgwriter to
do some work before writing dirty buffers:
- [DSM] Pick out pages worth vacuuming and register them in the DSM.
- [HOT] Do a per-page vacuum for HOT-updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD-updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
I tested the attached patch with pgbench -s5 (80MB) and shared_buffers=32MB.
I got the expected result, shown below: over 75% of buffers are written by the
bgwriter. In addition, the automatically adjusted bgwriter_lru_maxpages values
were much higher than the default value (5). This shows that the most suitable
value depends greatly on the workload.
benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
default | 300tps | 100% | 77.5% | 120 pages/cycle
with sleep | 150tps | 50% | 98.6% | 70 pages/cycle
I hope that this patch will be a first step toward an intelligent bgwriter.
Comments welcome.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Jim C. Nasby" <jim@nasby.net> wrote:
Perhaps it would be better to have the bgwriter take a look at how many
dead tuples (or how much space the dead tuples account for) when it
writes a page out and adjust the DSM at that time.Yeah, I feel it is worth optimizable, too. One question is, how we treat
dirty pages written by backends not by bgwriter? If we want to add some
works in bgwriter, do we also need to make bgwriter to write almost of
dirty pages?IMO yes, we want the bgwriter to be the only process that's normally
writing pages out. How close we are to that, I don't know...
I'm working on making the bgwriter to write almost of dirty pages. This is
the proposal for it using automatic adjustment of bgwriter_lru_maxpages.
The bgwriter_lru_maxpages value will be adjusted to the equal number of calls
of StrategyGetBuffer() per cycle with some safety margins (x2 at present).
The counter are incremented per call and reset to zero at StrategySyncStart().
This patch alone is not so useful except for hiding hardly tunable parameters
from users. However, it would be a first step of allow bgwriters to do some
works before writing dirty buffers.
- [DSM] Pick out pages worth vaccuming and register them into DSM.
- [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)
- [TODO Item] Shrink expired COLD updated tuples to just their headers.
- Set commit hint bits to reduce subsequent writes of blocks.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php
I tested the attached patch on pgbench -s5 (80MB) with shared_buffers=32MB.
I got an expected result as below. Over 75% of buffers are written by
bgwriter. In addition , automatic adjusted bgwriter_lru_maxpages values
were much higher than the default value (5). It shows that the most suitable
values greatly depends on workloads.
benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages
------------+------------+-----------+-------------+-----------------------
default | 300tps | 100% | 77.5% | 120 pages/cycle
with sleep | 150tps | 50% | 98.6% | 70 pages/cycle
I hope that this patch will be a first step of the intelligent bgwriter.
Comments welcome.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center