ExclusiveLock

Started by Simon Riggs - about 21 years ago - 27 messages
#1 Simon Riggs
simon@2ndquadrant.com

Recent runs of DBT-2 show very occasional ExclusiveLock (s) being held
by transactions, sometimes waiting to be granted.

On Sat, Nov 06, 2004 at 11:40:49AM +0000, Simon Riggs wrote:

The lockstats just show there's all those Exclusive Locks on order_line, right?:
http://www.osdl.org/projects/dbt2dev/results/dev4-010/191/db/lockstats.out

The output is...
relname | pid | mode | granted
---------------+-------+------------------+---------
new_order | 21735 | AccessShareLock | t
new_order | 21735 | RowExclusiveLock | t
orders | 21715 | AccessShareLock | t
orders | 21715 | RowExclusiveLock | t
pg_class | 23254 | AccessShareLock | t
order_line | 21715 | AccessShareLock | t
order_line | 21715 | RowExclusiveLock | t
order_line | 21735 | ExclusiveLock | f
new_order | 21715 | AccessShareLock | t

...

which shows a non-granted lock, waiting for a Table-level ExclusiveLock
on order_line. This is unexpected (by me, that is...)

According to the manual, Exclusive Lock is not normally held by SQL
statements. There are no LOCK TABLE statements in DBT-2.

My digging reveals that ExclusiveLock is held on user relations by
_bt_getbuf(), when we extend a btree relation by one page.

I also find ExclusiveLock is held by
- LISTEN/NOTIFY
- XactLockTableInsert()/XactLockTableDelete()
but those don't look like they lock user relations

LockAcquire() says its locks show in lock tables, so is index extension
the source of the ExclusiveLocks shown in the lock output? Presumably
they would be short duration, so you wouldn't see them unless you caught
it at just the right moment... unless we start to queue up on the
leading edge of the index.

I expect index extension to be a source of contention anyway, but are we
actually *seeing* it? Or is it another issue, and is this an 8.0
problem?

--
Best Regards, Simon Riggs
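
The DBT-2 lockstats script itself is not shown in the thread, so the query
behind output of that shape is only a guess; a minimal libpq sketch joining
pg_locks to pg_class for relname, pid, mode and granted might look like this:

/*
 * Sketch only: a guess at the kind of query behind lockstats.out --
 * pg_locks joined to pg_class to list relname/pid/mode/granted.
 * Requires libpq; the dbname is assumed.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=dbt2");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PGresult *res = PQexec(conn,
        "SELECT c.relname, l.pid, l.mode, l.granted "
        "FROM pg_locks l JOIN pg_class c ON c.oid = l.relation;");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s | %s | %s | %s\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1),
                   PQgetvalue(res, i, 2), PQgetvalue(res, i, 3));
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}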

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#1)
Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

Recent runs of DBT-2 show very occasional ExclusiveLock (s) being held
by transactions, sometimes waiting to be granted.

I think you are right that these reflect heap or btree-index extension
operations. Those do not actually take locks on the *table* however,
but locks on a single page within it (which are completely orthogonal to
table locks and don't conflict). The pg_locks output leaves something
to be desired, because you can't tell the difference between table and
page locks.

It's odd that your example does not appear to show someone else holding
a conflicting lock.

regards, tom lane

#3 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#2)
Re: ExclusiveLock

On Mon, 2004-11-08 at 21:37, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

Recent runs of DBT-2 show very occasional ExclusiveLock (s) being held
by transactions, sometimes waiting to be granted.

I think you are right that these reflect heap or btree-index extension
operations. Those do not actually take locks on the *table* however,
but locks on a single page within it (which are completely orthogonal to
table locks and don't conflict). The pg_locks output leaves something
to be desired, because you can't tell the difference between table and
page locks.

Good. Thought it was worth discussion...

It's odd that your example does not appear to show someone else holding
a conflicting lock.

There is....I didn't copy the whole lock table output...here it is...

relname | pid | mode | granted
---------------+-------+------------------+---------
new_order | 21735 | AccessShareLock | t
new_order | 21735 | RowExclusiveLock | t
orders | 21715 | AccessShareLock | t
orders | 21715 | RowExclusiveLock | t
pg_class | 23254 | AccessShareLock | t
order_line | 21715 | AccessShareLock | t
order_line | 21715 | RowExclusiveLock | t
order_line | 21735 | ExclusiveLock | f
new_order | 21715 | AccessShareLock | t
new_order | 21715 | RowExclusiveLock | t
customer | 21715 | AccessShareLock | t
pk_order_line | 21735 | AccessShareLock | t
pk_order_line | 21735 | RowExclusiveLock | t
item | 21715 | AccessShareLock | t
orders | 21735 | AccessShareLock | t
orders | 21735 | RowExclusiveLock | t
order_line | 21735 | AccessShareLock | t
order_line | 21735 | RowExclusiveLock | t
stock | 21715 | AccessShareLock | t
stock | 21715 | RowExclusiveLock | t
order_line | 21715 | ExclusiveLock | t
pk_order_line | 21715 | RowExclusiveLock | t
pg_locks | 23254 | AccessShareLock | t
district | 21715 | AccessShareLock | t
district | 21715 | RowShareLock | t
district | 21715 | RowExclusiveLock | t
warehouse | 21715 | AccessShareLock | t
customer | 21735 | AccessShareLock | t
customer | 21735 | RowExclusiveLock | t
(29 rows)

Pids 21715 and 21735 are conflicting.

There's also another example where the lock table output is > 1400 rows,
with two lock requests pending.

The oprofile for this run looks like this: (but is not of course a
snapshot at a point in time, like the lock list)

CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples % app name symbol name
170746 42.7220 vmlinux-2.6.8.1-osdl2 ia64_pal_call_static
18934 4.7374 libc-2.3.2.so (no symbols)
10691 2.6750 postgres FunctionCall2
9814 2.4555 postgres hash_seq_search
8654 2.1653 postgres SearchCatCache
7389 1.8488 postgres AllocSetAlloc
6122 1.5318 postgres hash_search
5707 1.4279 postgres OpernameGetCandidates
4901 1.2263 postgres StrategyDirtyBufferList
4627 1.1577 postgres XLogInsert
4424 1.1069 postgres pglz_decompress
4371 1.0937 vmlinux-2.6.8.1-osdl2 __copy_user
3796 0.9498 vmlinux-2.6.8.1-osdl2 finish_task_switch
3483 0.8715 postgres LWLockAcquire
3458 0.8652 postgres eqjoinsel
3001 0.7509 vmlinux-2.6.8.1-osdl2 get_exec_dcookie
2824 0.7066 postgres AtEOXact_CatCache
2745 0.6868 postgres _bt_compare
2730 0.6831 postgres nocachegetattr
2715 0.6793 postgres SearchCatCacheList
2659 0.6653 postgres MemoryContextAllocZeroAligned
2604 0.6515 postgres yyparse
2553 0.6388 postgres eqsel
2127 0.5322 postgres deconstruct_array
1921 0.4806 postgres hash_any
1919 0.4801 postgres int4eq
1855 0.4641 postgres LWLockRelease
1839 0.4601 postgres StrategyBufferLookup
1777 0.4446 postgres GetSnapshotData
1729 0.4326 postgres heap_getsysattr
1595 0.3991 postgres DLMoveToFront
1586 0.3968 postgres MemoryContextAlloc
1485 0.3716 vmlinux-2.6.8.1-osdl2 try_atomic_semop
1455 0.3641 postgres anonymous symbol from section .plt
1409 0.3525 postgres lappend
1352 0.3383 postgres heap_release_fetch
1270 0.3178 postgres PinBuffer
1141 0.2855 postgres DirectFunctionCall1
1132 0.2832 postgres base_yylex
982 0.2457 postgres pgstat_initstats
957 0.2394 vmlinux-2.6.8.1-osdl2 __make_request
926 0.2317 postgres AllocSetFree
892 0.2232 vmlinux-2.6.8.1-osdl2 try_to_wake_up
874 0.2187 postgres _bt_checkkeys
870 0.2177 postgres fmgr_isbuiltin
853 0.2134 postgres ReadBufferInternal
852 0.2132 postgres pfree
850 0.2127 postgres _bt_moveright
848 0.2122 vmlinux-2.6.8.1-osdl2 do_cciss_request
766 0.1917 postgres ExecTargetList
734 0.1837 postgres SearchSysCache
730 0.1827 postgres PGSemaphoreLock
706 0.1766 postgres expression_tree_walker
684 0.1711 postgres ExecEvalVar
674 0.1686 postgres StrategyGetBuffer
669 0.1674 postgres ResourceOwnerForgetCatCacheRef
660 0.1651 postgres lcons
614 0.1536 vmlinux-2.6.8.1-osdl2 find_get_page
586 0.1466 postgres _bt_restscan
582 0.1456 postgres MemoryContextAllocZero
551 0.1379 postgres LockRelease
551 0.1379 postgres heap_formtuple
540 0.1351 postgres OidFunctionCall3
537 0.1344 postgres check_stack_depth
527 0.1319 postgres ExecutePlan
521 0.1304 postgres CatalogCacheComputeHashValue
510 0.1276 postgres buildRelationAliases
508 0.1271 vmlinux-2.6.8.1-osdl2 find_get_pages_tag
504 0.1261 postgres btgettuple
499 0.1249 postgres IndexNext
454 0.1136 postgres ExecInitExpr
453 0.1133 postgres ExecProcNode
447 0.1118 postgres LockAcquire

I note that an important one has dropped down the list:
1 2.5e-04 postgres AtEOXact_Buffers

and this is nowhere now...
UnlockBuffers

StrategyDirtyBufferList is too high, so we can change that.

As a follow-on: we've got freelists for reuse of space. Do freelists
work for index/heap extension also, or does everybody read the same info
to get the next block... i.e. are we being space-conservative rather than
emphasising concurrency? It would be good to have one freelist per
CPU....

--
Best Regards, Simon Riggs

#4 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#2)
Re: [Testperf-general] Re: ExclusiveLock

Tom,

I think you are right that these reflect heap or btree-index extension
operations. Those do not actually take locks on the *table* however,
but locks on a single page within it (which are completely orthogonal to
table locks and don't conflict). The pg_locks output leaves something
to be desired, because you can't tell the difference between table and
page locks.

Aside from foreign keys, though, is there any way in which INSERT page locks
could block other inserts? I have another system (Lyris) where that
appears to be happening with 32 concurrent INSERT streams. It's possible
that the problem is somewhere else, but I'm disturbed by the possibility.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#4)
Re: [Testperf-general] Re: ExclusiveLock

Josh Berkus <josh@agliodbs.com> writes:

Aside from foreign keys, though, is there any way in which INSERT page locks
could block other inserts?

Not for longer than the time needed to physically add a tuple to a page.

regards, tom lane

#6 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#5)
Re: [Testperf-general] Re: ExclusiveLock

On Thu, 2004-11-18 at 22:12, Tom Lane wrote:

Josh Berkus <josh@agliodbs.com> writes:

Aside from foreign keys, though, is there any way in which INSERT page locks
could block other inserts?

Not for longer than the time needed to physically add a tuple to a page.

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

Only an issue if you have more than one CPU...

--
Best Regards, Simon Riggs

#7 Josh Berkus
josh@agliodbs.com
In reply to: Simon Riggs (#6)
Re: [Testperf-general] Re: ExclusiveLock

Simon, Tom,

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

I actually have several test cases for this; can you give me a trace or
profile suggestion that would show if this is happening?

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#6)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

FSM does what it can to spread the insertion load across multiple pages,
but of course this is not going to help much unless your table has lots
of embedded free space. I think it would work pretty well on a table
with lots of update turnover, but not on an INSERT-only workload.

regards, tom lane

#9 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#7)
Re: [Testperf-general] Re: ExclusiveLock

Josh Berkus <josh@agliodbs.com> writes:

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

I actually have several test cases for this; can you give me a trace or
profile suggestion that would show if this is happening?

If it is a problem, the LockBuffer calls in RelationGetBufferForTuple
would be the places showing contention delays.

It could also be that the contention is for the WALInsertLock, ie, the
right to stuff a WAL record into the shared buffers. This effect would
be the same even if you were inserting into N separate tables.

regards, tom lane

#10 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#8)
Re: [Testperf-general] Re: ExclusiveLock

On Thu, 2004-11-18 at 22:51, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

FSM does what it can to spread the insertion load across multiple pages,
but of course this is not going to help much unless your table has lots
of embedded free space. I think it would work pretty well on a table
with lots of update turnover, but not on an INSERT-only workload.

OK, that's what I thought.

So for a table with an INSERT-only workload, the FSM is always empty,
so there is only ever one block that gets locked. That means we can't
ever go faster than 1 CPU can go - any other CPUs will just wait for the
block lock. [In Josh's case, 32 INSERT streams won't go significantly
faster than about 4 streams, allowing for some overlap of other
operations.]

Would it be possible to: when a new block is allocated from the relation
file (rather than reused), we check the FSM - if it is empty, then we
allocate 8 new blocks and add them all to the FSM. The next few
INSERTers will then use the FSM blocks normally.
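
A minimal standalone sketch of the policy being proposed (not PostgreSQL
code; the FSM here is just a stack of free block numbers, and 8 is the batch
size suggested above):

/*
 * Sketch (not PostgreSQL source): model the "extend by 8 blocks when the
 * FSM is empty" idea.  The FSM is simplified to a stack of free block
 * numbers; a real implementation would also have to deal with locking and
 * with the fact that the new blocks start out completely empty.
 */
#include <stdio.h>

#define EXTEND_BATCH 8          /* blocks to add per file extension */
#define FSM_CAPACITY 64

static unsigned int rel_nblocks = 0;    /* current relation size in blocks */
static unsigned int fsm[FSM_CAPACITY];  /* free block numbers */
static int fsm_top = 0;

/* Return a block number with free space, extending the file if needed. */
static unsigned int
get_block_for_insert(void)
{
    if (fsm_top == 0)
    {
        /* FSM empty: extend by a batch and register every new block. */
        for (int i = 0; i < EXTEND_BATCH; i++)
            fsm[fsm_top++] = rel_nblocks++;
    }
    return fsm[--fsm_top];      /* hand out one of the registered blocks */
}

int
main(void)
{
    /* 20 inserters each grab a target block; only 3 file extensions occur. */
    for (int i = 0; i < 20; i++)
        printf("insert %2d -> block %u\n", i, get_block_for_insert());
    printf("relation extended to %u blocks\n", rel_nblocks);
    return 0;
}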

Doing that will definitely speed up DBT-2 and many other workloads. Many
tables have SERIAL defined, or use a monotonically increasing unique
key.

--
Best Regards, Simon Riggs

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#10)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

Would it be possible to: when a new block is allocated from the relation
file (rather than reused), we check the FSM - if it is empty, then we
allocate 8 new blocks and add them all to the FSM. The next few
INSERTers will then use the FSM blocks normally.

Most likely that would just shift the contention to the WALInsertLock.

regards, tom lane

#12 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#11)
Re: [Testperf-general] Re: ExclusiveLock

On Thu, 2004-11-18 at 23:19, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

Would it be possible to: when a new block is allocated from the relation
file (rather than reused), we check the FSM - if it is empty, then we
allocate 8 new blocks and add them all to the FSM. The next few
INSERTers will then use the FSM blocks normally.

Most likely that would just shift the contention to the WALInsertLock.

Well, removing any performance bottleneck shifts the bottleneck to
another place, though that is not an argument against removing it.

Can we subdivide the WALInsertLock so there are multiple entry points to
wal_buffers, based upon hashing the xid? That would allow WAL to be
written sequentially by each transaction, though slightly out of order
for different transactions. Commit/Abort would all go through the same
lock to guarantee serializability.
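
A toy model of the scheme being floated (an illustration only, not a patch):
N insert locks chosen by hashing the xid, with commit and abort records still
funnelled through a single dedicated lock.

/*
 * Toy sketch: partition WAL insertion across several locks chosen by
 * hashing the xid, while commit/abort records all take one dedicated lock
 * so that commit ordering stays serial.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAL_INSERT_LOCKS 4   /* number of partitions; value is arbitrary */

static pthread_mutex_t wal_insert_locks[NUM_WAL_INSERT_LOCKS] =
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
      PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER };
static pthread_mutex_t wal_commit_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pick the insert lock for a transaction by hashing its xid. */
static pthread_mutex_t *
insert_lock_for_xid(uint32_t xid)
{
    return &wal_insert_locks[xid % NUM_WAL_INSERT_LOCKS];
}

static void
wal_insert(uint32_t xid, const char *rec)
{
    pthread_mutex_t *lock = insert_lock_for_xid(xid);

    pthread_mutex_lock(lock);
    printf("xid %u writes '%s' under insert lock %u\n",
           (unsigned) xid, rec, (unsigned) (xid % NUM_WAL_INSERT_LOCKS));
    pthread_mutex_unlock(lock);
}

static void
wal_commit(uint32_t xid)
{
    pthread_mutex_lock(&wal_commit_lock);   /* all commits serialize here */
    printf("xid %u commit record\n", (unsigned) xid);
    pthread_mutex_unlock(&wal_commit_lock);
}

int
main(void)
{
    for (uint32_t xid = 100; xid < 106; xid++)
    {
        wal_insert(xid, "heap insert");
        wal_commit(xid);
    }
    return 0;
}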

--
Best Regards, Simon Riggs

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#12)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

Can we subdivide the WALInsertLock so there are multiple entry points to
wal_buffers, based upon hashing the xid?

I don't think so; WAL is inherently a linear log. (Awhile ago there was
some talk of nonlinear log writing to get around the one-commit-per-
disk-revolution syndrome, but the idea basically got rejected as
unworkably complicated.) What's more, there are a lot of entries that
must remain time-ordered independently of transaction ownership.
Consider btree index page splits and sequence nextvals for two examples.

Certainly I'd not buy into any such project without incontrovertible
proof that it would solve a major bottleneck --- and right now we are
only speculating with no evidence.

regards, tom lane

#14 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#9)
Re: [Testperf-general] Re: ExclusiveLock

On Thu, 2004-11-18 at 22:55, Tom Lane wrote:

Josh Berkus <josh@agliodbs.com> writes:

The main problem on INSERTs is that it is usually the same few pages:
the lead data block and the lead index block. There are ways of
spreading the load out across an index, but I'm not sure what happens on
the leading edge of the data relation; I think it hits the same
block each time.

I actually have several test cases for this; can you give me a trace or
profile suggestion that would show if this is happening?

If it is a problem, the LockBuffer calls in RelationGetBufferForTuple
would be the places showing contention delays.

You say this as if we can easily check that. My understanding is that
this would require a scripted gdb session to instrument the executable
at that point.

Is that what you mean? That isn't typically regarded as a great thing to
do on a production system.

You've made the point about performance speculation, which I agree with, but
what are the alternatives? Compile-time changes usually can't be enabled,
since many people work from RPMs.

It could also be that the contention is for the WALInsertLock, ie, the
right to stuff a WAL record into the shared buffers. This effect would
be the same even if you were inserting into N separate tables.

...and how do we check that also.

Are we back to simulated workloads and fully rigged executables?

--
Best Regards, Simon Riggs

#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#14)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

On Thu, 2004-11-18 at 22:55, Tom Lane wrote:

If it is a problem, the LockBuffer calls in RelationGetBufferForTuple
would be the places showing contention delays.

You say this as if we can easily check that.

I think this can be done with oprofile ...

regards, tom lane

#16 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#15)
Re: [Testperf-general] Re: ExclusiveLock

On Sat, 2004-11-20 at 16:14, Tom Lane wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

On Thu, 2004-11-18 at 22:55, Tom Lane wrote:

If it is a problem, the LockBuffer calls in RelationGetBufferForTuple
would be the places showing contention delays.

You say this as if we can easily check that.

I think this can be done with oprofile ...

OK, well that's where this thread started.

oprofile only tells us aggregate information. It doesn't tell us how
much time is spent waiting because of contention issues; it just tells
us how much time is spent, and even that is skewed.

There really ought to be a better way to instrument things from inside,
based upon knowledge of the code.
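
The kind of in-code instrumentation being asked for might look like the
following generic sketch (not PostgreSQL source): wrap a lock acquisition
with a timer so that time spent waiting is accumulated separately from time
spent working, which a sampling profiler like oprofile cannot distinguish.

/*
 * Sketch of in-code wait instrumentation: accumulate, per lock, the time
 * spent blocked in the acquire call and the number of acquisitions.
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

static pthread_mutex_t some_lock = PTHREAD_MUTEX_INITIALIZER;
static double wait_usecs;       /* total time spent blocked on some_lock */
static long   wait_count;       /* number of acquisitions */

static double
now_usecs(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

static void
instrumented_lock(pthread_mutex_t *lock)
{
    double start = now_usecs();

    pthread_mutex_lock(lock);
    wait_usecs += now_usecs() - start;
    wait_count++;
}

int
main(void)
{
    for (int i = 0; i < 1000; i++)
    {
        instrumented_lock(&some_lock);
        pthread_mutex_unlock(&some_lock);
    }
    printf("acquisitions: %ld, total wait: %.1f us\n", wait_count, wait_usecs);
    return 0;
}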

--
Best Regards, Simon Riggs

#17 Simon Riggs
simon@2ndquadrant.com
In reply to: Tom Lane (#13)
Re: [Testperf-general] Re: ExclusiveLock

On Thu, 2004-11-18 at 23:54, Tom Lane wrote:

I don't think so; WAL is inherently a linear log. (Awhile ago there was
some talk of nonlinear log writing to get around the one-commit-per-
disk-revolution syndrome, but the idea basically got rejected as
unworkably complicated.)

...this appears to still be on the TODO list... should it be removed?

- Find a way to reduce rotational delay when repeatedly writing last WAL
page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

--
Best Regards, Simon Riggs

#18 Greg Stark
gsstark@mit.edu
In reply to: Simon Riggs (#17)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs <simon@2ndquadrant.com> writes:

- Find a way to reduce rotational delay when repeatedly writing last WAL
page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

Once upon a time when you formatted hard drives you actually gave them an
interleave factor for a similar reason. These days you invariably use an
interleave of 1, ie, store the blocks continuously. Whether that's because
controllers have become fast enough to keep up with the burst rate or because
the firmware is smart enough to handle the block interleaving invisibly isn't
clear to me.

I wonder if formatting the drive to have an interleave >1 would actually
improve performance of the WAL log.

It would depend a lot on the usage pattern though. A heavily used system might
be able to generate enough WAL traffic to keep up with the burst rate of the
drive. And a less used system might benefit, or might lose.

Probably now the less than saturated system gets close to the average
half-rotation-time latency. This idea would only really help if you have a
system that happens to be triggering pessimal results worse than that due to
unfortunate timing.

--
greg

#19 Simon Riggs
simon@2ndquadrant.com
In reply to: Greg Stark (#18)
Re: [Testperf-general] Re: ExclusiveLock

On Mon, 2004-11-22 at 23:37, Greg Stark wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

- Find a way to reduce rotational delay when repeatedly writing last WAL
page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

Once upon a time when you formatted hard drives you actually gave them an
interleave factor for a similar reason. These days you invariably use an
interleave of 1, ie, store the blocks continuously. Whether that's because
controllers have become fast enough to keep up with the burst rate or because
the firmware is smart enough to handle the block interleaving invisibly isn't
clear to me.

I wonder if formatting the drive to have an interleave >1 would actually
improve performance of the WAL log.

It would depend a lot on the usage pattern though. A heavily used system might
be able to generate enough WAL traffic to keep up with the burst rate of the
drive. And a less used system might benefit, or might lose.

Probably now the less than saturated system gets close to the average
half-rotation-time latency. This idea would only really help if you have a
system that happens to be triggering pessimal results worse than that due to
unfortunate timing.

I was asking whether that topic should be removed, since Tom had said it
had been rejected....

If you could tell me how to instrument the system to (better) show
whether such plans as you suggest are workable, I would be greatly
interested. Anything we do needs to be able to be monitored for
success/failure.

--
Best Regards, Simon Riggs

#20 Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Simon Riggs (#19)
Re: [Testperf-general] Re: ExclusiveLock

Simon Riggs wrote:

On Mon, 2004-11-22 at 23:37, Greg Stark wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

- Find a way to reduce rotational delay when repeatedly writing last WAL
page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

Once upon a time when you formatted hard drives you actually gave them an
interleave factor for a similar reason. These days you invariably use an
interleave of 1, ie, store the blocks continuously. Whether that's because
controllers have become fast enough to keep up with the burst rate or because
the firmware is smart enough to handle the block interleaving invisibly isn't
clear to me.

I wonder if formatting the drive to have an interleave >1 would actually
improve performance of the WAL log.

It would depend a lot on the usage pattern though. A heavily used system might
be able to generate enough WAL traffic to keep up with the burst rate of the
drive. And a less used system might benefit, or might lose.

Probably now the less than saturated system gets close to the average
half-rotation-time latency. This idea would only really help if you have a
system that happens to be triggering pessimal results worse than that due to
unfortunate timing.

I was asking whether that topic should be removed, since Tom had said it
had been rejected....

The method used to fix it was rejected, but the goal of making it better
is still a valid one.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Stark (#18)
Re: [Testperf-general] Re: ExclusiveLock

Greg Stark <gsstark@mit.edu> writes:

Once upon a time when you formatted hard drives you actually gave them an
interleave factor for a similar reason. These days you invariably use an
interleave of 1, ie, store the blocks continuously. Whether that's because
controllers have become fast enough to keep up with the burst rate or because
the firmware is smart enough to handle the block interleaving invisibly isn't
clear to me.

The impression I had was that disk drives no longer pay the slightest
attention to interleave specs, because the logical model implied by the
concept is too far removed from modern reality (on-disk buffering,
variable numbers of sectors per track, transparently remapped bad
sectors, yadda yadda).

And that's just at the hardware level ... who knows where the filesystem
is putting your data, or what the kernel I/O scheduler is doing with
your requests :-(

Basically I see the TODO item as a blue-sky research topic, not
something we have any idea how to implement. That doesn't mean it can't
be on the TODO list ...

regards, tom lane

#22 Kenneth Marshall
ktm@it.is.rice.edu
In reply to: Simon Riggs (#19)
Re: [Testperf-general] Re: ExclusiveLock

On Tue, Nov 23, 2004 at 12:04:17AM +0000, Simon Riggs wrote:

On Mon, 2004-11-22 at 23:37, Greg Stark wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

- Find a way to reduce rotational delay when repeatedly writing last WAL
page

Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again. One idea is to write the WAL to different
offsets that might reduce the rotational delay.

Once upon a time when you formatted hard drives you actually gave them an
interleave factor for a similar reason. These days you invariably use an
interleave of 1, ie, store the blocks continuously. Whether that's because
controllers have become fast enough to keep up with the burst rate or because
the firmware is smart enough to handle the block interleaving invisibly isn't
clear to me.

I wonder if formatting the drive to have an interleave >1 would actually
improve performance of the WAL log.

It would depend a lot on the usage pattern though. A heavily used system might
be able to generate enough WAL traffic to keep up with the burst rate of the
drive. And a less used system might benefit, or might lose.

Probably now the less than saturated system gets close to the average
half-rotation-time latency. This idea would only really help if you have a
system that happens to be triggering pessimal results worse than that due to
unfortunate timing.

I was asking whether that topic should be removed, since Tom had said it
had been rejected....

If you could tell me how to instrument the system to (better) show
whether such plans as you suggest are workable, I would be greatly
interested. Anything we do needs to be able to be monitored for
success/failure.

--
Best Regards, Simon Riggs

The disk performance has increased so much that the reasons for having
an interleave factor other than 1 (no interleaving) have all but disappeared.
CPU speed has also increased so much relative to disk speed that using some
CPU cycles to improve I/O is a reasonable approach. I have been considering
how this might be accomplished. As Simon so aptly pointed out, we need to
show that it materially affects the performance or it is not worth doing.
The simplest idea I had was to pre-layout the WAL logs in a contiguous fashion
on the disk. Solaris has this ability given appropriate FS parameters and we
should be able to get close on most other OSes. Once that has happened, use
something like the FSM map to show the allocated blocks. The CPU can keep track
of its current disk rotational position (approx. is okay); then, when we need
to write a WAL block, start writing at the next area that the disk head will be
sweeping. Give it a little leeway for latency in the system and we should be
able to get very low latency for the writes. Obviously, there would be wasted
space, but you could intersperse writes to the granularity of space overhead
that you would like to see. As far as implementation, I was reading an
interesting article that used a simple theoretical model to estimate disk head
position to avoid latency.
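
A rough model of that head-position idea, with all numbers assumed (a
10,000 RPM drive, 375 8K blocks per track, a few blocks of leeway): estimate
the block currently under the head from the wall clock and aim the next
write a little ahead of it.

/*
 * Rough model only: estimate the current rotational position from elapsed
 * time and pick the next preallocated WAL block that the head will sweep
 * shortly, instead of rewriting the block we just fsync'd.
 */
#include <math.h>
#include <stdio.h>
#include <sys/time.h>

#define RPM               10000.0
#define BLOCKS_PER_TRACK  375          /* 8K blocks per track, assumed */
#define SAFETY_BLOCKS     5            /* leeway for command latency */

static double
rotation_seconds(void)
{
    return 60.0 / RPM;
}

/* Estimated block currently under the head, from elapsed time. */
static int
current_block(double elapsed_secs)
{
    double frac = fmod(elapsed_secs, rotation_seconds()) / rotation_seconds();

    return (int) (frac * BLOCKS_PER_TRACK);
}

int
main(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    double now = tv.tv_sec + tv.tv_usec / 1e6;

    int head = current_block(now);
    int target = (head + SAFETY_BLOCKS) % BLOCKS_PER_TRACK;

    printf("head is near block %d; write the next WAL record at block %d\n",
           head, target);
    return 0;
}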

Yours truly,
Ken Marshall

#23 Bort, Paul
pbort@tmwsystems.com
In reply to: Kenneth Marshall (#22)
Re: [Testperf-general] Re: ExclusiveLock

The impression I had was that disk drives no longer pay the slightest
attention to interleave specs, because the logical model implied by the
concept is too far removed from modern reality (on-disk buffering,
variable numbers of sectors per track, transparently remapped bad
sectors, yadda yadda).

Entirely true. Interleave was an issue back when the controller wasn't fast
enough to keep up with 3600 RPM disks, and is now completely obscured from
the bus. I don't know if the ATA spec includes interleave control; I suspect
it does not.

And that's just at the hardware level ... who knows where the filesystem
is putting your data, or what the kernel I/O scheduler is doing with
your requests :-(

Basically I see the TODO item as a blue-sky research topic, not
something we have any idea how to implement. That doesn't mean it can't
be on the TODO list ...

I think that if we also take into consideration various hardware and
software RAID configurations, this is just too far removed from the database
level to be at all practical to throw code at.

Perhaps this should be rewritten as a documentation change: recommendations
about performance hardware? What we recommend for our highest volume
customers (alas, on a proprietary RDBMS, and only x86) is something like
this:

- Because drive capacity is so huge now, choose faster drives over larger
drives. 15K RPM isn't three times faster than 5400, but there is a noticeable
difference.

- More spindles reduce delays even further. Mirroring allows reads to happen
faster because they can come from either side of the mirror, and spanning
reduces problems with rotational delays.

- The ideal disk configuration that we recommend is a 14 drive chassis with
a split backplane. Run each backplane to a separate channel on the
controller, and mirror the channels. Use the first drive on each channel for
the OS and swap, the second drive for transaction logs, and the remaining
drives spanned (and already mirrored) for data. With a reasonable write
cache on the controller, this has proven to be a pretty fast configuration
despite a less than ideal engine.

One other thought: How does static RAM compare to disk speed nowadays? A 1Gb
flash drive might be reasonable for the WAL if it can keep up.

#24 Doug McNaught
doug@mcnaught.org
In reply to: Bort, Paul (#23)
Re: [Testperf-general] Re: ExclusiveLock

"Bort, Paul" <pbort@tmwsystems.com> writes:

One other thought: How does static RAM compare to disk speed nowadays?
A 1Gb flash drive might be reasonable for the WAL if it can keep up.

Flash RAM "wears out"; it's not suitable for a continuously-updated
application like WAL.

-Doug

#25 Bort, Paul
pbort@tmwsystems.com
In reply to: Doug McNaught (#24)
Re: [Testperf-general] Re: ExclusiveLock

From: Doug McNaught [mailto:doug@mcnaught.org]

"Bort, Paul" <pbort@tmwsystems.com> writes:

One other thought: How does static RAM compare to disk speed nowadays?
A 1Gb flash drive might be reasonable for the WAL if it can keep up.

Flash RAM "wears out"; it's not suitable for a continuously-updated
application like WAL.

-Doug

But if it's even 2x faster than a disk, that might be worth wearing them
out. Given that they have published write count limits, one could reasonably
plan to replace the memory after half of that time and be comfortable with
the lifecycle. I saw somewhere that even with continuous writes on USB 2.0,
it would take about twelve years to exhaust the write life of a typical
flash drive. Even an order-of-magnitude increase in throughput beyond that
only calls for a new drive every year. (Or every six months if you're
paranoid. If you're that paranoid, you can mirror them, too.)

Whether USB 2.0 is fast enough for the WAL is a separate discussion.

#26 Bort, Paul
pbort@tmwsystems.com
In reply to: Bort, Paul (#25)
Re: [Testperf-general] Re: ExclusiveLock

From: Kenneth Marshall [mailto:ktm@is.rice.edu]

[snip]

The simplest idea I had was to pre-layout the WAL logs in a contiguous
fashion on the disk. Solaris has this ability given appropriate FS
parameters and we should be able to get close on most other OSes. Once
that has happened, use something like the FSM map to show the allocated
blocks. The CPU can keep track of its current disk rotational position
(approx. is okay); then, when we need to write a WAL block, start writing
at the next area that the disk head will be sweeping. Give it a little
leeway for latency in the system and we should be able to get very low
latency for the writes. Obviously, there would be wasted space, but you
could intersperse writes to the granularity of space overhead that you
would like to see. As far as implementation, I was reading an interesting
article that used a simple theoretical model to estimate disk head
position to avoid latency.

Ken,

That's a neat idea, but I'm not sure how much good it will do. As bad as
rotational latency is, seek time is worse. Pre-allocation isn't going to do
much for rotational latency if the heads also have to seek back to the WAL.

OTOH, pre-allocation could help two other performance aspects of the WAL:
First, if the WAL was pre-allocated, steps could be taken (by the operator,
based on their OS) to make the space allocated to the WAL contiguous.
Statistics on how much WAL is needed in 24 hours would help with that
sizing. This would reduce seeks involved in writing the WAL data.

The other thing it would do is reduce seeks and metadata writes involved in
extending WAL files.

All of this is moot if the WAL doesn't have its own spindle(s).

This almost leads back to the old-fashioned idea of using a raw partition,
to avoid the overhead of the OS and file structure.

Or I could be thoroughly demonstrating my complete lack of understanding of
PostgreSQL internals. :-)

Maybe I'll get a chance to try the flash drive WAL idea in the next couple
of weeks. Need to see if the hardware guys have a spare flash drive I can
abuse.

Paul

#27 Kenneth Marshall
ktm@it.is.rice.edu
In reply to: Bort, Paul (#26)
Re: [Testperf-general] Re: ExclusiveLock

On Wed, Nov 24, 2004 at 11:00:30AM -0500, Bort, Paul wrote:

From: Kenneth Marshall [mailto:ktm@is.rice.edu]

[snip]

The simplest idea I had was to pre-layout the WAL logs in a contiguous
fashion on the disk. Solaris has this ability given appropriate FS
parameters and we should be able to get close on most other OSes. Once
that has happened, use something like the FSM map to show the allocated
blocks. The CPU can keep track of its current disk rotational position
(approx. is okay); then, when we need to write a WAL block, start writing
at the next area that the disk head will be sweeping. Give it a little
leeway for latency in the system and we should be able to get very low
latency for the writes. Obviously, there would be wasted space, but you
could intersperse writes to the granularity of space overhead that you
would like to see. As far as implementation, I was reading an interesting
article that used a simple theoretical model to estimate disk head
position to avoid latency.

Ken,

That's a neat idea, but I'm not sure how much good it will do. As bad as
rotational latency is, seek time is worse. Pre-allocation isn't going to do
much for rotational latency if the heads also have to seek back to the WAL.

OTOH, pre-allocation could help two other performance aspects of the WAL:
First, if the WAL was pre-allocated, steps could be taken (by the operator,
based on their OS) to make the space allocated to the WAL contiguous.
Statistics on how much WAL is needed in 24 hours would help with that
sizing. This would reduce seeks involved in writing the WAL data.

The other thing it would do is reduce seeks and metadata writes involved in
extending WAL files.

All of this is moot if the WAL doesn't have its own spindle(s).

This almost leads back to the old-fashioned idea of using a raw partition,
to avoid the overhead of the OS and file structure.

Or I could be thoroughly demonstrating my complete lack of understanding of
PostgreSQL internals. :-)

Maybe I'll get a chance to try the flash drive WAL idea in the next couple
of weeks. Need to see if the hardware guys have a spare flash drive I can
abuse.

Paul

Obviously, this whole process would be much more effective on systems with
separate WAL drives. But even on less busy systems, the lock-step of
write-a-WAL/wait-for-heads/write-a-WAL can dramatically decrease your
effective throughput to the drive. For example, the worst case would be
write one WAL block to disk. Then schedule another WAL block to be written
to disk. This block will need to wait for 1 full disk rotation to perform
the write. On a 10k drive, in this scenario you will be able to log 166
TPS, assuming no piggy-backed syncs. Now look at the case where we can use
the preallocated WAL and write immediately. Assuming a 100% sequential disk
layout, if we can start writing within 25% of the full rotation we can now
support 664 TPS on the same hardware. Now look at a typical hard drive on
my desktop system with 150M sectors/4 heads/50000 tracks -> 3000 blocks/track
or 375 8K blocks. If we can write the next block within 10 8K blocks we can
perform 6225 TPS, within 5 8K blocks = 12450 TPS, within 2 8K blocks =
31125 TPS. This is just on a simple disk drive. As you can see, even small
improvements can make a tremendous difference in throughput. My analysis
is very simplistic and whether we can model the I/O quickly enough to be
useful is still to be determined. Maybe someone on the mailing list with
more experience in how disk drives actually function can provide more
definitive information.
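
The arithmetic above can be checked with a short program using the same
assumptions as the text (a 10,000 RPM drive, 375 8K blocks per track, and the
rotation rate rounded down to 166 rotations per second before scaling):

/*
 * Worked check of the TPS figures quoted above, under the text's own
 * assumptions.  No drive behaviour is measured here; this just reproduces
 * the arithmetic.
 */
#include <stdio.h>

int
main(void)
{
    int rotations_per_sec = 10000 / 60;       /* 166, as used in the text */
    int blocks_per_track = 375;

    /* Worst case: every commit waits one full rotation. */
    printf("full rotation:       %d TPS\n", rotations_per_sec);

    /* Start writing within a quarter rotation of the previous block. */
    printf("quarter rotation:    %d TPS\n", rotations_per_sec * 4);

    /* Start writing within k 8K blocks of the previous block. */
    int gaps[] = { 10, 5, 2 };
    for (int i = 0; i < 3; i++)
        printf("within %2d 8K blocks: %d TPS\n",
               gaps[i], rotations_per_sec * blocks_per_track / gaps[i]);
    return 0;
}

Running it prints 166, 664, 6225, 12450 and 31125 TPS, matching the figures
in the paragraph above.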

Ken