Interesting paper: Contention-Aware Lock Scheduling

Started by Thomas Munro · almost 8 years ago · 8 messages
#1 Thomas Munro
thomas.munro@enterprisedb.com

Hi hackers,

I saw this today: http://www.vldb.org/pvldb/vol11/p648-tian.pdf

It describes the "LDSF" (largest-dependency-set-first) lock scheduling
algorithm and related work, as an alternative to the FIFO scheduling
used by PostgreSQL and most other RDBMSs. LDSF has been implemented in
MySQL 8. The TPC-C results shown are impressive.
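
To make the difference with FIFO concrete, here is a toy sketch in Go of
just the grant decision (this is not the paper's algorithm or MySQL's
implementation, and all names are invented): when a lock is released, FIFO
grants it to the waiter that has been queued longest, while LDSF grants it
to the waiter with the most transactions blocked behind it, directly or
transitively.

package main

import "fmt"

// One waiting transaction. "dependents" is the set of transactions blocked
// behind it, directly or transitively (its dependency set in LDSF terms).
// All names here are invented for illustration.
type waiter struct {
	txID       int
	arrival    int   // position in the wait queue
	dependents []int // transactions waiting on this transaction
}

// fifoPick: grant the lock to the waiter that has been queued the longest.
func fifoPick(ws []waiter) waiter {
	best := ws[0]
	for _, w := range ws[1:] {
		if w.arrival < best.arrival {
			best = w
		}
	}
	return best
}

// ldsfPick: grant the lock to the waiter with the largest dependency set,
// on the theory that unblocking it unblocks the most other work.
func ldsfPick(ws []waiter) waiter {
	best := ws[0]
	for _, w := range ws[1:] {
		if len(w.dependents) > len(best.dependents) {
			best = w
		}
	}
	return best
}

func main() {
	queue := []waiter{
		{txID: 1, arrival: 0},                             // arrived first, nothing waits on it
		{txID: 2, arrival: 1, dependents: []int{5, 6, 7}}, // three transactions blocked behind it
		{txID: 3, arrival: 2, dependents: []int{8}},
	}
	fmt.Println("FIFO grants to tx", fifoPick(queue).txID) // tx 1
	fmt.Println("LDSF grants to tx", ldsfPick(queue).txID) // tx 2
}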

--
Thomas Munro
http://www.enterprisedb.com

#2 Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#1)
Re: Interesting paper: Contention-Aware Lock Scheduling


Yet another interesting article, with a completely different approach:
http://cs-www.cs.yale.edu/homes/dna/papers/orthrus-sigmod16.pdf

Executors are not allowed to obtain locks themselves; all concurrency
control is moved to dedicated workers, with which the executors
communicate via message passing.
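
For illustration only (this is not Orthrus, and every name below is
invented), a minimal Go sketch of that shape: executors never touch the
lock state; they send lock and release messages over channels to a single
concurrency-control worker that owns the lock table and notifies a waiter
when its lock is granted.

package main

import (
	"fmt"
	"sync"
)

// A lock request sent from an executor to the concurrency-control worker.
// Every name here is invented for illustration; this is not Orthrus' protocol.
type lockRequest struct {
	txID    int
	object  string
	granted chan struct{} // closed by the worker once the lock is granted
}

// ccWorker owns the lock table exclusively; executors never touch it.
// Requests for an object that is already held are queued and granted
// when the current holder sends a release.
func ccWorker(requests <-chan lockRequest, releases <-chan string) {
	held := map[string]bool{}
	waiting := map[string][]lockRequest{}
	for {
		select {
		case req := <-requests:
			if held[req.object] {
				waiting[req.object] = append(waiting[req.object], req)
			} else {
				held[req.object] = true
				close(req.granted)
			}
		case obj := <-releases:
			if q := waiting[obj]; len(q) > 0 {
				next := q[0]
				waiting[obj] = q[1:]
				close(next.granted) // pass the lock straight to the next waiter
			} else {
				delete(held, obj)
			}
		}
	}
}

func main() {
	requests := make(chan lockRequest)
	releases := make(chan string)
	go ccWorker(requests, releases)

	var wg sync.WaitGroup
	for tx := 1; tx <= 3; tx++ {
		wg.Add(1)
		go func(tx int) { // an "executor": request the lock, work, release it
			defer wg.Done()
			req := lockRequest{txID: tx, object: "row 42", granted: make(chan struct{})}
			requests <- req
			<-req.granted
			fmt.Printf("tx %d holds the lock on row 42\n", tx)
			releases <- "row 42"
		}(tx)
	}
	wg.Wait()
}

In this shape the scheduling policy (FIFO, LDSF, or anything else) becomes
a local decision inside the worker, since it sees the whole wait queue.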

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#3 Garym
garym@oedata.com
In reply to: Konstantin Knizhnik (#2)
Re: Interesting paper: Contention-Aware Lock Scheduling

LDSF does show improvements for certain workloads; however, it sacrifices temporal ordering and may interfere with historical analytics. If applications can tolerate an ambiguous order of processing, it shows good gains.


#4 Evgeniy Shishkin
itparanoia@gmail.com
In reply to: Garym (#3)
Re: Interesting paper: Contention-Aware Lock Scheduling


AFAIK, we don't guarantee order of processing anyway. Just some order.

#5 Garym
garym@oedata.com
In reply to: Evgeniy Shishkin (#4)
Re: Interesting paper: Contention-Aware Lock Scheduling

I did testing on 9.6 and 10. Apart from standbys at a distance, it does demonstrate consistent OOA operation, whether intentionally enforced or not. :)


#6 AJG
ayden@gera.co.nz
In reply to: Konstantin Knizhnik (#2)
Re: Interesting paper: Contention-Aware Lock Scheduling

Another interesting article, from January 2018 (Tsinghua University and
Microsoft Research):

http://madsys.cs.tsinghua.edu.cn/publications/TS2018-liu.pdf

DudeTx: Durable Transactions Made Decoupled

"This paper presents DudeTx, a crash-consistent durable transaction system
that avoids the drawbacks of both undo and redo logging. DudeTx uses shadow
DRAM to decouple the execution of a durable transaction into three fully
asynchronous steps. The advantage is that only minimal fences and no memory
read instrumentation are required. This design enables an out-of-the-box
concurrency control mechanism, transactional memory or fine-grained locks,
to be used as an independent component. The evaluation results show that
DudeTx adds durability to a software transactional memory system with only
7.4% ∼ 24.6% throughput degradation. Compared to typical existing durable
transaction systems, DudeTx provides 1.7× ∼ 4.4× higher throughput.
Moreover, DudeTx can be implemented with hardware transactional memory or
lock-based concurrency control, leading to a further 1.7× and 3.3× speedup,
respectively."
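
A rough Go sketch of that decoupling (not DudeTx itself; the names are
invented, a plain map stands in for persistent memory, and fences/flushes
are elided): Perform runs the transaction against a shadow copy and builds
a redo log, Persist atomically marks the log committed and appends it to
the durable log area, and Reproduce is the only step that ever modifies
the original data, by replaying committed logs.

package main

import "fmt"

// One write captured during the Perform step. All names are invented;
// a plain map stands in for persistent memory.
type redoEntry struct {
	key   string
	value int
}

type redoLog struct {
	entries   []redoEntry
	committed bool // set at the end of the Persist step
}

// perform runs the transaction against a shadow copy of the data and
// records every changed key in a redo log; the original data is untouched.
func perform(data map[string]int, tx func(map[string]int)) *redoLog {
	shadow := make(map[string]int, len(data))
	for k, v := range data {
		shadow[k] = v
	}
	tx(shadow) // the transaction only ever sees the shadow copy
	rl := &redoLog{}
	for k, v := range shadow {
		if old, ok := data[k]; !ok || old != v {
			rl.entries = append(rl.entries, redoEntry{key: k, value: v})
		}
	}
	return rl
}

// persist stands in for flushing the redo log to persistent memory and
// marking it committed atomically (real fences and cache flushes elided).
func persist(durable *[]*redoLog, rl *redoLog) {
	rl.committed = true
	*durable = append(*durable, rl)
}

// reproduce replays committed redo logs against the original data; this
// is the only path by which the "persistent" copy is ever modified.
func reproduce(data map[string]int, durable []*redoLog) {
	for _, rl := range durable {
		if !rl.committed {
			continue
		}
		for _, e := range rl.entries {
			data[e.key] = e.value
		}
	}
}

func main() {
	data := map[string]int{"x": 1, "y": 2}
	var durable []*redoLog

	rl := perform(data, func(s map[string]int) { s["x"] += 10 })
	persist(&durable, rl)
	reproduce(data, durable)

	fmt.Println(data) // map[x:11 y:2]
}

The essential property from the paper is what reproduce enforces here:
dirty shadow data is never written back directly; every update reaches the
persistent copy only via the log.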


#7 Юрий Соколов
funny.falcon@gmail.com
In reply to: AJG (#6)
Re: Interesting paper: Contention-Aware Lock Scheduling

2018-05-04 23:45 GMT+03:00 AJG <ayden@gera.co.nz>:

Another interesting article from Jan 2018 (Tsinghua University and
Microsoft Research)
http://madsys.cs.tsinghua.edu.cn/publications/TS2018-liu.pdf
DudeTx: Durable Transactions Made Decoupled

Cite from the pdf:

The key insight of our solution is decoupling a durable transaction into
three fully asynchronous steps. (1) Perform: execute the transaction on a
shadow memory, and produce a redo log for the transaction. (2) Persist:
flush redo logs of transactions to persistent memory in an atomic manner.
(3) Reproduce: modify original data in persistent memory according to the
persisted redo logs. It is essential that we never directly write back
dirty data from shadow memory to persistent memory – all the updates are
realized via the logs.

This is exactly what Amazon did for Aurora! Using this decoupling, Aurora
provides great replication and failover (by using replicated, versioned
storage for both logs and data pages).

regards,
Yura.

#8 Ibrar Ahmed
ibrar.ahmad@gmail.com
In reply to: Юрий Соколов (#7)
Re: Interesting paper: Contention-Aware Lock Scheduling


Hi,

Do we still want to see a patch implementing that paper
(http://www.vldb.org/pvldb/vol11/p648-tian.pdf)?

--
Ibrar Ahmed