Lock-free vs. wait-free concurrency

There are two types of non-blocking thread synchronization algorithms: lock-free and wait-free. The two are often confused. In lock-free systems, while any particular computation may be blocked for some period of time, all CPUs are able to continue performing other computations. To put it differently, while a given thread might be blocked by other threads in a lock-free system, all CPUs can continue doing other useful work without stalls. Lock-free algorithms increase the overall throughput of a system by occasionally increasing the latency of a particular transaction. Most high-end database systems are based on lock-free algorithms, to varying degrees.
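
To make the distinction concrete, here is a minimal C++ sketch of a lock-free increment built on a compare-and-swap (CAS) retry loop. This is an illustration, not RethinkDB code: any one thread may retry indefinitely under contention, but every failed CAS means another thread's update succeeded, so the system as a whole always makes progress.

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<long> counter{0};

    // Lock-free increment: retry a compare-and-swap until it succeeds.
    // A failed CAS reloads 'observed' with the current value, and the
    // failure itself proves some other thread made progress.
    void increment_lock_free() {
        long observed = counter.load();
        while (!counter.compare_exchange_weak(observed, observed + 1)) {
            // Lost the race; 'observed' now holds the fresh value. Retry.
        }
    }

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 8; ++i)
            workers.emplace_back([] {
                for (int j = 0; j < 100000; ++j) increment_lock_free();
            });
        for (auto& w : workers) w.join();
        std::cout << counter << "\n";  // always prints 800000
    }

No thread ever takes a lock, yet no thread is guaranteed a bound on its own retries; that unbounded but collectively productive retrying is exactly what separates lock-free from wait-free.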

By contrast, wait-free algorithms ensure that in addition to all CPUs continuing to do useful work, no computation can ever be blocked by another computation. Wait-free algorithms have stronger guarantees than lock-free algorithms, and ensure high throughput without sacrificing the latency of any particular transaction. They're also much harder to implement, test, and debug. The lockless page cache patches to the Linux kernel are an example of a wait-free system.
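
For contrast, here is a wait-free version of the same counter, sketched under the assumption of hardware with a native atomic add (e.g. x86's LOCK XADD): every thread finishes its increment in a bounded number of steps no matter what the other threads are doing.

    #include <atomic>

    std::atomic<long> counter{0};

    // Wait-free increment: a single atomic read-modify-write with no
    // retry loop. The hardware arbitrates concurrent adds, so each call
    // completes in a bounded number of steps regardless of contention.
    void increment_wait_free() {
        counter.fetch_add(1);
    }

One caveat: on LL/SC architectures the compiler may lower fetch_add to a retry loop, which demotes this to merely lock-free; the wait-free guarantee here depends on the instruction set.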

In a situation where a system handles dozens of concurrent transactions and has soft latency requirements, lock-free systems are a good compromise between development complexity and high concurrency requirements. A database server for a website is a good candidate for a lock-free design. While any given transaction might block, there are always more transactions to process in the meantime, so the CPUs will never stay idle. The challenge is to build a transaction scheduler that maintains a good mean latency and a well-bounded standard deviation.

In a scenario where a system has roughly as many concurrent transactions as CPU cores, or has hard real-time requirements, the developers need to spend the extra time to build wait-free systems. In these cases blocking a single transaction isn't acceptable, either because there are no other transactions for the CPUs to handle, which wastes throughput, or because a given transaction needs to complete within a well-defined, non-probabilistic time period. Nuclear reactor control software is a good candidate for a wait-free system.

RethinkDB is a lock-free system. On a machine with N CPU cores, under most common workloads, we can guarantee that no core will stay idle and no IO pipeline capacity is wasted as long as there are roughly N * 4 concurrent transactions. For example, on an eight-core system, no piece of hardware will sit idle if RethinkDB is handling roughly 32 concurrent transactions or more. If there are fewer than 32 transactions, you've likely overpaid for some of the cores. (Of course, if you only have 32 concurrent transactions, you don't need an eight-core machine.)

Wikipedia: http://en.wikipedia.org/wiki/Non-blocking_synchronization

IBM developerWorks (Chinese): http://www.ibm.com/developerworks/cn/linux/l-cn-lockfree/index.html

STM (an MVCC implementation): http://blog.hongtium.com/software-transactional-memory
