Notes on the Asynchronous I/O implementation

Fixed Thread Pool

An asynchronous channel group associated with a fixed thread pool of size N submits N tasks that wait on I/O or completion events from the kernel. Each task simply dequeues an event, does any necessary I/O completion, and then dispatches directly to the user's completion handler that consumes the result. When the completion handler terminates normally, the task returns to waiting on the next event. If the completion handler terminates due to an uncaught error or runtime exception, the task terminates and is immediately replaced by a new task. This is depicted in the following diagram:

Summary: an asynchronous channel group is associated with a fixed-size thread pool (N threads) and submits N tasks, each of which waits for I/O or completion events from the kernel. A task simply dequeues an event, performs any necessary I/O completion, and then dispatches the result directly to the user's completion handler. When the handler returns normally, the task goes back to waiting for the next event. If the handler terminates with an uncaught error or runtime exception, the task terminates and is immediately replaced by a new one.
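As a minimal sketch, a fixed group like the one described above can be created with `AsynchronousChannelGroup.withFixedThreadPool`; the pool size of 4 and the default thread factory here are illustrative choices, not values mandated by the document:

```java
import java.io.IOException;
import java.nio.channels.AsynchronousChannelGroup;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedGroupDemo {
    // Create a group whose N pooled threads both wait on kernel events
    // and invoke completion handlers directly (N = 4 is illustrative).
    static boolean demo() throws IOException, InterruptedException {
        AsynchronousChannelGroup group =
            AsynchronousChannelGroup.withFixedThreadPool(4, Executors.defaultThreadFactory());
        boolean active = !group.isShutdown();
        group.shutdownNow();                         // release the pooled threads
        group.awaitTermination(5, TimeUnit.SECONDS);
        return active;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo() ? "group was active" : "group not active");
    }
}
```

Channels opened against this group have their completion handlers run on one of the four pooled threads, which is why the text warns against handlers that block indefinitely.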

This configuration is relatively simple and delivers good performance for suitably designed applications. Note that it does not support the creation of threads on demand or trimming back of the thread pool when idle. It is also not suitable for applications with completion handler implementations that block indefinitely; if all threads are blocked in completion handlers then I/O events cannot be serviced (forcing the operating system to queue accepted connections, for example). Tuning requires choosing an appropriate value for N.

Summary: this configuration is relatively simple and performs well for suitably designed applications. Note that it does not support creating threads on demand or trimming the pool when threads are idle. It is also unsuitable when completion handlers may block indefinitely: if all threads are blocked in completion handlers, I/O events cannot be serviced (forcing the operating system to queue accepted connections, for example).

User-supplied Thread Pool

An asynchronous channel group associated with a user-supplied thread pool submits tasks to the thread pool that simply invoke the user's completion handler. I/O and completion events from the kernel are handled by one or more internal threads that are not visible to the user application. This configuration is depicted in the following diagram:

This configuration works with most thread pools (cached or fixed) with the following exceptions:

  1. The thread pool must support unbounded queueing.
  2. The thread that invokes the execute method must never execute the task directly. That is, internal threads do not invoke completion handlers.
  3. Thread pool keep-alive must be disabled on older editions of Windows. This restriction arises because I/O operations are tied to the initiating thread by the kernel.

This configuration delivers good performance despite the hand-off per I/O operation. When combined with a thread pool that creates threads on demand, it is suitable for use with applications that have completion handlers that occasionally need to block for long periods (or indefinitely). The value of M, the number of internal threads, is not exposed in the API and requires a system property to configure (default is 1).
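A user-supplied pool like the one described above is attached with `AsynchronousChannelGroup.withThreadPool`; the pool size of 8 is illustrative. A pool from `Executors.newFixedThreadPool` is used here because it satisfies requirement 1 (its work queue is unbounded) and requirement 2 (`execute` never runs the task on the calling thread):

```java
import java.io.IOException;
import java.nio.channels.AsynchronousChannelGroup;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UserPoolDemo {
    static boolean demo() throws IOException, InterruptedException {
        // Fixed pool with an unbounded work queue: tasks that invoke
        // completion handlers are queued rather than run by the submitter.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        AsynchronousChannelGroup group = AsynchronousChannelGroup.withThreadPool(pool);
        boolean active = !group.isShutdown();
        // Ownership of the executor is transferred to the group, so
        // shutting down the group also shuts down the pool.
        group.shutdownNow();
        group.awaitTermination(5, TimeUnit.SECONDS);
        return active;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo() ? "ok" : "not ok");
    }
}
```

With this configuration the internal threads only dequeue kernel events and hand off to the pool, which is the hand-off per I/O operation the paragraph above refers to.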

Default Thread Pool

Simpler applications that do not create their own asynchronous channel group will use the default group that has an associated thread pool that is created automatically. This thread pool is a hybrid of the above configurations. It is a cached thread pool that creates threads on demand (as it may be shared by different applications or libraries that use completion handlers that invoke blocking operations).

As with the fixed thread pool configuration it has N threads that dequeue events and dispatch directly to the user's completion handler. The value of N defaults to the number of hardware threads but may be configured by a system property. In addition to N threads, there is one additional internal thread that dequeues events and submits tasks to the thread pool to invoke completion handlers. This internal thread ensures that the system doesn't stall when all of the fixed threads are blocked, or otherwise busy, executing completion handlers.
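Any channel opened without an explicit group uses this default group. A minimal sketch (the loopback address and ephemeral port are illustrative); the system properties named in the comment are the documented knobs for the default pool:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousServerSocketChannel;

public class DefaultGroupDemo {
    static boolean demo() throws IOException {
        // open() with no group argument binds the channel to the default
        // group. N defaults to the number of hardware threads and can be
        // changed with -Djava.nio.channels.DefaultThreadPool.initialSize;
        // the thread factory is configurable with
        // -Djava.nio.channels.DefaultThreadPool.threadFactory.
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        boolean open = server.isOpen();
        server.close();
        return open;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo() ? "default group in use" : "failed");
    }
}
```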

What happens when an I/O operation completes immediately?

When an I/O operation completes immediately then the API allows for the completion handler to be invoked directly by the initiating thread if the initiating thread itself is one of the pooled threads. This creates the possibility that there may be several completion handlers on a thread's stack. The following diagram depicts a thread stack where a read or write method has completed immediately and the completion handler is invoked directly. The completion handler, in turn, initiates another I/O operation that completes immediately and so its completion handler is invoked directly, and so on.

By default, the implementation allows up to 16 I/O operations to complete directly on the initiating thread before requiring that all completion handlers on the thread stack terminate. This policy helps to avoid stack overflow, and also the starvation that could arise if a thread initiates many I/O operations that complete immediately. The maximum number of completion handler frames allowed on a thread stack can be configured by a system property where required. A future addition to the API may allow an application to specify how I/O operations that complete immediately are handled.
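The stacking behaviour described above arises from the common pattern of re-initiating a read from inside `completed()`. The following is a sketch of that pattern over a loopback connection (addresses, buffer size, and the `"hello"` payload are all illustrative); each `read` that finds data already available may invoke the handler directly on the initiating thread, adding a frame to the stack up to the limit described above:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class ChainedReadDemo {
    static String demo() throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("127.0.0.1", port)).get();
        AsynchronousSocketChannel peer = server.accept().get();
        peer.write(ByteBuffer.wrap("hello".getBytes())).get();
        peer.shutdownOutput();                       // reader will see EOF (-1)

        StringBuilder received = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            public void completed(Integer n, ByteBuffer b) {
                if (n < 0) { done.countDown(); return; }   // EOF: stop chaining
                b.flip();
                while (b.hasRemaining()) received.append((char) b.get());
                b.clear();
                client.read(b, b, this);   // re-initiate from inside the handler
            }
            public void failed(Throwable t, ByteBuffer b) { done.countDown(); }
        });
        done.await(5, TimeUnit.SECONDS);
        client.close(); peer.close(); server.close();
        return received.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

When the chained `read` cannot complete immediately, the handler is instead invoked later by a pooled thread, which is what bounds the stack depth.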

Direct Buffers

The asynchronous I/O implementation is optimized for use with direct buffers. As with SocketChannels, all I/O operations are done using direct buffers. If an application initiates an I/O operation with a non-direct buffer then the buffer is transparently substituted with a direct buffer by the implementation.

By default, the maximum memory that may be allocated to direct buffers is equal to the maximum Java heap size (Runtime.maxMemory). This may be configured, where required, using the MaxDirectMemorySize VM option (e.g. -XX:MaxDirectMemorySize=128m).

The MBean browser in jconsole can be used to monitor the resources associated with direct buffers.
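The statistics that jconsole shows under the java.nio:type=BufferPool MBeans can also be read in-process through the BufferPoolMXBean platform MXBean; the 1 MiB allocation below is only there to guarantee the pool is non-empty:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectBufferStats {
    // Returns the number of buffers in the "direct" buffer pool, i.e. the
    // same count visible in jconsole under java.nio:type=BufferPool,name=direct.
    static long directCount() {
        ByteBuffer.allocateDirect(1 << 20);  // ensure at least one direct buffer exists
        List<BufferPoolMXBean> pools =
            ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            if (pool.getName().equals("direct")) {
                return pool.getCount();
            }
        }
        return -1;   // no direct pool found (not expected on a standard JVM)
    }

    public static void main(String[] args) {
        System.out.println("direct buffers: " + directCount());
    }
}
```

The same MXBean also exposes getMemoryUsed and getTotalCapacity, which is what you would watch when tuning MaxDirectMemorySize.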

Date: 2024-10-11 23:03:04
