The seminal Reactor pattern paper, translated by solidmango

The Reactor: An Object-Oriented Wrapper for Event-Driven Port Monitoring and Service Demultiplexing

Douglas C. Schmidt

An earlier version of this paper appeared in the February 1993 issue of the C++ Report.

1. Introduction

This is part one of the third article in a series that describes techniques for encapsulating existing operating system (OS) interprocess communication (IPC) services within object-oriented (OO) C++ wrappers.

The first article explains the main principles and motivations for OO wrappers, which simplify the development of correct, concise, portable, and efficient applications.

The second article describes an OO wrapper called IPC SAP that encapsulates the BSD socket and System V TLI system call Application Programmatic Interfaces (APIs).

IPC SAP enables application programs to access local and remote IPC protocol families such as TCP/IP via a type-secure, object-oriented interface.

This third article presents an OO wrapper for the I/O port monitoring and timer-based event notification facilities provided by the select and poll system calls.

Both select and poll enable applications to specify a time-out interval to wait for the occurrence of different types of input and output events on one or more I/O descriptors.

Select and poll detect when certain I/O or timer events occur and demultiplex these events to the appropriate application(s).

As with many other OS APIs, the event demultiplexing interfaces are complicated, error-prone, non-portable, and not easily extensible.

An extensible OO framework called the Reactor was developed to overcome these limitations.

The Reactor provides a set of higher-level programming abstractions that simplify the design and implementation of event-driven distributed applications.

The Reactor also shields developers from many error-prone details in the existing event demultiplexing APIs and improves application portability between different OS variants.

The Reactor is somewhat different than the IPC SAP class wrapper described in [2]. IPC SAP added a relatively “thin” OO veneer to the BSD socket and System V TLI APIs.

On the other hand, the Reactor provides a significantly richer set of abstractions than those offered directly by select or poll.

In particular, the Reactor integrates I/O-based port monitoring together with timer-based event notification to provide a general framework for demultiplexing application communication services.

Port monitoring is used by event-driven network servers that perform I/O on many connections simultaneously.

Since these servers must handle multiple connections it is not feasible to perform blocking I/O on a single connection indefinitely.

Likewise, the timer-based APIs enable applications to register certain operations that are periodically or aperiodically activated via a centralized timer facility controlled by the Reactor.

This topic is divided into two parts.

Part one (presented in this article) describes a distributed logging facility that motivates the need for efficient event demultiplexing, examines several alternative solution approaches, evaluates the advantages and disadvantages of these alternatives, and compares them with the Reactor.

Part two (appearing in a subsequent issue of the C++ Report) focuses on the OO design aspects of the Reactor.

In addition, it discusses the design and implementation of the distributed logging facility.

This example illustrates precisely how the Reactor simplifies the development of event-driven distributed applications.

2. Example: A Distributed Logging Facility

To motivate the utility of event demultiplexing mechanisms, this section describes the requirements and behavior of a distributed logging facility that handles event-driven I/O from multiple sources “simultaneously.”

As shown in Figure 1, the distributed logging facility offers several services to applications that operate concurrently throughout a network environment.

First, it provides a centralized location for recording certain status information used to simplify the management and tracking of distributed application behavior.

To facilitate this, the client daemon time-stamps outgoing logging records to allow chronological tracing and reconstruction of the execution order of multiple concurrent processes executing on separate host machines.

Second, the facility also enables the prioritized delivery of logging records. These records are received and forwarded by the client daemon in the order of their importance, rather than in the order they were originally generated.

Centralizing the logging activities of many distributed applications within a single server is also useful since it serializes access to shared output devices such as consoles, printers, files, or network management databases.

In contrast, without such a centralized facility, it becomes difficult to monitor and debug applications consisting of multiple concurrent processes.

For example, the output from ordinary C stdio library subroutines (such as fputs and printf) that are called simultaneously by multiple processes or threads is often scrambled together when it is displayed in a single window or console.

The distributed logging facility is designed using a client/server architecture.

The server logging daemon collects, formats, and outputs logging records forwarded from client logging daemons running on multiple hosts throughout a local and/or wide-area network.

Output from the logging server may be redirected to various devices such as printers, persistent storage repositories, or logging management consoles.

As shown in Figure 1, the InterProcess Communication (IPC) structure of the logging facility involves several levels of demultiplexing.

For instance, each client host in the network contains multiple application processes (such as P1, P2, and P3) that may participate with the distributed logging facility.

Each participating process uses the application logging API depicted in the rectangular boxes in Figure 1 to format debugging traces or error diagnostics into logging records.

A logging record is an object containing several header fields and a payload with a maximum size of approximately 1K bytes.

When invoked by an application process, the Log_Msg::log API prepends the current process identifier and program name to the record.

It then uses the “record-oriented” named pipe IPC mechanism to demultiplex these composite logging records onto a single client logging daemon running on each host machine.

The client daemon prepends a time-stamp to the record and then employs a remote IPC service (such as TCP or RPC) to demultiplex the record into a server logging daemon running on a designated host in the network.

The server operates in an event-driven manner, processing logging records as they arrive from multiple client daemons.

Depending on the logging behavior of the participating applications, the logging records may be sent by arbitrary clients and arrive at the server daemon at arbitrary time intervals.

A separate TCP stream connection is established between each client logging daemon and the designated server logging daemon.

Each client connection is represented by a unique I/O descriptor in the server.

In addition, the server also maintains a dedicated I/O descriptor to accept new connection requests from client daemons that want to participate with the distributed logging facility.

During connection establishment the server caches the client’s host name (illustrated by the ovals in the logging server daemon), and uses this information to identify the client in the formatted records it prints to the output device(s).

The complete design and implementation of the distributed logging facility is described in [3].

The remainder of the current article presents the necessary background material by exploring several alternative mechanisms for handling I/O from multiple sources.

3. Operating System Event Demultiplexing

Modern operating systems such as UNIX, Windows NT, and OS/2 offer several techniques that allow applications to perform I/O on multiple descriptors “simultaneously.”

This section describes four alternatives and compares and contrasts their advantages and disadvantages.

To focus the discussion, each alternative is characterized in terms of the distributed logging facility described in Section 2 above.

In particular, each section presents a skeletal server logging daemon implemented with the alternative being discussed.

To save space and increase clarity, the examples utilize the OO IPC SAP socket-wrapper library described in a previous C++ Report article [2].

The handle_logging_record function shown in Figure 2 is also invoked by all the example server daemons.

This function is responsible for receiving and processing the logging records and writing them to the appropriate output device.

Any synchronization mechanisms required to serialize access to the output device(s) are also performed in this function.

In general, the concurrent multi-process and multi-thread approaches are somewhat more complicated to develop since output must be serialized to avoid scrambling the logging records generated from all the separate processes.

To accomplish this, the concurrent server daemons cooperate by using some form of synchronization mechanisms (such as semaphores, locks, or other IPC mechanisms like FIFOs or message queues) in the handle_logging_record subroutine.

4. Summary

This article presents the background material necessary to understand the behavior, advantages, and disadvantages of existing UNIX mechanisms for handling multiple sources of I/O in a network application.

An OO wrapper called the Reactor has been developed to encapsulate and overcome the limitations with the select and poll event demultiplexing system calls.

The object-oriented design and implementation of the Reactor is explored in greater detail in part two of this article (appearing in the next C++ Report).

In addition to describing the class relationships and inheritance hierarchies, the follow-up article presents an extended example involving the distributed logging facility.

This example illustrates how the Reactor simplifies the development of event-driven network servers that manage multiple client connections simultaneously.
