My earlier Hadoop project was built on version 0.20.2; after looking into it, I realized what I had learned was the original map/reduce model.
The official description:
1.1.X - current stable version, 1.1 release
1.2.X - current beta version, 1.2 release
2.X.X - current alpha version
0.23.X - similar to 2.X.X but missing NN HA.
0.22.X - does not include security
0.20.203.X - old legacy stable version
0.20.X - old legacy version
Summary:
0.20 / 0.22 / 1.1 / CDH3 series: the original map/reduce model, stable releases
0.23 / 2.X / CDH4 series: the YARN model, the new generation
Opening the Hadoop website once more, I decided to try translating the section that introduces YARN, learning the material and practicing my English at the same time.
YARN: Apache Hadoop's Next-Generation MapReduce
In hadoop-0.23, MapReduce has undergone a complete overhaul; the result is what we now call MapReduce 2.0 (MRv2), or YARN.
The fundamental idea of MRv2 is to split the two major functions of the JobTracker, resource management and job scheduling/monitoring, into separate daemons: a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job in the classical Map-Reduce sense or a DAG of jobs.
The ResourceManager, together with the per-node slave, the NodeManager (NM), forms the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system.
The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
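To make that division of labor concrete, here is a minimal ApplicationMaster skeleton. It uses the AMRMClient/NMClient convenience libraries that appeared in later Hadoop 2.x releases (hadoop-0.23 itself exposed only lower-level protocols), and the launched command is just a placeholder, so treat this as a sketch of the flow, not 0.23-era code: register with the RM, negotiate a container, then ask an NM to run a task in it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

import java.util.Collections;

public class MiniAppMaster {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();

    // Register this ApplicationMaster with the ResourceManager.
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(conf);
    rmClient.start();
    rmClient.registerApplicationMaster("", 0, "");

    // Ask the Scheduler for one container (the Container notion is
    // described below).
    Resource capability = Resource.newInstance(1024, 1);
    rmClient.addContainerRequest(
        new ContainerRequest(capability, null, null, Priority.newInstance(0)));

    // Work with the NodeManager(s) to launch a task in each granted container.
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(conf);
    nmClient.start();

    boolean launched = false;
    while (!launched) {
      for (Container container : rmClient.allocate(0.0f).getAllocatedContainers()) {
        ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
            null, null,
            Collections.singletonList("echo hello-yarn"), // placeholder task command
            null, null, null);
        nmClient.startContainer(container, ctx);
        launched = true;
      }
      Thread.sleep(100);
    }

    rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
  }
}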
The ResourceManager has two main components: the Scheduler and the ApplicationsManager.
The Scheduler is responsible for allocating resources to the various running applications, subject to the familiar constraints of capacities, queues, and so on. It is a pure scheduler in the sense that it performs no monitoring or tracking of application status, and it offers no guarantee that tasks which fail through application errors or hardware faults will be restarted. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so using the abstract notion of a resource Container, which incorporates elements such as memory, cpu, disk, and network. In the first version, only memory is supported.
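A tiny snippet to show what the Container abstraction looks like in code, again using the record classes from the later 2.x client libraries; note that in the first YARN release only the memory dimension was actually enforced, with virtual cores added afterwards and disk/network still on the roadmap.

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class ResourceDemo {
  public static void main(String[] args) {
    // A Container is granted against a Resource "capability" that names
    // how much of each dimension the application needs.
    Resource capability = Resource.newInstance(2048 /* MB of memory */, 4 /* vcores */);

    // The Scheduler matches outstanding requests like this one against
    // free capacity on the cluster's nodes.
    ContainerRequest req = new ContainerRequest(
        capability, null /* nodes */, null /* racks */, Priority.newInstance(10));
    System.out.println(req);
  }
}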
The Scheduler has a pluggable policy plug-in responsible for partitioning the cluster resources among the various queues, applications, and so on. The current Map-Reduce schedulers, such as the CapacityScheduler and the FairScheduler, are typical examples of such plug-ins.
The CapacityScheduler supports hierarchical queues to allow for more predictable sharing of cluster resources.
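Which policy plug-in the ResourceManager loads is a configuration choice. A minimal sketch, assuming the yarn.resourcemanager.scheduler.class property from the Hadoop 2.x line; the queue layout itself lives in capacity-scheduler.xml, and the dev/prod queue names below are made up for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SchedulerChoice {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();

    // Swap the pluggable policy by naming a different scheduler class.
    conf.set("yarn.resourcemanager.scheduler.class",
        "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler");

    // Hierarchical queues are then declared in capacity-scheduler.xml, e.g.
    //   yarn.scheduler.capacity.root.queues        = dev,prod
    //   yarn.scheduler.capacity.root.dev.capacity  = 30
    //   yarn.scheduler.capacity.root.prod.capacity = 70
    // giving each team a predictable share of the cluster.
    System.out.println(conf.get("yarn.resourcemanager.scheduler.class"));
  }
}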
The ApplicationsManager is responsible for accepting job submissions, negotiating the first container in which the application-specific ApplicationMaster executes, and providing the service for restarting the ApplicationMaster container when it fails for whatever reason.
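From the client side, handing a submission to the ApplicationsManager looks roughly like the sketch below, using the YarnClient library from later 2.x releases; the application name and the AM launch command are illustrative placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

import java.util.Collections;

public class SubmitApp {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();

    // The ApplicationsManager accepts this submission and negotiates the
    // first container, in which the command below (our AM) is launched.
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("demo-app");
    ctx.setResource(Resource.newInstance(1024, 1)); // AM container size
    ctx.setAMContainerSpec(ContainerLaunchContext.newInstance(
        null, null,
        Collections.singletonList("java MiniAppMaster"), // hypothetical AM command
        null, null, null));

    ApplicationId appId = yarnClient.submitApplication(ctx);
    System.out.println("Submitted " + appId);
  }
}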
The NodeManager is the per-machine framework agent responsible for containers: it monitors their resource usage (cpu, memory, disk, network) and reports the same to the ResourceManager/Scheduler.
The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status, and monitoring their progress.
MRv2 maintains API compatibility with the previous stable release (hadoop-1.x). This means all existing Map-Reduce jobs should run unchanged on top of MRv2 with just a recompile.
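To make that compatibility claim concrete, here is the stock WordCount job from the Hadoop tutorial: the same source compiles against both hadoop-1.x and MRv2, with only the jars on the classpath changing at recompile time.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordCount {

  // Emits (word, 1) for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Sums the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    // This driver is written against the hadoop-1.x API and runs as-is
    // on MRv2 after a recompile.
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}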
The above is my translation of the original text on the official website; if any of it is off, corrections are welcome, thanks!