Understanding Storm workers, parallelism, and related concepts

Tuning Storm is very important, second only to writing correct code in the first place. Fortunately, the official Storm site has an introduction to workers, executors, and tasks: http://storm.incubator.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html

That documentation page is taken from this blog post: http://www.michael-noll.com/blog/2012/10/16/understanding-the-parallelism-of-a-storm-topology/

Here is a translation of it, to reinforce the understanding:

What makes a running topology: worker processes, executors and tasks

A worker process executes a subset of a topology, and runs in its own JVM. A worker process belongs to a specific topology and may run one or more executors for one or more components (spouts or bolts) of this topology. A running topology consists of many such processes running on many machines within a Storm cluster.

An executor is a thread that is spawned by a worker process and runs within the worker’s JVM. An executor may run one or more tasks for the same component (spout or bolt). An executor always has one thread that it uses for all of its tasks, which means that tasks run serially on an executor.

A worker process executes a subset of a topology and runs in its own JVM. A worker process belongs to one specific topology and runs one or more executor threads for one or more of that topology's components (spouts or bolts). A running topology consists of many such processes distributed across the machines of a Storm cluster.
An executor is a thread spawned by a worker process and running inside the worker's JVM. An executor may run one or more tasks of the same component (spout or bolt), but it always has exactly one thread for all of its tasks, which means the tasks within an executor run serially.
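As a concrete illustration of these two knobs, here is a minimal Java sketch of how workers and executors are typically configured when building a topology. The spout/bolt classes BlueSpout and GreenBolt and the topology name are placeholders, not from this article; the package prefix is org.apache.storm in Storm 1.0+, and backtype.storm in the older releases this article refers to:

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;

    public class ParallelismSketch {
        public static void main(String[] args) throws Exception {
            Config conf = new Config();
            conf.setNumWorkers(2);   // this topology gets 2 worker processes (JVMs) across the cluster

            TopologyBuilder builder = new TopologyBuilder();
            // the parallelism hint is the initial number of executors (threads) for a component
            builder.setSpout("blue-spout", new BlueSpout(), 2);
            builder.setBolt("green-bolt", new GreenBolt(), 2)
                   .shuffleGrouping("blue-spout");

            StormSubmitter.submitTopology("mytopology", conf, builder.createTopology());
        }
    }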

A task performs the actual data processing and is run within its parent executor’s thread of execution. Each spout or bolt that you implement in your code executes as many tasks across the cluster. The number of tasks for a component is always the same throughout the lifetime of a topology, but the number of executors (threads) for a component can change over time. This means that the following condition holds true: #threads <= #tasks. By default, the number of tasks is set to be the same as the number of executors, i.e. Storm will run one task per thread (which is usually what you want anyways).

Also be aware that:

  • The number of executor threads can be changed after the topology has been started (see storm rebalance command below).
  • The number of tasks of a topology is static.

See Understanding the Internal Message Buffers of Storm for another view on the various threads that are running within the lifetime of a worker process and its associated executors and tasks.

A task performs the actual data processing and runs within its parent executor's thread. Each spout or bolt you implement runs as multiple tasks across the cluster. The number of tasks of a component stays the same throughout the lifetime of a topology, but the number of executors (threads) can be adjusted over time. This means #threads <= #tasks. By default, #tasks = #executors, i.e. Storm assigns one task per executor thread, which is usually what you want anyway.
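To make the #threads <= #tasks relation concrete, here is a sketch of the setNumTasks knob, reusing the hypothetical GreenBolt from the sketch above: the parallelism hint sets the initial executor count, while setNumTasks fixes the task count for the topology's lifetime.

    // 2 executors (threads) but 4 tasks: each executor runs 2 tasks, serially.
    // The 4 tasks stay fixed for the topology's lifetime; the 2 executors can
    // later be changed with "storm rebalance" (see below).
    builder.setBolt("green-bolt", new GreenBolt(), 2)
           .setNumTasks(4)
           .shuffleGrouping("blue-spout");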

Note:

1. The number of executor threads can be changed after the topology has been started (with storm rebalance; see the example after this list).

2. The number of tasks of a topology is static.
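The rebalance in note 1 is done with the storm command-line client. For example (the topology and component names are placeholders carried over from the sketches above), to reconfigure topology mytopology to 5 worker processes, 3 executors for the spout blue-spout, and 10 executors for the bolt green-bolt:

    storm rebalance mytopology -n 5 -e blue-spout=3 -e green-bolt=10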

The blog post Understanding the Internal Message Buffers of Storm gives another angle on workers, executors, and tasks; I will translate it later.

To be continued...
