Spark version: spark-1.1.0-bin-hadoop2.4 (download: http://spark.apache.org/downloads.html)
For the server environment setup, see the previous post: HBase CentOS production environment configuration notes.
(hbase-r is the ResourceManager; hbase-1, hbase-2, and hbase-3 are NodeManagers)
1. Installation and configuration (yarn-cluster mode documentation: http://spark.apache.org/docs/latest/running-on-yarn.html)
When a program runs in yarn-cluster mode, Spark uploads the application jar to HDFS and then, driven entirely by the YARN configuration, runs it distributed across the NodeManagers. In this mode there is therefore no need to configure Spark's own master and slaves.
(1) Install Scala
Download the rpm package and install it.
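On CentOS this might look like the following sketch. The Scala version and download URL are assumptions (Spark 1.1.0 is built against Scala 2.10, so a 2.10.x release is a reasonable choice); substitute the release you actually use.

```shell
# Assumed version/URL -- adjust to the Scala release you actually use.
wget http://www.scala-lang.org/files/archive/scala-2.10.4.rpm
rpm -ivh scala-2.10.4.rpm
scala -version   # verify the installation
```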
(2) Spark will be installed on all machines: hbase-0, hbase-r, hbase-1, hbase-2, hbase-3.
Copy the files from the extracted archive to /hbase/spark; the configuration file paths below are all relative to this directory. Once everything is configured, the installation directory, environment variables, and so on will be replicated to all machines.
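Once one node is fully configured, replicating it can be done with a loop like this sketch (rsync over ssh; the host list is the one above, and passwordless ssh between the nodes is assumed):

```shell
# Push the configured Spark directory from the current node (hbase-0)
# to the rest of the cluster. Assumes passwordless ssh is set up.
for host in hbase-r hbase-1 hbase-2 hbase-3; do
    rsync -az /hbase/spark/ "$host":/hbase/spark/
    # also replicate the environment variables
    scp ~/.bashrc "$host":~/.bashrc
done
```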
(3) Environment variables, ~/.bashrc
export SPARK_HOME="/hbase/spark"
export SCALA_HOME="/usr/share/scala"
(4) Spark properties, conf/spark-defaults.conf
# options for yarn-cluster mode
spark.yarn.applicationMaster.waitTries 10
spark.yarn.submit.file.replication 1
spark.yarn.preserve.staging.files false
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.max.executor.failures 6
spark.yarn.historyServer.address hbase-r:10020
spark.yarn.executor.memoryOverhead 512
spark.yarn.driver.memoryOverhead 512
(5) In the firewall, allow all machines to reach each other on all ports over the internal network (opening specific port ranges individually is too tedious, since hadoop, hbase, spark, yarn, zookeeper, etc. all listen on a great many ports).
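With iptables on CentOS, a blanket internal-network rule might look like this sketch. The 192.168.1.0/24 subnet is a placeholder assumption; use your cluster's actual internal subnet.

```shell
# Run as root on every node: accept all traffic from the internal subnet.
# 192.168.1.0/24 is an assumed placeholder subnet.
iptables -I INPUT -s 192.168.1.0/24 -j ACCEPT
service iptables save   # persist across reboots (CentOS 6)
```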
2. Testing the Java example
./bin/spark-submit --class org.apache.spark.examples.JavaSparkPi \
    --master yarn-cluster \
    --num-executors 3 \
    --driver-memory 1024m \
    --executor-memory 1024m \
    --executor-cores 1 \
    lib/spark-examples*.jar 20
After a successful run, the console prints:
yarnAppState: FINISHED
distributedFinalState: SUCCEEDED
appTrackingUrl: http://hbase-r:18088/proxy/application_1414738706972_0011/A
Then open the appTrackingUrl; the page shows the result below, including FinalStatus: SUCCEEDED.
Application Overview
    User: webadmin
    Name: org.apache.spark.examples.JavaSparkPi
    Application Type: SPARK
    Application Tags:
    State: FINISHED
    FinalStatus: SUCCEEDED
    Started: 3-Nov-2014 15:17:19
    Elapsed: 43sec
    Tracking URL: History
    Diagnostics:

ApplicationMaster
    Attempt Number: 1
    Start Time: 3-Nov-2014 15:17:19
    Node: hbase-1:8042
    Logs: logs
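In yarn-cluster mode the driver runs inside YARN rather than on the submitting console, so the program's own stdout (including the computed value of Pi) ends up in the container logs. With log aggregation enabled, they can be fetched with the yarn CLI; the application id below is the one from the run above:

```shell
# Fetch the aggregated container logs for the finished application
# (requires yarn.log-aggregation-enable=true in yarn-site.xml).
yarn logs -applicationId application_1414738706972_0011
```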