Spark Learning Path (2): Distributed Installation of a Spark 2.3 HA Cluster [Repost]

Download the Spark installation package

From the official site

http://spark.apache.org/downloads.html

From the HUST mirror site

http://mirrors.hust.edu.cn/apache/

From the Tsinghua mirror site

https://mirrors.tuna.tsinghua.edu.cn/apache/

Prerequisites

  1. Java 8 installed successfully
  2. ZooKeeper installed successfully
  3. Hadoop 2.7.5 HA installed successfully
  4. Scala installed successfully (optional; the Spark processes can start without it)
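
A quick way to sanity-check these prerequisites on each node (a minimal sketch, not part of the original post; adjust to your environment):

java -version        # should report version 1.8.x
zkServer.sh status   # each ZooKeeper node should report Mode: leader or follower
jps                  # the Hadoop HA daemons (NameNode, DataNode, DFSZKFailoverController, ...) should be listed
scala -version       # only if Scala was installed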

Spark安装过程

Upload and extract the package

[[email protected] ~]$ ls
apps     data      exam        inithive.conf  movie     spark-2.3.0-bin-hadoop2.7.tgz  udf.jar
cookies  data.txt  executions  json.txt       projects  student                        zookeeper.out
course   emp       hive.sql    log            sougou    temp
[[email protected] ~]$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C apps/

Create a symbolic link to the installation directory

[[email protected] ~]$ cd apps/
[[email protected] apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10  zookeeper.out
[[email protected] apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[[email protected] apps]$ ll
total 36
drwxr-xr-x. 10 hadoop hadoop  4096 Mar 23 20:29 hadoop-2.7.5
drwxrwxr-x.  7 hadoop hadoop  4096 Mar 29 13:15 hbase-1.2.6
lrwxrwxrwx.  1 hadoop hadoop    26 Apr 20 13:48 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x. 13 hadoop hadoop  4096 Feb 23 03:42 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x. 10 hadoop hadoop  4096 Mar 23  2017 zookeeper-3.4.10
-rw-rw-r--.  1 hadoop hadoop 17559 Mar 29 13:37 zookeeper.out
[[email protected] apps]$

Modify the configuration files under spark/conf

(1) Enter the configuration directory

[[email protected] ~]$ cd apps/spark/conf/
[[email protected] conf]$ ll
total 36
-rw-r--r--. 1 hadoop hadoop  996 Feb 23 03:42 docker.properties.template
-rw-r--r--. 1 hadoop hadoop 1105 Feb 23 03:42 fairscheduler.xml.template
-rw-r--r--. 1 hadoop hadoop 2025 Feb 23 03:42 log4j.properties.template
-rw-r--r--. 1 hadoop hadoop 7801 Feb 23 03:42 metrics.properties.template
-rw-r--r--. 1 hadoop hadoop  865 Feb 23 03:42 slaves.template
-rw-r--r--. 1 hadoop hadoop 1292 Feb 23 03:42 spark-defaults.conf.template
-rwxr-xr-x. 1 hadoop hadoop 4221 Feb 23 03:42 spark-env.sh.template
[[email protected] conf]$

(2) Copy spark-env.sh.template to spark-env.sh and append the following configuration at the end of the file

[[email protected] conf]$ cp spark-env.sh.template spark-env.sh
[[email protected] conf]$ vi spark-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73
#export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export HADOOP_CONF_DIR=/home/hadoop/apps/hadoop-2.7.5/etc/hadoop
export SPARK_WORKER_MEMORY=500m
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"

Note:
The line export SPARK_MASTER_IP=hadoop1 must be left commented out; with ZooKeeper-based HA the Master is not pinned to a single host.
The Spark parameters configured here may differ from what you would use on a real cluster; they are kept small to suit a personal computer. If the memory settings are too large, the machines will run very slowly.
Explanation:
-Dspark.deploy.recoveryMode=ZOOKEEPER means the entire cluster state is maintained in, and recovered from, ZooKeeper. In other words, ZooKeeper provides Spark's HA: if the active Master dies, a standby Master must read the full cluster state from ZooKeeper and restore the state of all Workers, Drivers, and Applications before it can become the new active Master.
-Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 lists every machine that runs ZooKeeper and could host the active Master (I used four machines, so all four are listed).
-Dspark.deploy.zookeeper.dir=/spark
How does this dir differ from the dataDir in ZooKeeper's zoo.cfg?
-Dspark.deploy.zookeeper.dir is the znode path under which Spark stores its recovery metadata, i.e. the running state of Spark jobs; dataDir in zoo.cfg is where the ZooKeeper server itself keeps its data on local disk.
ZooKeeper holds all of the Spark cluster's state information, including every Worker, every Application, and every Driver.
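
Once the Masters are running, the contents of that znode can be inspected with the ZooKeeper CLI (a sketch, not from the original post; the child znode names may vary slightly by Spark version):

# Connect to any ZooKeeper node and list Spark's HA znode
zkCli.sh -server hadoop1:2181
ls /spark
# typically shows children such as leader_election and master_status,
# which hold the leader lock and the serialized Worker/Application/Driver state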

(3) Copy slaves.template to slaves

[[email protected] conf]$ cp slaves.template slaves
[[email protected] conf]$ vi slaves

Add the following content

hadoop1
hadoop2
hadoop3
hadoop4

(4) Distribute the installation directory to the other nodes

[[email protected] ~]$ cd apps/
[[email protected] apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop2:$PWD
[[email protected] apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop3:$PWD
[[email protected] apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop4:$PWD

Create a symbolic link on each of the other nodes as well

[[email protected] ~]$ cd apps/
[[email protected] apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10
[[email protected] apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[[email protected] apps]$ ll
total 16
drwxr-xr-x 10 hadoop hadoop 4096 Mar 23 20:29 hadoop-2.7.5
drwxrwxr-x  7 hadoop hadoop 4096 Mar 29 13:15 hbase-1.2.6
lrwxrwxrwx  1 hadoop hadoop   26 Apr 20 19:26 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x 13 hadoop hadoop 4096 Apr 20 19:24 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x 10 hadoop hadoop 4096 Mar 21 19:31 zookeeper-3.4.10
[[email protected] apps]$ 

Configure environment variables

This must be configured on every node

[[email protected] spark]$ vi ~/.bashrc 
#Spark
export SPARK_HOME=/home/hadoop/apps/spark
export PATH=$PATH:$SPARK_HOME/bin

Save the file and make it take effect immediately

[[email protected] spark]$ source ~/.bashrc
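
To confirm that the new PATH entry works (a quick check, not in the original post):

which spark-shell        # should resolve to /home/hadoop/apps/spark/bin/spark-shell
spark-submit --version   # should print the Spark 2.3.0 version banner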

Startup

Start the ZooKeeper cluster first

Run this on every node

[[email protected] ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[[email protected] ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[[email protected] ~]$

Then start the HDFS cluster and the Spark cluster

Start HDFS with start-dfs.sh on one node, then start the Spark Master and Workers by running the following on one node:

[[email protected] ~]$ cd apps/spark/sbin/
[[email protected] sbin]$ start-all.sh

Check the processes
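
The running daemons can be listed with jps on each node (a sketch; the exact set depends on what else is running there):

jps
# Spark's standalone daemons show up as Master and Worker;
# ZooKeeper appears as QuorumPeerMain, alongside the Hadoop daemons started earlier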

Problem

Checking the processes shows that only hadoop1 successfully started a Master process; the other three nodes did not, so they have to be started manually. On each of the three remaining nodes, go to /home/hadoop/apps/spark/sbin and run the following command.

[[email protected] ~]$ cd ~/apps/spark/sbin/
[[email protected] sbin]$ start-master.sh 

Check the processes again after running the command

Both the Master and Worker processes have now started successfully

Verification

Check the Master status in the web UI

hadoop1 is in the ALIVE state, while hadoop2, hadoop3, and hadoop4 are all in the STANDBY state

(Master web UI screenshots for the hadoop1, hadoop2, hadoop3, and hadoop4 nodes are not reproduced here.)

Verify HA failover

Manually kill the Master process on hadoop1 and watch whether failover to a standby happens automatically.
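
One way to do this (a sketch, assuming the Master runs as the hadoop user on hadoop1):

# On hadoop1: find the standalone Master's PID and kill it
jps | grep Master
kill -9 $(jps | grep ' Master$' | awk '{print $1}')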

After the Master process on hadoop1 has been killed, check the web UI again

hadoop1 node: since its Master process was killed, its web UI is no longer accessible

hadoop2 node: after the Master on hadoop1 was killed, the Master on hadoop2 successfully took over and is now in the ALIVE state

(Screenshots for the hadoop3 and hadoop4 nodes are not reproduced here.)

Running Spark programs on standalone

Run the first Spark program

[[email protected] ~]$ /home/hadoop/apps/spark/bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop1:7077 \
--executor-memory 500m \
--total-executor-cores 1 \
/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
100

The spark://hadoop1:7077 value is the Master URL shown on the Spark Master web UI.

Result: the job ends by printing a line of the form "Pi is roughly 3.14..."

Start the Spark shell

[[email protected] ~]$ /home/hadoop/apps/spark/bin/spark-shell \
--master spark://hadoop1:7077 \
--executor-memory 500m \
--total-executor-cores 1

Parameter description:
--master spark://hadoop1:7077 specifies the Master address
--executor-memory 500m: specifies 500 MB of memory available to each executor (worker)
--total-executor-cores 1: specifies that the whole application uses one CPU core across the cluster

Note:

If you start the Spark shell without specifying a master address, it still starts and can run programs; in that case Spark runs in local mode, which only launches a single process on the local machine and never connects to the cluster.
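
Local mode can also be requested explicitly, for example with two worker threads (illustrative only, not from the original post):

spark-shell --master local[2]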

The Spark shell initializes a SparkContext by default as the object sc; user code that needs it can use sc directly.

The Spark shell also initializes a SparkSession (the entry point for Spark SQL) by default as the object spark; user code that needs it can use spark directly.

Write a WordCount program in the Spark shell

(1) Create a hello.txt file and upload it to the /spark directory on HDFS

[[email protected] ~]$ vi hello.txt
[[email protected] ~]$ hadoop fs -mkdir -p /spark
[[email protected] ~]$ hadoop fs -put hello.txt /spark

The content of hello.txt is as follows

you,jump
i,jump
you,jump
i,jump
jump

(2) Write the Spark program in Scala in the Spark shell

scala>
sc.textFile("/spark/hello.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).saveAsTextFile("/spark/out")

Explanation:
sc is the SparkContext object, the entry point for submitting Spark programs
textFile("/spark/hello.txt") reads the data from HDFS
flatMap(_.split(",")) splits each line on "," and flattens the results
map((_,1)) turns each word into a (word, 1) tuple
reduceByKey(_+_) reduces by key, summing the values
saveAsTextFile("/spark/out") writes the result back to HDFS

(3) Check the result with HDFS commands

[[email protected] ~]$ hadoop fs -cat /spark/out/p*
(jump,5)
(you,2)
(i,2)
[[email protected] ~]$ 

Running Spark programs on YARN

Prerequisites

The ZooKeeper cluster, HDFS cluster, and YARN cluster must all have been started successfully

Start Spark on YARN

[[email protected] bin]$ spark-shell --master yarn --deploy-mode client

The command fails with an error (output not reproduced here).

Cause: the memory given to the containers was too small, so YARN killed the process outright, which then surfaces as RPC connection failures, ClosedChannelException, and similar errors.

Solution:

First stop the YARN service, then edit yarn-site.xml and add the following:

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>

Distribute the new yarn-site.xml to the corresponding directory on the other Hadoop nodes, and finally restart YARN.
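
An alternative that avoids disabling the virtual-memory check (not from the original post, sketched for Spark 2.3 on YARN) is to request more memory headroom for the containers, which raises their virtual-memory cap proportionally:

# Ask YARN for extra off-heap overhead per executor so the container is not killed
spark-shell --master yarn --deploy-mode client \
  --executor-memory 500m \
  --conf spark.yarn.executor.memoryOverhead=512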

Re-run the following command to start Spark on YARN

[[email protected] hadoop]$ spark-shell --master yarn --deploy-mode client

This time it starts successfully

Open the YARN web UI

Open the YARN web page at http://hadoop4:8088.
You can see the Spark shell application running.

Click the application ID link to see the application's details

Click the "ApplicationMaster" link

Run a program

scala> val array = Array(1,2,3,4,5)
array: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.makeRDD(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:26

scala> rdd.count
res0: Long = 5                                                                  

scala> 

Check the YARN web UI again

Check the executors

Run Spark's built-in SparkPi example

[[email protected] ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--driver-memory 500m \
--executor-memory 500m \
--executor-cores 1 \
/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
10

Execution log

[[email protected] ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--driver-memory 500m \
--executor-memory 500m \
--executor-cores 1 \
/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
10
2018-04-21 17:57:32 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-04-21 17:57:34 INFO  ConfiguredRMFailoverProxyProvider:100 - Failing over to rm2
2018-04-21 17:57:34 INFO  Client:54 - Requesting a new application from cluster with 4 NodeManagers
2018-04-21 17:57:34 INFO  Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-04-21 17:57:34 INFO  Client:54 - Will allocate AM container, with 884 MB memory including 384 MB overhead
2018-04-21 17:57:34 INFO  Client:54 - Setting up container launch context for our AM
2018-04-21 17:57:34 INFO  Client:54 - Setting up the launch environment for our AM container
2018-04-21 17:57:34 INFO  Client:54 - Preparing resources for our AM container
2018-04-21 17:57:36 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-04-21 17:57:39 INFO  Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_libs__8262081479435245591.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_libs__8262081479435245591.zip
2018-04-21 17:57:44 INFO  Client:54 - Uploading resource file:/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/spark-examples_2.11-2.3.0.jar
2018-04-21 17:57:44 INFO  Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_conf__2498510663663992254.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_conf__.zip
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing view acls to: hadoop
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing modify acls to: hadoop
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing view acls groups to:
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing modify acls groups to:
2018-04-21 17:57:44 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
2018-04-21 17:57:44 INFO  Client:54 - Submitting application application_1524303370510_0005 to ResourceManager
2018-04-21 17:57:44 INFO  YarnClientImpl:273 - Submitted application application_1524303370510_0005
2018-04-21 17:57:45 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:45 INFO  Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1524304664749
     final status: UNDEFINED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:57:46 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:47 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:48 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:49 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:50 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:51 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:52 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:53 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:54 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:54 INFO  Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.123.104
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1524304664749
     final status: UNDEFINED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:57:55 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:56 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:57 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:58 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:59 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:00 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:01 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:02 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:03 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:04 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:05 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:06 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:07 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:08 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:09 INFO  Client:54 - Application report for application_1524303370510_0005 (state: FINISHED)
2018-04-21 17:58:09 INFO  Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.123.104
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1524304664749
     final status: SUCCEEDED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:58:09 INFO  Client:54 - Deleted staging directory hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Shutdown hook called
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-06de6905-8067-4f1e-a0a0-bc8a51daf535
[[email protected] ~]$

Original article: https://www.cnblogs.com/cjunn/p/12232073.html
