Spark 2.1.0 standalone mode: master fails to start

After running start-master.sh, the log reports the following error:

starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/spark-2.1.0-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop1.out
[root@hadoop1 sbin]# cat /home/hadoop/spark-2.1.0-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop1.out
Spark Command: /home/hadoop/hadoop/jdk1.8.0_101/bin/java -cp /home/hadoop/spark-2.1.0-bin-hadoop2.7/conf/:/home/hadoop/spark-2.1.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host hadoop1 --port 7077 --webui-port 8080
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/03/04 21:09:01 INFO Master: Started daemon with process name: 14373@hadoop1
17/03/04 21:09:01 INFO SignalUtils: Registered signal handler for TERM
17/03/04 21:09:01 INFO SignalUtils: Registered signal handler for HUP
17/03/04 21:09:01 INFO SignalUtils: Registered signal handler for INT
17/03/04 21:09:01 WARN MasterArguments: SPARK_MASTER_IP is deprecated, please use SPARK_MASTER_HOST
17/03/04 21:09:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/04 21:09:02 INFO SecurityManager: Changing view acls to: root
17/03/04 21:09:02 INFO SecurityManager: Changing modify acls to: root
17/03/04 21:09:02 INFO SecurityManager: Changing view acls groups to:
17/03/04 21:09:02 INFO SecurityManager: Changing modify acls groups to:
17/03/04 21:09:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7077. Attempting port 7078.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7078. Attempting port 7079.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7079. Attempting port 7080.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7080. Attempting port 7081.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7081. Attempting port 7082.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7082. Attempting port 7083.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7083. Attempting port 7084.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7084. Attempting port 7085.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7085. Attempting port 7086.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7086. Attempting port 7087.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7087. Attempting port 7088.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7088. Attempting port 7089.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7089. Attempting port 7090.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7090. Attempting port 7091.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7091. Attempting port 7092.
17/03/04 21:09:02 WARN Utils: Service 'sparkMaster' could not bind on port 7092. Attempting port 7093.
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'sparkMaster' failed after 16 retries (starting from 7077)! Consider explicitly setting the appropriate port for the service 'sparkMaster' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)
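
The key part is the "Cannot assign requested address" BindException: the master tries to bind the sparkMaster service to whatever address the hostname passed via --host (here hadoop1) resolves to, and every attempt fails because that address is not configured on any local network interface (typically a stale or wrong /etc/hosts entry, or the machine's IP has changed). A rough way to compare the local addresses with what the hostname resolves to (hadoop1 is the hostname from the log above; substitute your own):

ip addr                  # list the addresses actually configured on this machine
getent hosts hadoop1     # show what the master hostname resolves to
cat /etc/hosts           # check the static mapping used for that name

If the resolved address does not appear in the interface list, the bind fails exactly as shown above.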

Solution:

Configure the following in spark-env.sh:

export SPARK_MASTER_HOST=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1

Then run the start script again and the master comes up normally.
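
To verify, a minimal check after restarting, run from the sbin directory (paths and ports are taken from the log above; the exact log file name depends on your user and host):

./start-master.sh
jps | grep -i master      # a Master process should be listed
tail -n 20 /home/hadoop/spark-2.1.0-bin-hadoop2.7/logs/spark-*-org.apache.spark.deploy.master.Master-1-*.out

The master web UI should then be reachable on port 8080 (the --webui-port shown in the startup command), and a local worker can register against spark://127.0.0.1:7077.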
