Spark on YARN fails to launch: the cause is a JDK version problem

./bin/spark-shell --master yarn

  

2019-07-01 12:20:13 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-07-01 12:20:29 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2019-07-01 12:20:55 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
        at $line3.$read$$iw$$iw.<init>(<console>:15)
        at $line3.$read$$iw.<init>(<console>:43)
        at $line3.$read.<init>(<console>:45)
        at $line3.$read$.<init>(<console>:49)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:7)
        at $line3.$eval$.$print(<console>:6)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
        at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
        at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
        at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:79)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
        at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:78)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:77)
        at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:110)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
        at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
        at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
        at org.apache.spark.repl.Main$.doMain(Main.scala:76)
        at org.apache.spark.repl.Main$.main(Main.scala:56)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2019-07-01 12:20:55 WARN  YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2019-07-01 12:20:55 WARN  MetricsSystem:66 - Stopping a MetricsSystem that is not running
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
  at scala.Option.getOrElse(Option.scala:121)

The root cause is that Spark 2.x checks the JDK version. The cluster had been switched to an older JDK, which Spark 2.x does not support: Spark 2.x requires JDK 1.8 or later, so switch the JDK back to a 1.8+ release. After that, add the configuration below to YARN's configuration file.
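
A minimal sketch of checking and switching the JDK is shown below; the install path /usr/local/jdk1.8.0_211 and the hadoop-env.sh / spark-env.sh locations are assumptions and should be adapted to your own layout:

# Check which JDK is currently on the PATH
java -version

# Point the shell environment at a JDK 1.8 installation (path is an assumption)
export JAVA_HOME=/usr/local/jdk1.8.0_211
export PATH=$JAVA_HOME/bin:$PATH

# Make sure Hadoop and Spark start with the same JDK
echo 'export JAVA_HOME=/usr/local/jdk1.8.0_211' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export JAVA_HOME=/usr/local/jdk1.8.0_211' >> $SPARK_HOME/conf/spark-env.sh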

vi yarn-site.xml

# Add the following properties inside the <configuration> element; they disable
# YARN's physical and virtual memory checks on containers

  

<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

Finally, and do not skip this step: restart Hadoop. Otherwise, running the shell again will still fail with the same error.
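
One way to restart YARN so the new settings take effect is sketched below; the scripts under $HADOOP_HOME/sbin are the standard Hadoop layout, adjust if your install differs:

# Restart YARN so the updated yarn-site.xml is picked up
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# Retry the shell
./bin/spark-shell --master yarn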

2019-07-01 12:31:36 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-07-01 12:32:02 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://master:4040
Spark context available as 'sc' (master = yarn, app id = application_1561955386005_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.3
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_211)
Type in expressions to have them evaluated.
Type :help for more information.

  

Original post: https://www.cnblogs.com/hanwen1014/p/11113385.html
