spark-shell fails to start Spark with an error

Preface

After installing CDH and Cloudera Manager offline, I used Cloudera Manager to install all of the bundled services, including HDFS, Hive, YARN, Spark, and HBase. The process was quite a saga, but I won't complain about it here; straight to the point.

Description

On a node where Spark is installed, I launched spark-shell full of anticipation, and then, like a bolt from the blue, it failed! The error message is as follows:

18/06/11 17:40:27 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
    at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:281)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:140)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:158)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:538)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1022)
    at $line3.$read$$iwC$$iwC.<init>(<console>:15)
    at $line3.$read$$iwC.<init>(<console>:25)
    at $line3.$read.<init>(<console>:27)
    at $line3.$read$.<init>(<console>:31)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.<init>(<console>:7)
    at $line3.$eval$.<clinit>(<console>)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
.................... (many more lines of the stack trace follow)

[Screenshot: Spark startup error 1]

A careful read of the error message shows that YARN simply does not have enough memory configured: Spark needs 1024+384 MB to start, but my YARN configuration allows at most 1024 MB per container, which cannot satisfy Spark's request, so the exception is thrown. The key error is the IllegalArgumentException at the top of the stack trace above.
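The 1024+384 MB in the message is the default executor memory (spark.executor.memory, 1 GB) plus the YARN memory overhead, which in this Spark line is roughly max(384 MB, 10% of the executor memory). If raising the YARN limits is not an option, a workaround is to request a smaller executor so the total fits under the 1024 MB cap; a minimal sketch, assuming 512 MB is actually enough for your workload:

# 512 MB executor + 384 MB overhead = 896 MB <= 1024 MB container cap
spark-shell --executor-memory 512m

The rest of this post takes the other route and raises the YARN limits instead.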

Solution

Log in to Cloudera Manager and click into the YARN (MR2 Included) service (never mind all the warnings and errors in my cluster; fixing the Spark problem is what matters here).

In the navigation bar, open the Configuration tab.

On the Configuration page, type yarn.scheduler.maximum-allocation-mb into the search box.

You can see that, just as the exception at Spark startup indicated, this parameter is set to 1 GB. Change it to 2 GB and click Save Changes.

Following the same steps, also change yarn.nodemanager.resource.memory-mb to 2 GB, click Save Changes, and restart YARN so the new settings take effect.

Back on the Spark node, I ran spark-shell again. Oddly, it still failed, but with a different error this time, no longer the one above:

18/06/11 17:46:46 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3530)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3513)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3495)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6649)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4420)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4390)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4363)
........................... (remaining, less important frames omitted)

[Screenshot: Spark startup error 2]

The key line is the AccessControlException: user root has no write access to the HDFS directory /user.

The cause is insufficient permissions for the user starting Spark: I started spark-shell as root, but writing under /user requires the hdfs user (note: hdfs is Hadoop's superuser). After switching to the hdfs user and launching spark-shell again, it started successfully.
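If you would rather keep using root, an alternative fix (a sketch, assuming a non-Kerberized cluster and that Spark only needs a writable HDFS home directory at /user/root) is to create that directory as the hdfs superuser and hand it over to root:

# create root's HDFS home directory and give root ownership of it
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root

Either way, the point is the same: whichever user launches spark-shell must be able to write its staging files under /user.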

Supplement

What yarn.scheduler.maximum-allocation-mb does: set in yarn-site.xml, it is the maximum memory, in MB, that the YARN ResourceManager (RM) will allocate to a single container; a container request above this value triggers an InvalidResourceRequestException.
What yarn.nodemanager.resource.memory-mb does: also set in yarn-site.xml, it is the amount of physical memory on a node (8192 MB by default) that can be handed out to YARN containers.
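For reference, this is what the two settings from this post would look like in a hand-maintained yarn-site.xml. This is a sketch only: on a CDH cluster, Cloudera Manager generates yarn-site.xml, so hand edits get overwritten and the change belongs in the CM UI as shown above.

<!-- yarn-site.xml: raise both memory limits to 2 GB (2048 MB) -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value> <!-- max memory a single container may request -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value> <!-- physical memory per node available to containers -->
</property>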

Original article: https://www.cnblogs.com/cenwei/p/9168490.html

