Spark startup errors: "JAVA_HOME not set" and "org.apache.hadoop.security.AccessControlException"

/home/bigdata/hadoop/spark-2.1.1-bin-hadoop2.7/sbin/start-all.sh

After startup, run the jps command: the master node should show a Master process and each worker node a Worker process. Then open the Spark web UI (on the master node) to check the cluster status: http://master01:8080/
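As a quick sanity check, the jps output might look roughly like this (the process IDs are illustrative only):

# On the master node
$ jps
12345 Master
23456 Jps

# On each worker node
$ jps
34567 Worker
45678 Jps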

At this point, the Spark cluster installation is complete.

1. Note: if you hit the "JAVA_HOME not set" error, add the following line to the spark-config.sh file in the sbin directory:

export JAVA_HOME=XXXX
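For example, on a machine with the JDK installed under /usr/lib/jvm, the added line might look like the following; the path is only an assumed example, so point it at your actual JDK install location:

# in sbin/spark-config.sh (the JDK path below is an assumed example)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64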

2. If you hit an HDFS write-permission error such as:

org.apache.hadoop.security.AccessControlException

Solution: add the following property to hdfs-site.xml to disable permission checking:

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
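Note that the NameNode only reads hdfs-site.xml at startup, so HDFS needs a restart for the change to take effect. Also, disabling dfs.permissions (dfs.permissions.enabled in newer Hadoop 2.x naming) turns off permission checking for the whole cluster; a narrower alternative is to grant the submitting user write access to the directories it needs. A rough sketch follows, assuming the Spark jobs run as a user named bigdata (the user name and paths are assumptions, not from the original article):

# Restart HDFS so the hdfs-site.xml change takes effect
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# Alternative to disabling permissions: give the job user its own HDFS home
# (the user name "bigdata" is an assumed example)
hdfs dfs -mkdir -p /user/bigdata
hdfs dfs -chown bigdata:supergroup /user/bigdata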

Original article: https://www.cnblogs.com/alexzhang92/p/10799478.html
