Fixing java.io.IOException: Can't get Master Kerberos principal for use as renewer

Recently, scheduled jobs on the cluster, as well as hadoop distcp commands run against the cluster, have all been failing with the following error:

java.io.IOException: Can't get Master Kerberos principal for use as renewer
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:133)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:191)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:429)
    at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:91)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:181)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:493)

After digging into the source code:

The failing code reads the YARN configuration. Before obtaining HDFS delegation tokens, TokenCache looks up the token renewer via Master.getMasterPrincipal(conf), which is derived from the yarn.resourcemanager.principal property in yarn-site.xml. If the YARN configuration file cannot be read, the principal resolves to null and this exception is thrown.
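
As a quick way to see the failure mode, here is a minimal diagnostic sketch. RenewerCheck is a made-up class name, not part of Hadoop, and the behavior it probes is paraphrased from the Hadoop 2.x sources, so details may differ slightly in other versions:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.Master;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical one-off diagnostic, not part of Hadoop.
public class RenewerCheck {
    public static void main(String[] args) throws IOException {
        // YarnConfiguration registers yarn-site.xml as a default resource,
        // mirroring what a YARN job's Configuration would see.
        Configuration conf = new YarnConfiguration();

        // The raw property the renewer principal is derived from.
        System.out.println("yarn.resourcemanager.principal = "
                + conf.get("yarn.resourcemanager.principal"));

        // The same lookup TokenCache performs; if this prints null, jobs
        // fail with "Can't get Master Kerberos principal for use as renewer".
        System.out.println("renewer principal = "
                + Master.getMasterPrincipal(conf));
    }
}

Compile it with javac -cp "$(hadoop classpath)" RenewerCheck.java and run it with java -cp "$(hadoop classpath):." RenewerCheck, so that it sees the same classpath, including the effect of HADOOP_CLASSPATH, as the failing jobs.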

Solution:

The problem was with the machine's HADOOP_CLASSPATH environment variable: a colleague had added an incorrect HADOOP_CLASSPATH, which kept the YARN configuration from being picked up. After removing that environment variable from the machine, everything worked again.
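
To confirm this on an affected machine, compare echo "$HADOOP_CLASSPATH" and the output of hadoop classpath with a machine where the jobs succeed; the directory containing yarn-site.xml (typically the Hadoop configuration directory) must appear on the classpath. As a quick test, run unset HADOOP_CLASSPATH in the current shell and rerun hadoop distcp, then remove the bad export from wherever it was defined (a shell profile, /etc/profile.d, or similar).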


Original post: https://www.cnblogs.com/zuoql/p/12195548.html
