Several Hadoop errors

I'm new to Hadoop; here is a record of the problems I ran into today.

1 util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

  This is only a warning and does not affect normal operation. To get rid of it, recompile the Hadoop native source on your own platform and replace libhadoop.so.1.0.0 under lib/native with the result.
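Before recompiling, it is worth confirming what is actually under lib/native. A minimal sketch that lists the native Hadoop libraries, assuming the standard $HADOOP_HOME layout (adapt the path if your install differs):

```python
import os

def find_native_libs(hadoop_home):
    """List libhadoop* files under HADOOP_HOME/lib/native, if any.

    An empty result means no native library is present, so Hadoop falls
    back to the builtin-java classes and prints the warning above.
    """
    native_dir = os.path.join(hadoop_home, "lib", "native")
    if not os.path.isdir(native_dir):
        return []
    return sorted(f for f in os.listdir(native_dir) if f.startswith("libhadoop"))
```

`hadoop checknative -a` reports the same information (plus whether each library actually loads on this platform).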

2 ipc.Client: Retrying connect to ResourceManager at master:8032

16/07/27 16:03:14 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.16.60:8032
16/07/27 16:03:14 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.16.60:8032
16/07/27 16:03:15 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:16 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:17 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:18 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:19 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:20 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:21 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:22 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:23 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/07/27 16:03:24 INFO ipc.Client: Retrying connect to server: master/192.168.16.60:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

The client cannot reach master:8032. First check whether YARN is actually running; if it is, set the ResourceManager addresses explicitly in yarn-site.xml:

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
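Before changing the config it also helps to confirm the symptom directly: is anything listening on master:8032 at all? A quick TCP probe (host and port are the ones from the log above; adapt as needed):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("master", 8032) returning False reproduces the retry loop above
```

If this returns False from the node running the job, the problem is the ResourceManager or the network, not the client config.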

Hadoop debugging came in handy during this process; the methods are recorded below for future reference, taken from: http://blog.csdn.net/mango_song/article/details/8502394

Method 1: edit $HADOOP_CONF_DIR/log4j.properties and set hadoop.root.logger=ALL,console
      Put that file on the project classpath, e.g. conf/log4j.properties
      Then load it in code: PropertyConfigurator.configure("conf/log4j.properties");
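A minimal conf/log4j.properties along those lines might look like the following; the appender and pattern layout are common log4j 1.x defaults, not something the original spells out:

```properties
hadoop.root.logger=ALL,console
log4j.rootLogger=${hadoop.root.logger}
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
```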

Method 2: enable debug output: export HADOOP_ROOT_LOGGER=DEBUG,console

restore the default: export HADOOP_ROOT_LOGGER=INFO,console

  Viewing and changing Hadoop log levels at runtime
  Hadoop log levels can be changed either with the hadoop command or through the daemon web interface.
  Command format:

    hadoop daemonlog -getlevel <host:port> <name>

    hadoop daemonlog -setlevel <host:port> <name> <level>

    <name> is a class name, e.g. TaskTracker
    <level> is a log level, e.g. DEBUG or INFO
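The web route mentioned above is the daemon's /logLevel servlet, which the daemonlog command talks to as well. A small helper for building those URLs; the host:port and logger name in the example are illustrative, not taken from the cluster above:

```python
from urllib.parse import urlencode

def loglevel_url(host_port, logger, level=None):
    """Build the URL for a Hadoop daemon's /logLevel servlet.

    Without level, the URL queries the current log level;
    with level, it sets it (like hadoop daemonlog -setlevel).
    """
    params = {"log": logger}
    if level is not None:
        params["level"] = level
    return "http://%s/logLevel?%s" % (host_port, urlencode(params))

# e.g. loglevel_url("master:8088", "org.apache.hadoop.ipc.Server", "DEBUG")
```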

Date: 2024-11-14 02:43:56
