Common Hadoop errors and fixes:
1. DataXceiver error processing WRITE_BLOCK operation
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 192-168-11-58:50010:DataXceiver error processing WRITE_BLOCK operation src:
1) Increase the per-process open-file limit
vi /etc/security/limits.conf
Add:
# End of file
* - nofile 1000000
* - nproc 1000000
2) Increase the number of data-transfer threads
vi hdfs-site.xml
Add:
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
  <description>
    Specifies the maximum number of threads to use for transferring data
    in and out of the DN.
  </description>
</property>
Copy the file to the other nodes, then restart the DataNode on each of them.
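The copy-and-restart step can be sketched as follows. The hostnames, config path, and HADOOP_HOME are placeholders, adjust them to your cluster; the daemon script shown is the Hadoop 2.x style (on 3.x use `hdfs --daemon stop/start datanode` instead):

```shell
# Push the updated hdfs-site.xml to every DataNode (hostnames are examples)
for node in hadoop-62 hadoop-63 hadoop-64; do
  scp /etc/hadoop/conf/hdfs-site.xml "$node":/etc/hadoop/conf/
done

# Then, on each DataNode, restart the daemon so the new thread limit takes effect
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
```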
2. JobHistoryServer fails to start; the log shows:
FATAL org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting JobHistoryServer
org.apache.hadoop.yarn.YarnException: Error creating done directory: [hdfs://192.168.11.61:8020/tmp/hadoop-yarn/staging/history/done]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.init(HistoryFileManager.java:424)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistory.init(JobHistory.java:87)
    at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.init(JobHistoryServer.java:87)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:145)
Caused by: java.net.NoRouteToHostException: No Route to Host from hadoop-62/192.168.11.62 to hadoop-61:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
Fix: disable the firewall on every node.
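A minimal sketch of disabling the firewall, assuming CentOS/RHEL-style nodes (pick the variant matching your OS version):

```shell
# CentOS/RHEL 7+ (firewalld)
systemctl stop firewalld
systemctl disable firewalld

# CentOS/RHEL 6 (iptables service)
service iptables stop
chkconfig iptables off
```

In production, opening only the specific Hadoop ports (e.g. 8020 for the NameNode RPC) is preferable to disabling the firewall entirely.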
3. The cluster memory shown on the YARN web UI (port 8088) is only 16 GB, while each node actually has 64 GB of physical memory.
Since Hadoop 2.X, the NodeManager's total available physical memory defaults to 8 GB (8192 MB); this default is fixed and is not derived from the actual hardware. To change it, add the yarn.nodemanager.resource.memory-mb property to yarn-site.xml and set it to the amount of physical memory you want YARN to use. Note that the value cannot be changed dynamically at runtime: after editing the configuration file you must restart the NodeManager service. Also, because the default is 8192 MB, YARN will assume that much memory even on machines with less than 8 GB, so this value should always be configured explicitly. Apache has been working on making this parameter dynamically adjustable, which may improve in later releases.
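For example, to raise the limit on a 64 GB node you could add the following to yarn-site.xml on each NodeManager and restart the service. The 57344 MB value is an illustrative choice, not a requirement: it advertises 56 GB to YARN and leaves roughly 8 GB for the OS and the DataNode daemon.

```xml
<!-- yarn-site.xml: total physical memory (MB) that YARN may allocate
     to containers on this node. 57344 MB = 56 GB, an example value
     that reserves ~8 GB for the OS and DataNode. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value>
</property>
```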
Date: 2024-10-14 20:46:48