NodeManager fails to start
A problem showed up during startup: on slave1, slave2, and slave3 only the DataNode process started; the NodeManager did not:
[spark@slave1 hadoop-2.9.0]$ jps
2497 Jps
2052 DataNode
[spark@slave2 hadoop-2.9.0]$ jps
2497 Jps
2052 DataNode
[spark@slave3 hadoop-2.9.0]$ jps
2497 Jps
2052 DataNode
The cause of this error can be found in each slave's NodeManager log, for example with the loop sketched below.
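To pull the recent errors from all three slaves at once, a small loop over ssh works; this is a minimal sketch, assuming passwordless SSH from the master and the same /opt/hadoop-2.9.0 install and log path shown above:

# Hypothetical helper loop: grep recent NodeManager errors on every slave
for host in slave1 slave2 slave3; do
    echo "===== ${host} ====="
    ssh "${host}" "grep ERROR /opt/hadoop-2.9.0/logs/yarn-*-nodemanager-${host}.log | tail -n 20"
done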
Checking /opt/hadoop-2.9.0/logs/yarn-spark-nodemanager-slave1.log on the slave1 VM, the error message is:
2018-06-30 08:58:06,824 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager, Registration of NodeManager failed, Message from ResourceManager: NodeManager from slave1 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:374)
        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:252)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
        at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:454)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:837)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:897)
Resolving the problem:
The root cause: YARN's minimum container allocation (yarn.scheduler.minimum-allocation-mb) defaults to 1024 MB, but my yarn-site.xml gave the NodeManager only 512 MB of resource memory, so the node could not satisfy the minimum allocation and was told to shut down.
[spark@slave1 hadoop-2.9.0]$ grep "HEAP" /opt/hadoop-2.9.0/etc/hadoop/yarn-env.sh
JAVA_HEAP_MAX=-Xmx1000m
# For setting YARN specific HEAP sizes please use this
# YARN_HEAPSIZE=1000
if [ "$YARN_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_RESOURCEMANAGER_HEAPSIZE=1000
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_TIMELINESERVER_HEAPSIZE=1000
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_NODEMANAGER_HEAPSIZE=1000
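Note that the 1000 MB heap values above are daemon JVM sizes, not what the registration check compares against; the 1024 MB threshold is the scheduler's minimum container allocation. On memory-constrained VMs, lowering that threshold in yarn-site.xml is an alternative to raising the node memory. A sketch of that alternative (not the fix applied below), assuming the default value was in effect:

<property>
    <!-- Smallest container the scheduler will grant; defaults to 1024 MB.
         Lowering it lets a 512 MB NodeManager register, at the cost of smaller containers. -->
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
</property>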
Solution: modify the following configuration item in yarn-site.xml:
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>
Note: yarn.nodemanager.resource.memory-mb was previously set to 512; change it to 2048 or larger. In a real cluster it will probably need to be on the order of 2 GB to 8 GB.
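The changed yarn-site.xml has to be present on every slave before the restart below takes effect. A minimal sketch for pushing it out, assuming passwordless SSH and the same install path on all hosts:

# Copy the updated yarn-site.xml to each slave (assumes identical paths everywhere)
for host in slave1 slave2 slave3; do
    scp /opt/hadoop-2.9.0/etc/hadoop/yarn-site.xml \
        "${host}:/opt/hadoop-2.9.0/etc/hadoop/"
done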
Restart Hadoop and verify on the slave nodes that the NodeManager now starts:
[spark@slave1 hadoop-2.9.0]$ jps
2624 DataNode
2731 NodeManager
2875 Jps
[spark@slave1 hadoop-2.9.0]$
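Registration can also be confirmed from the ResourceManager side instead of host by host; a sketch, assuming the command is run on the master node:

# List the NodeManagers currently registered with the ResourceManager;
# slave1, slave2 and slave3 should all show up as RUNNING.
/opt/hadoop-2.9.0/bin/yarn node -list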
Original post: https://www.cnblogs.com/yy3b2007com/p/9247708.html