DataNode startup problem: FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering>

2017-04-15 21:21:15,423 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2017-04-15 21:21:15,467 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2017-04-15 21:21:15,486 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2017-04-15 21:21:15,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2017-04-15 21:21:15,521 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: mycluster
2017-04-15 21:21:15,551 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: mycluster
2017-04-15 21:21:15,559 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdp265m.test.com/192.168.56.104:53310 starting to offer service
2017-04-15 21:21:15,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdp265m2.test.com/192.168.56.107:53310 starting to offer service
2017-04-15 21:21:15,585 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-04-15 21:21:15,586 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-04-15 21:21:15,903 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hdp/hadoop/data/in_use.lock acquired by nodename 12483@hdp265s1.test.com
2017-04-15 21:21:15,904 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hdp/hadoop/data: namenode clusterID = CID-bf7ff1f1-680c-4bbf-958b-bda65fb409de; datanode clusterID = c1
2017-04-15 21:21:15,951 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hdp/hadoop/data/in_use.lock acquired by nodename 12483@hdp265s1.test.com
2017-04-15 21:21:15,952 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hdp/hadoop/data: namenode clusterID = CID-bf7ff1f1-680c-4bbf-958b-bda65fb409de; datanode clusterID = c1
2017-04-15 21:21:15,952 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hdp265m2.test.com/192.168.56.107:53310. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1342)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1308)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:226)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:867)
        at java.lang.Thread.run(Thread.java:745)

The cause is a clusterID mismatch: the DataNode's storage directory was formatted under a different cluster than the NameNode it is trying to register with (namenode clusterID = CID-bf7ff1f1-680c-4bbf-958b-bda65fb409de vs. datanode clusterID = c1 in the log above).

To fix it:

Delete the contents under tmp.

Read the clusterID from /home/hdp/hadoop/name/current/VERSION.

Write the same value into /home/hdp/hadoop/data/current/VERSION so the two files match, then restart the service.
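As a concrete illustration, a minimal shell sketch of the VERSION edit, assuming the directory layout from the log above and a Hadoop 2.x layout with hadoop-daemon.sh on the PATH (run it wherever both paths are visible, or copy the value by hand):

    # read the authoritative clusterID from the NameNode's VERSION file
    NN_CID=$(grep '^clusterID=' /home/hdp/hadoop/name/current/VERSION | cut -d= -f2)
    # overwrite the stale clusterID in the DataNode's VERSION file
    sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /home/hdp/hadoop/data/current/VERSION
    # restart the DataNode
    hadoop-daemon.sh start datanode

Alternatively, if the DataNode holds no data worth keeping, wiping /home/hdp/hadoop/data entirely and letting the DataNode re-register achieves the same result.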

Posted: 2024-11-03 21:38:01

Related articles for "DataNode startup problem: FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering>"

Hadoop error: FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join: java.io.IOException: There appears to be a gap in the edit log

Error: FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join: java.io.IOException: There appears to be a gap in the edit log. Cause: the NameNode metadata is corrupted and needs repair. Fix: recover the NameNode with hadoop namenode -recover, choosing c at every prompt; that usually does it. If you feel reading this blog post gave you something, ...
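For reference, a sketch of that recovery sequence as shell commands, assuming hadoop-daemon.sh is on the PATH (prompt wording varies by Hadoop version):

    # stop the NameNode, then run recovery mode on the NameNode host
    hadoop-daemon.sh stop namenode
    hadoop namenode -recover
    # answer 'c' (continue) at each prompt, then bring the NameNode back up
    hadoop-daemon.sh start namenode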

Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to IP1:8020 Invalid volume failure config value: 1

2017-02-27 16:19:44,739 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to IP1:8020 Invalid volume failure config value: 1
2017-02-27 16:19:44,740 FATAL datanode.DataNode: Initialization ...
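The snippet is cut off, but for context: this error is typically raised when dfs.datanode.failed.volumes.tolerated is set to a value that is not smaller than the number of configured data directories. A sketch of the relevant hdfs-site.xml property (the value 0, i.e. tolerate no failed volumes, is an assumption for a single-disk DataNode):

    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>0</value>
    </property>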

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Incompatible namespaceIDs

A Hadoop distributed cluster was set up on three CentOS machines. Starting the services failed, and the datanode log reports the error: ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode ...
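This is the Hadoop 0.20/1.x cousin of the clusterID mismatch above, and the fix has the same shape. A sketch, reusing the path and the namenode namespaceID from that log line:

    # align the DataNode's namespaceID with the NameNode's, then restart the DataNode
    sed -i "s/^namespaceID=.*/namespaceID=240012870/" /var/lib/hadoop-0.20/cache/hdfs/dfs/data/current/VERSION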

Summary of HDFS lease expiry exceptions (org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

Summary of HDFS lease expiry exceptions (org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException). Reposted 2014-02-22 14:40:58, 9686 views. Exception info: 13/09/11 12:12:06 INFO hdfs.DFSClient: SMALL_BUFFER_SIZE is 512 org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode ...

[Exception] org.apache.hadoop.hdfs.server.common.InconsistentFSStateException

1. Exception info:
2019-05-30 07:53:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Maximum size of an xattr: 16384
2019-05-30 07:53:45,204 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /mnt/software/hadoop-2.6.0-cdh5.16.1/data ...

SecondaryNameNode fails to back up properly: ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint

With Hadoop's default settings (hadoop 1.2.1), the secondarynamenode backed up normally, periodically copying the image file from the namenode to the SNN. But the SNN's backup period and log-file size could not be customized, so I changed the SNN settings, setting fs.checkpoint.period to 3600 s and fs.checkpoint.size to 64 MB. After adding these two parameters to core-site.xml, the SNN stopped backing up altogether. Googling later showed the configuration file was still incomplete; after completing the configuration ...
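For concreteness, the two parameters as they would appear in core-site.xml (Hadoop 1.x property names; fs.checkpoint.size is given in bytes, so 64 MB is 67108864):

    <property>
      <name>fs.checkpoint.period</name>
      <value>3600</value>
    </property>
    <property>
      <name>fs.checkpoint.size</name>
      <value>67108864</value>
    </property>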

Errors during Ambari installation: HDFS initialization error, initialization failed for block pool

Shell history from the fix (these rm -rf commands destroy the NameNode, DataNode, and JournalNode metadata, so they are only safe on a cluster that is being reformatted anyway):

    cd /apps/hadoop/hdfs/namenode/
    rm -rf current in_use.lock
    cd /apps/hadoop/hdfs/data/
    rm -rf current in_use.lock
    cd /hadoop/hdfs/journal/mycluster
    rm -rf current edits.sync in_use.lock
    cd /usr/hdp/3.1.4.0-315/hadoop/ ...

Starting Hadoop reports ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile

For some reason, when starting the cluster today, jps kept showing one standby namenode process that would not start. Checking the log, it reports failure to load the fsimage file (screenshot in the original post). The log makes it obvious that the metadata cannot be loaded. Fix: 1. Manually copy all files under XXX/dfs/name/current/ on the server hosting the active namenode to the corresponding directory on the server hosting the standby namenode. 2. Or reformat the active namenode, then copy the formatted metadata ...
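Option 1 as a one-line sketch; the hostname is a stand-in borrowed from the log at the top of this post, and XXX is the original post's own path placeholder:

    # run on the active NameNode's host, with the standby NameNode stopped
    scp -r XXX/dfs/name/current hdp265m2.test.com:XXX/dfs/name/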

Hadoop HA HDFS startup error: solving org.apache.hadoop.ipc.Client: Retrying connect to server

Recently, while setting up a Hadoop HA QJM cluster, I ran into the problem named in this article's title. There are plenty of HA posts online, but the best write-up is really the official documentation, which covers everything in detail, so HA setup is not repeated here. This article just gives a fix for the org.apache.hadoop.ipc.Client: Retrying connect to server error, because typing the error into a search engine turned up nothing that solved it. I am writing this as a note to self and as a hint for anyone hitting the same problem. 1. Problem description: HA was configured according to plan, but after startup the NameNode could not ...