[Exception] org.apache.hadoop.hdfs.server.common.InconsistentFSStateException

1 Exception details

2019-05-30 07:53:45,204 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Maximum size of an xattr: 16384
2019-05-30 07:53:45,204 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /mnt/software/hadoop-2.6.0-cdh5.16.1/data/tmp/dfs/name does not exist
2019-05-30 07:53:45,223 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /mnt/software/hadoop-2.6.0-cdh5.16.1/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
2019-05-30 07:53:45,272 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2019-05-30 07:53:45,273 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2019-05-30 07:53:45,274 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2019-05-30 07:53:45,274 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2019-05-30 07:53:45,274 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /mnt/software/hadoop-2.6.0-cdh5.16.1/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
2019-05-30 07:53:45,275 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2019-05-30 07:53:45,279 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

2 Cause

  core-site.xml and hdfs-site.xml were configured with inconsistent hadoop.tmp.dir directories, so the NameNode kept looking for a storage directory that was never created. By default, dfs.namenode.name.dir resolves to file://${hadoop.tmp.dir}/dfs/name, which is exactly the missing path reported in the stack trace above. The fix is to delete the hadoop.tmp.dir entry from core-site.xml and configure the storage directories in one place, hdfs-site.xml.
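  For reference, the conflicting core-site.xml entry would look something like the following. The original post does not show it; the value here is inferred from the missing path in the stack trace, since the default dfs.namenode.name.dir appends /dfs/name to hadoop.tmp.dir:

<!-- core-site.xml: remove this entry so the storage dirs come only from hdfs-site.xml -->
<!-- (value inferred from the error path /mnt/.../data/tmp/dfs/name, not shown in the post) -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/software/hadoop-2.6.0-cdh5.16.1/data/tmp</value>
</property>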

  The directory settings in hdfs-site.xml:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///mnt/software/hadoop-2.6.0-cdh5.16.1/data/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///mnt/software/hadoop-2.6.0-cdh5.16.1/data/dfs/data</value>
</property>
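
  Note that the new dfs.namenode.name.dir (data/dfs/name) starts out empty, so the NameNode must be formatted before it will start; otherwise it fails with the same InconsistentFSStateException. A minimal sketch of the steps, assuming the install lives at the path above (formatting erases any existing HDFS metadata, so only do this on a fresh or disposable cluster):

export HADOOP_HOME=/mnt/software/hadoop-2.6.0-cdh5.16.1
# Initialize the new dfs.namenode.name.dir (destroys any existing metadata!)
$HADOOP_HOME/bin/hdfs namenode -format
# Restart HDFS; the NameNode should now load its fsimage cleanly
$HADOOP_HOME/sbin/start-dfs.sh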

  

Original article: https://www.cnblogs.com/QuestionsZhang/p/10952101.html
