[Exception] org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 1 missing files; e.g.:

1 Detailed Exception

org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 1 missing files; e.g.: /wm1/link/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/003993.sst
        at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:181)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:245)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:562)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:609)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 1 missing files; e.g.: /wm1/link/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/003993.sst
        at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
        at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
        at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
        at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.openDatabase(NMLeveldbStateStoreService.java:950)
        at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:937)
        at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:210)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        ... 5 more
2020-01-06 10:14:24,136 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at ****.****
************************************************************/


Inspecting the suspect directory /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state (the NodeManager's LevelDB recovery state store, kept under yarn.nodemanager.recovery.dir) turned up these files: 005615.sst, 005638.log, 005640.log, CURRENT, LOCK, and MANIFEST-004397. I removed all of them and restarted the NodeManager, which then came up successfully. In hindsight, the likely cause is this: while this NodeManager was down, I had added a new NodeManager to the cluster, increasing the role count; when the failed NodeManager started, it tried to recover from its stored state, the validation against the database found a mismatch, and startup failed. Hence the deletion of the files under the directory above. Note that wiping the state store discards all recovery information, so the NodeManager starts fresh rather than recovering the containers it had been tracking.
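Before wiping the store outright, LevelDB's built-in repair is worth attempting, since it can often rebuild the MANIFEST from whatever .sst/.log files survive. Below is a minimal sketch using leveldbjni, the same JNI binding the NodeManager itself loads; the class name NmStateStoreRepair and the hard-coded fallback path are illustrative assumptions, and the NodeManager must be stopped first so the LOCK file is released.

import java.io.File;
import java.io.IOException;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

// Minimal sketch: try LevelDB's repair on the NM state store before deleting it.
// Stop the NodeManager first; the class name and default path are illustrative.
public class NmStateStoreRepair {
    public static void main(String[] args) throws IOException {
        // ${yarn.nodemanager.recovery.dir}/yarn-nm-state -- adjust per cluster
        File dbDir = new File(args.length > 0 ? args[0]
                : "/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state");

        Options options = new Options();
        options.createIfMissing(false);

        // repair() rebuilds the MANIFEST from the surviving .sst/.log files;
        // records that lived only in the missing .sst file stay lost.
        JniDBFactory.factory.repair(dbDir, options);

        // Verify the store now opens cleanly -- the step that failed at startup.
        try (DB db = JniDBFactory.factory.open(dbDir, options)) {
            System.out.println("state store opened OK after repair");
        }
    }
}

If repair still cannot produce a consistent database, clearing the directory as described above remains the fallback: the NodeManager then starts with an empty state store, at the cost of losing recovery information for the containers it was managing.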

Original source: https://www.cnblogs.com/QuestionsZhang/p/12182374.html
