After a Hadoop restart, HBase fails to start with a Master exiting error: org.apache.hadoop.hbase.master.HMasterCommandLine: Master exiting

Restarting HDFS, or reformatting the NameNode, can lose data that HBase depends on, which then causes HBase to fail with errors.

Since this is a test environment, the simplest fix is to wipe the stale data and start fresh.

First, recreate HBase's root directory in HDFS and hand ownership to the hbase user:

$ sudo -u hdfs hadoop fs -mkdir /hbase
$ sudo -u hdfs hadoop fs -chown hbase /hbase

Next, clear HBase's stale state in ZooKeeper. The znode path is configured in hbase-site.xml (property zookeeper.znode.parent, /hbase by default).
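If you are not sure which znode your cluster uses, the parent znode can be read straight out of hbase-site.xml. A minimal sketch (the config path below is an assumption; adjust it for your distribution):

```shell
# Pull zookeeper.znode.parent out of hbase-site.xml; prints the default
# /hbase when the property is not set. The config path is illustrative.
conf=/etc/hbase/conf/hbase-site.xml
znode=$(grep -A1 'zookeeper.znode.parent' "$conf" 2>/dev/null |
        sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "${znode:-/hbase}"
```

This is only a quick grep-based lookup, not a full XML parse, so it assumes the `<name>` and `<value>` elements sit on adjacent lines, as they do in a typical hbase-site.xml.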

HBase was failing to start with the following error:

2016-05-11 10:01:23,124 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Master exiting

java.lang.RuntimeException: HMaster Aborted

2016-05-11 10:01:23,007 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2016-05-11 10:01:23,007 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2016-05-11 10:01:23,007 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2016-05-11 10:01:23,007 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: exiting
2016-05-11 10:01:23,007 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: exiting
2016-05-11 10:01:23,007 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.hbase.master.HMaster: Stopping infoServer
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2016-05-11 10:01:23,009 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2016-05-11 10:01:23,009 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2016-05-11 10:01:23,008 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2016-05-11 10:01:23,011 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2016-05-11 10:01:23,011 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2016-05-11 10:01:23,009 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2016-05-11 10:01:23,009 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2016-05-11 10:01:23,013 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2016-05-11 10:01:23,124 INFO org.apache.zookeeper.ZooKeeper: Session: 0x3547fc1b0e5000b closed
2016-05-11 10:01:23,124 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
2016-05-11 10:01:23,124 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2016-05-11 10:01:23,124 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:160)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2124)

Solution (adapted from http://stackoverflow.com/questions/28563167/hbase-master-not-starting-correctly):

Step 1: Stop HBase.

Step 2: Run the offline meta-repair tool to rebuild HBase's metadata from what is on HDFS:

hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

Step 3: Remove HBase's znode with the ZooKeeper CLI (note: this discards the old in-ZooKeeper state):

zkCli.sh
ls /
rmr /hbase

Step 4: Restart HBase.

The ZooKeeper cleanup from Step 3 looks like this in practice:

[root@<hostname> bin]# sh zkCli.sh
Connecting to localhost:2181
...
Welcome to ZooKeeper!
...

[zk: localhost:2181(CONNECTED) 0] ls /
[hbase, zookeeper]
[zk: localhost:2181(CONNECTED) 1] rmr hbase
Command failed: java.lang.IllegalArgumentException: Path must start with / character
[zk: localhost:2181(CONNECTED) 2] rmr /hbase
[zk: localhost:2181(CONNECTED) 3] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 4]
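After the restart in Step 4, the master can take a little while to re-register its znode in ZooKeeper. A small polling helper makes that wait scriptable; this is my own sketch, not part of HBase, and the zkCli check in the trailing comment is illustrative:

```shell
# Retry a command once per second until it succeeds or the timeout
# (in seconds) elapses. Returns 0 on success, 1 on timeout.
wait_for() {
  timeout=$1; shift
  while [ "$timeout" -gt 0 ]; do
    "$@" >/dev/null 2>&1 && return 0
    sleep 1
    timeout=$((timeout - 1))
  done
  return 1
}

# Illustrative use after start-hbase.sh: wait up to 60s for the master znode.
# wait_for 60 sh -c 'echo "ls /hbase" | zkCli.sh 2>/dev/null | grep -q master'
```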
Date: 2024-10-20 16:17:36
