Exiting Hadoop Safe Mode

When starting Hive, I found that HDFS was already in safe mode (there are plenty of explanations of safe mode online), so nothing could be written to it!

Logging initialized using configuration in jar:file:/apps/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/a6844814-495e-45f6-a665-ff006311cdc1. Name node is in safe mode.
The reported blocks 0 needs additional 22 blocks to reach the threshold 0.9990 of total blocks 22.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1366)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4258)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4233)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/a6844814-495e-45f6-a665-ff006311cdc1. Name node is in safe mode.
The reported blocks 0 needs additional 22 blocks to reach the threshold 0.9990 of total blocks 22.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1366)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4258)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4233)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

    at org.apache.hadoop.ipc.Client.call(Client.java:1469)
    at org.apache.hadoop.ipc.Client.call(Client.java:1400)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy19.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy20.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2742)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2713)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:574)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 8 more

So I used the following command to force HDFS out of safe mode:

hadoop dfsadmin -safemode leave
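
On Hadoop 2.x the hadoop dfsadmin form is deprecated in favor of hdfs dfsadmin; both run the same DFSAdmin tool. A minimal sketch of checking the state before forcing it:

hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode leave   # force the NameNode out of safe mode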

But the same error came right back. What was I to do? Re-reading the log gives the answer: it reports 0 live DataNodes and 0 reported blocks, so the 0.9990 block threshold can never be reached, and forcing the NameNode out of safe mode accomplishes nothing until the DataNodes themselves come up and report their blocks.
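
To verify that this is the situation, a sketch using standard commands (the exact report format varies by Hadoop version):

hdfs dfsadmin -report    # shows live/dead DataNode counts and reported block capacity
jps                      # run on each worker node; a DataNode process should be listed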


Related articles on exiting Hadoop safe mode

Exiting Hadoop safe mode: Name node is in safe mode

18/01/12 09:04:34 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes. rm: Cannot delete /spark/data/netflow/201801120325.txt. Name node is in safe mode. Hadoop is in safe mode, so you need to leave safe mode first, generally as follows…
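
Presumably the command cut off above is the same leave command used earlier; a sketch of the recovery flow for the failing rm (file path taken from the snippet):

hdfs dfsadmin -safemode leave                          # take HDFS out of safe mode
hadoop fs -rm /spark/data/netflow/201801120325.txt     # retry the delete that failed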

Self Q&A: what is Hadoop really up to in safe mode

I originally meant to collect and organize this from the web myself, but found that Wu Chao's write-up was exactly right, no more and no less, so I reposted it directly (the images had broken, so I substituted my own): http://www.superwu.cn/2013/08/23/548/ When a Hadoop cluster starts, it enters safe mode (safeMode), runs in it for a while, and then exits automatically. So what is the system doing during safe mode? On startup the cluster first enters safe mode, and while there it checks the integrity of the data blocks. Suppose the replica count we configured (the dfs.replication parameter) is 5; then on the DataNodes…
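
The thresholds behind this check are ordinary HDFS settings, and they line up with the numbers in the error above ("threshold 0.9990", "minimum number 0"). A sketch of inspecting them on a live cluster (the property names below are the Hadoop 2.x ones):

hdfs getconf -confKey dfs.replication                        # configured replica count
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct    # fraction of blocks that must be reported, default 0.999
hdfs getconf -confKey dfs.namenode.safemode.min.datanodes    # minimum live DataNodes required, default 0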

Turning off Hadoop safe mode

hadoop dfsadmin -safemode leave turns off Hadoop safe mode.

Hadoop's safe mode

In safe mode no creates, deletes, or modifications are allowed, only reads. To check whether Hadoop is in safe mode, run: hadoop dfsadmin -safemode get. To enter safe mode, run: hadoop dfsadmin -safemode enter. To leave safe mode, run: hadoop dfsadmin -safemode leave
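
For illustration, a typical session might look like this (a sketch; the exact output string depends on the Hadoop version):

$ hadoop dfsadmin -safemode get
Safe mode is ON
$ hadoop dfsadmin -safemode leave
Safe mode is OFF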

Hive's hiveserver2 mode won't start; Hadoop turns out to be stuck in safe mode

Hive's hiveserver2 mode would not start, and it turned out Hadoop had been sitting in safe mode. Command overview: hdfs dfsadmin -safemode get checks the safe-mode status, hdfs dfsadmin -safemode enter enters safe mode, hdfs dfsadmin -safemode leave leaves safe mode. Use hadoop fsck to locate damaged or lost files: hadoop fsck Usage: DFSck <path> [-move | -delete | -openforwrite] [-fi…
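
A sketch of the fsck calls one might actually run to find the missing blocks (standard flags, but check the usage text on your version):

hdfs fsck / -files -blocks -locations   # list every file with its blocks and their locations
hdfs fsck / -list-corruptfileblocks     # list only the corrupt or missing blocks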

Study notes: Hadoop safe mode and directory snapshots

Safe mode: 1. When the NameNode starts, it merges the fsimage and edits into a new fsimage and opens a new edit log. 2. While in safe mode, clients can only read. 3. Check whether the NameNode is in safe mode: hdfs dfsadmin -safemode get (check safe mode), hdfs dfsadmin -safemode enter (enter safe mode), hdfs dfsadmin -safemode leave (leave safe mode), hdfs dfsadmin -safemode wait (wait for safe mode to end)
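
The wait variant is the one worth remembering for scripts: it blocks until the NameNode exits safe mode on its own instead of forcing it out. A minimal sketch (the hive command is just an illustrative follow-up step):

#!/bin/sh
hdfs dfsadmin -safemode wait      # blocks until safe mode is OFF
hive -e "show databases;"         # now safe to run jobs that write to HDFS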

Troubleshooting errors from a hadoop+hbase+zookeeper+spark+phoenix rollout

Troubleshooting log: Fix: it may have been caused by renaming the machine; edit hosts, add the hostname and IP, then try it again! Fix: when hadoop-common-2.2.0.jar is pulled in for secondary development, for example to read and write HDFS files, the first run fails with: java.io.IOException: No FileSystem for scheme: hdfs at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
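
The usual cause of "No FileSystem for scheme: hdfs" is that only hadoop-common is on the classpath while the hadoop-hdfs jar, which provides the hdfs:// filesystem, is missing. A sketch of a quick check from the shell:

hadoop classpath | tr ':' '\n' | grep hadoop-hdfs    # should print at least one hadoop-hdfs jar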

Util.FSUtils: Waiting for dfs to exit safe mode

Several times, after starting Hadoop and HBase, jps already shows an HMaster process, but entering the HBase shell and running any command produces this error: ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times. Going into the logs directory and checking the master log shows the following repeating: 2013-04-13 17:13:17,374 INFO org.apache.hadoop.hbase.util.FSU…
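
When the master log loops on that FSUtils "Waiting for dfs to exit safe mode" message, the blocker is HDFS safe mode again. A sketch of the checks (the log path is an assumption; adjust to your install):

hdfs dfsadmin -safemode get                        # confirm HDFS is still in safe mode
tail -f $HBASE_HOME/logs/hbase-*-master-*.log      # assumed log location; watch for the FSUtils line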

hive-1.2.1 installation steps

1. Hive installation and configuration. Prerequisites: hadoop-2.4.1 and hbase-1.0.0 are already installed. 2. Download the Hive package. Hive can be downloaded from the Apache site; hive-1.2.1 was chosen here. Run: wget http://www-us.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz then unpack it into the directory where Hadoop lives, /opt. Unpack: tar -zvxf apache-hive-1.2.1-bin.…
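
The tar command above is cut off; judging from the archive fetched by the wget line, the remaining steps would presumably be something like this (the HIVE_HOME path is an assumption):

tar -zvxf apache-hive-1.2.1-bin.tar.gz -C /opt    # archive name taken from the wget line above
export HIVE_HOME=/opt/apache-hive-1.2.1-bin       # assumed install location
export PATH=$PATH:$HIVE_HOME/bin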