Various Exceptions Encountered During Hadoop Installation and Their Solutions (1)

Exception 1:

2014-03-13 11:10:23,665 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:24,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:25,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:26,669 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:27,670 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:28,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:29,672 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:30,674 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:31,675 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:32,676 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

2014-03-13 11:10:32,677 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Linux-hadoop-38/10.10.208.38:9000

Solution:

1. ping Linux-hadoop-38 succeeds, but telnet Linux-hadoop-38 9000 does not, which means a firewall is up and blocking the NameNode's RPC port.

2. On the Linux-hadoop-38 host, stop the firewall with /etc/init.d/iptables stop, which prints:

iptables: Flushing firewall rules: [ OK ]

iptables: Setting chains to policy ACCEPT: filter [ OK ]

iptables: Unloading modules: [ OK ]

3. Restart Hadoop. (A consolidated sketch of these commands follows below.)
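A minimal sketch of the diagnosis and fix, using this article's hostname and port (substitute your own NameNode address; the chkconfig step is an addition here, not from the original, to keep iptables off across reboots):

ping Linux-hadoop-38            # does the host respond at all?
telnet Linux-hadoop-38 9000     # is the NameNode RPC port reachable? (failed here)
/etc/init.d/iptables stop       # stop the firewall on the NameNode host
chkconfig iptables off          # optional: keep it disabled after a reboot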

Exception 2:

2014-03-13 11:26:30,788 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1257313099-10.10.208.38-1394679083528 (storage id DS-743638901-127.0.0.1-50010-1394616048958) service to Linux-hadoop-38/10.10.208.38:9000

java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-8e201022-6faa-440a-b61c-290e4ccfb006; datanode clusterID = clustername

at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)

at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)

at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)

at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:916)

at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:887)

at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:309)

at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)

at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)

at java.lang.Thread.run(Thread.java:662)

Solution:

1. hdfs-site.xml configures dfs.namenode.name.dir. On the master, that directory contains a current folder with a VERSION file whose contents are:

#Thu Mar 13 10:51:23 CST 2014

namespaceID=1615021223

clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006

cTime=0

storageType=NAME_NODE

blockpoolID=BP-1257313099-10.10.208.38-1394679083528

layoutVersion=-40

2. core-site.xml configures hadoop.tmp.dir. On the slave, that directory contains dfs/data/current, which also holds a VERSION file with the following contents:

#Wed Mar 12 17:23:04 CST 2014

storageID=DS-414973036-10.10.208.54-50010-1394616184818

clusterID=clustername

cTime=0

storageType=DATA_NODE

layoutVersion=-40

3. The cause is obvious at a glance: the two clusterID values do not match. Delete or correct the wrong value on the slave (a sketch follows below), restart, and done!
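A minimal sketch of the fix, assuming the paths from this article's configuration (the clusterID value is copied from the master's VERSION file shown above):

# On the slave, overwrite the stale clusterID with the NameNode's value:
sed -i 's/^clusterID=.*/clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006/' /usr/local/hadoop/tmp/dfs/data/current/VERSION
# Then restart the DataNode:
sbin/hadoop-daemon.sh start datanode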

Reference: http://www.linuxidc.com/Linux/2014-03/98598.htm

Exception 3:

2014-03-13 12:34:46,828 FATAL org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize mapreduce_shuffle

java.lang.RuntimeException: No class defiend for mapreduce_shuffle

at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)

at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)

at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)

at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)

2014-03-13 12:34:46,830 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager

java.lang.RuntimeException: No class defiend for mapreduce_shuffle

at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)

at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)

at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)

at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)

at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)

2014-03-13 12:34:46,846 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: ResourceCalculatorPlugin is unavailable on this system. org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is disabled.

Solution:

1. The incorrect configuration in yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

2. Change it to:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
</property>

3. Restart the service. (See the version note below.)
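Note: the correct value depends on the Hadoop version. On the early 2.x line this article appears to use, the service name is mapreduce.shuffle and is normally paired with an explicit class mapping such as the one below; on Hadoop 2.2 and later the name changed to mapreduce_shuffle (dots are no longer allowed in service names), so do not apply this fix blindly on newer releases.

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>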

Warning:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Solution:

See http://www.linuxidc.com/Linux/2014-03/98599.htm

Exception 4:

14/03/13 17:25:41 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl library

java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path

at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)

at java.lang.Runtime.loadLibrary0(Runtime.java:823)

at java.lang.System.loadLibrary(System.java:1028)

at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)

at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67)

at com.hadoop.compression.lzo.LzoIndexer.<init>(LzoIndexer.java:36)

at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:134)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

14/03/13 17:25:41 ERROR lzo.LzoCodec: Cannot load native-lzo without native-hadoop

14/03/13 17:25:43 INFO lzo.LzoIndexer: [INDEX] LZO Indexing file /test2.lzo, size 0.00 GB...

Exception in thread "main" java.lang.RuntimeException: native-lzo library not available

at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:91)

at com.hadoop.compression.lzo.LzoIndex.createIndex(LzoIndex.java:222)

at com.hadoop.compression.lzo.LzoIndexer.indexSingleFile(LzoIndexer.java:117)

at com.hadoop.compression.lzo.LzoIndexer.indexInternal(LzoIndexer.java:98)

at com.hadoop.compression.lzo.LzoIndexer.index(LzoIndexer.java:52)

at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:137)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Solution: clearly, the native-lzo library is missing.

Compile and install lzo; see http://www.linuxidc.com/Linux/2014-03/98601.htm
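A rough sketch of the build, assuming lzo 2.06 and a hadoop-lzo source checkout (the version and download URL are assumptions; the linked article is the authoritative procedure):

# Build and install the lzo C library:
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
tar -zxf lzo-2.06.tar.gz && cd lzo-2.06
./configure --enable-shared && make && make install
# Then build hadoop-lzo and copy its native libraries (libgplcompression.*)
# into $HADOOP_HOME/lib/native so they end up on java.library.path.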

Exception 5:

14/03/17 10:23:59 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1394702706596_0003

java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.

at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:134)

at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:174)

at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:58)

at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:276)

at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:468)

at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:485)

at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:369)

at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)

at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)

at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)

at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)

at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)

at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)

at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found

at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1680)

at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:127)

... 26 more

Temporary solution:

Copy /usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar to /usr/local/jdk/lib, then reboot Linux.
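A more conventional alternative (a sketch, not what was done here) is to put the jar on Hadoop's classpath rather than the JDK's, for example in hadoop-env.sh:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar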

Exception 6:

14/03/17 10:35:03 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/hadoop/.staging/job_1395023531587_0001. Name node is in safe mode.

The reported blocks 0 needs additional 12 blocks to reach the threshold 0.9990 of total blocks 12. Safe mode will be turned off automatically.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:2905)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:2872)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2859)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:642)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:408)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44968)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/hadoop/.staging/job_1395023531587_0001. Name node is in safe mode.

The reported blocks 0 needs additional 12 blocks to reach the threshold 0.9990 of total blocks 12. Safe mode will be turned off automatically.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:2905)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:2872)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2859)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:642)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:408)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44968)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)

at org.apache.hadoop.ipc.Client.call(Client.java:1238)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)

at $Proxy9.delete(Unknown Source)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)

at $Proxy9.delete(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:408)

at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1487)

at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:355)

at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:418)

at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)

at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)

at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)

at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)

at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)

at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)

at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Solution:

This is clearly safe mode.

The cause: when the machine was rebooted, the firewall came back up, which kept the DataNodes from connecting to the NameNode and reporting their blocks, so the NameNode stayed in safe mode. Close the firewall, restart Hadoop, and everything is OK! (Safe-mode commands are sketched below.)
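Safe-mode status can be checked, and forced off once the underlying cause is fixed, with dfsadmin (leaving safe mode while blocks are still missing only hides the problem):

hdfs dfsadmin -safemode get     # print the current safe-mode state
hdfs dfsadmin -safemode leave   # leave safe mode manually if it will not exit on its own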

Exception 7:

2014-03-20 10:35:10,447 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times

2014-03-20 10:35:10,450 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

2014-03-20 10:35:10,450 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

2014-03-20 10:35:10,450 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000

2014-03-20 10:35:10,476 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/hdfs/name/in_use.lock acquired by nodename [email protected]

2014-03-20 10:35:10,479 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...

2014-03-20 10:35:10,480 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.

2014-03-20 10:35:10,480 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.

2014-03-20 10:35:10,480 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join

java.io.IOException: NameNode is not formatted.

at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:217)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:728)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:521)

at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)

at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)

at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)

at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)

at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)

at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)

2014-03-20 10:35:10,484 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

2014-03-20 10:35:10,501 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Solution:

Run hadoop namenode -format, not hdfs namenode -format.
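For reference, the command as given here. Note that on later Hadoop releases the documented form is hdfs namenode -format, and hadoop namenode -format prints a deprecation warning, so which invocation works can depend on the version and environment:

bin/hadoop namenode -format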

Starting the JobHistoryServer:

sbin/mr-jobhistory-daemon.sh start historyserver
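The daemon is stopped the same way; its web UI listens on port 19888 by default (mapreduce.jobhistory.webapp.address):

sbin/mr-jobhistory-daemon.sh stop historyserver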
