After starting Hadoop, jps shows no NameNode
This is usually caused by formatting the NameNode two or more times. There are two ways to fix it:
1. Delete all of the DataNode's data.
2. Edit the namespaceID of each DataNode (in /home/hdfs/data/current/VERSION) or the NameNode's namespaceID (in /home/hdfs/name/current/VERSION),
so that the two match.
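Step 2 above can be sketched as a small shell check. The paths are the ones from this post (adjust dfs.name.dir / dfs.data.dir to your own configuration), and the fix-up line assumes GNU sed:

```shell
# Check whether the NameNode and DataNode namespaceIDs agree.
NN_VERSION=/home/hdfs/name/current/VERSION
DN_VERSION=/home/hdfs/data/current/VERSION

nn_id=$(grep '^namespaceID=' "$NN_VERSION" 2>/dev/null | cut -d= -f2)
dn_id=$(grep '^namespaceID=' "$DN_VERSION" 2>/dev/null | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
    echo "namespaceIDs match: $nn_id"
else
    echo "mismatch: NameNode=$nn_id DataNode=$dn_id"
    # Option 2 above: overwrite the DataNode's ID with the NameNode's (GNU sed).
    # sed -i "s/^namespaceID=.*/namespaceID=$nn_id/" "$DN_VERSION"
fi
```

On a cluster, the grep/compare part would be run on each DataNode's VERSION file in turn.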
In this case, however, the two IDs were already identical.
So I checked the NameNode log under /usr/local/hadoop/logs and found the error: java.io.FileNotFoundException: /home/hadoop/hdfs/name/current/VERSION (Permission denied)
A web search confirmed this was a permissions problem on /home/hadoop/hdfs/name/current/VERSION, so:
[email protected]:/usr/local/hadoop/bin$ sudo chmod -R 777 /home/hadoop/hdfs
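The blanket chmod -R 777 gets things working, but it makes the directory world-writable; all the NameNode actually needs is read/write access for the user the daemon runs as. A sketch of a tighter fix (the `hadoop` owner here is an assumption based on the paths; substitute your daemon user):

```shell
# Hypothetical tighter fix: give the daemon user ownership instead of 777.
# sudo chown -R hadoop:hadoop /home/hadoop/hdfs   # daemon user assumed to be "hadoop"
# sudo chmod -R 755 /home/hadoop/hdfs             # full access for owner, read-only for others

# The permission bits themselves, demonstrated on a scratch directory:
scratch=$(mktemp -d)
touch "$scratch/VERSION"
chmod -R 755 "$scratch"
ls -l "$scratch/VERSION"   # rwxr-xr-x: only the owner can modify it
```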
Format again: [email protected]:/usr/local/hadoop/bin$ hadoop namenode -format
Start: [email protected]:/usr/local/hadoop/bin$ start-all.sh
jps: [email protected]:/usr/local/hadoop/bin$ jps
6692 JobTracker
6259 NameNode
6601 SecondaryNameNode
6810 Jps
Since the DataNode and TaskTracker did not show up, I ran the start commands again, only to find that they were in fact already running (they run on the slave nodes 192.168.1.3 and 192.168.1.4, so jps on the master does not list them):
jobtracker running as process 6692. Stop it first.
192.168.1.3: tasktracker running as process 4959. Stop it first.
192.168.1.4: tasktracker running as process 5042. Stop it first.
[email protected]:/usr/local/hadoop/bin$ jps
6692 JobTracker
6259 NameNode
6601 SecondaryNameNode
7391 Jps
[email protected]:/usr/local/hadoop/bin$ start-dfs.sh
Warning: $HADOOP_HOME is deprecated.
namenode running as process 6259. Stop it first.
192.168.1.3: datanode running as process 4757. Stop it first.
192.168.1.4: datanode running as process 4828. Stop it first.
192.168.1.2: secondarynamenode running as process 6601. Stop it first.
http://www.cnblogs.com/linjiqin/archive/2013/03/07/2948078.html
I came across this comment on Baidu Zhidao:
The warning itself tells you not to use this script, and to use start-dfs.sh and start-mapred.sh instead. That suggests even the script's author or maintainer suspects it has problems... If you are interested, you could patch it yourself...
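For reference, the replacement sequence that comment points to looks like this in Hadoop 1.x (shown as a fragment; these scripts only make sense against a configured cluster):

```shell
# Deprecated all-in-one launcher used earlier in this post:
#   start-all.sh
# Recommended replacements -- bring up HDFS and MapReduce separately:
#   start-dfs.sh      # NameNode, DataNodes, SecondaryNameNode
#   start-mapred.sh   # JobTracker, TaskTrackers
# Matching shutdown order (MapReduce first, then HDFS):
#   stop-mapred.sh
#   stop-dfs.sh
```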