HDFS_1.2.1_0: ./bin/hadoop namenode -format

Back to HDFS 1.2.1: the first step is to run hadoop namenode -format.

---

After running ./hadoop namenode -format, the core line that the script ultimately executes is:

exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "[email protected]"

The printed output is:

/usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/root/hadoop-1.2.1/libexec/../logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-1.2.1/libexec/.. -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Dhadoop.security.logger=INFO,DRFAS -Djava.library.path=/root/hadoop-1.2.1/libexec/../lib/native/Linux-i386-32 -Dhadoop.policy.file=hadoop-policy.xml -classpath /root/hadoop-1.2.1/libexec/../conf:/usr/java/jdk1.8.0_45/lib/tools.jar:/root/hadoop-1.2.1/libexec/..:/root/hadoop-1.2.1/libexec/../hadoop-core-1.2.1.jar:/root/hadoop-1.2.1/libexec/../lib/asm-3.2.jar:/root/hadoop-1.2.1/libexec/../lib/aspectjrt-1.6.11.jar:/root/hadoop-1.2.1/libexec/../lib/aspectjtools-1.6.11.jar:/root/hadoop-1.2.1/libexec/../lib/commons-beanutils-1.7.0.jar:/root/hadoop-1.2.1/libexec/../lib/commons-beanutils-core-1.8.0.jar:/root/hadoop-1.2.1/libexec/../lib/commons-cli-1.2.jar:/root/hadoop-1.2.1/libexec/../lib/commons-codec-1.4.jar:/root/hadoop-1.2.1/libexec/../lib/commons-collections-3.2.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-configuration-1.6.jar:/root/hadoop-1.2.1/libexec/../lib/commons-daemon-1.0.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-digester-1.8.jar:/root/hadoop-1.2.1/libexec/../lib/commons-el-1.0.jar:/root/hadoop-1.2.1/libexec/../lib/commons-httpclient-3.0.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-io-2.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-lang-2.4.jar:/root/hadoop-1.2.1/libexec/../lib/commons-logging-1.1.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-logging-api-1.0.4.jar:/root/hadoop-1.2.1/libexec/../lib/commons-math-2.1.jar:/root/hadoop-1.2.1/libexec/../lib/commons-net-3.1.jar:/root/hadoop-1.2.1/libexec/../lib/core-3.1.1.jar:/root/hadoop-1.2.1/libexec/../lib/hadoop-capacity-scheduler-1.2.1.jar:/root/hadoop-1.2.1/libexec/../lib/hadoop-fairscheduler-1.2.1.jar:/root/hadoop-1.2.1/libexec/../lib/hadoop-thriftfs-1.2.1.jar:/root/hadoop-1.2.1/libexec/../lib/hsqldb-1.8.0.10.jar:/root/hadoop-1.2.1/libexec/../lib/jackson-core-asl-1.8.8.jar:/root/hadoop-1.2.1/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/root/hadoop-1.2.1/libexec/../lib/jasper-compiler-5.5.12.jar:/root/hadoop-1.2.1/libexec/../lib/jasper-runtime-5.5.12.jar:/root/hadoop-1.2.1/libexec/../lib/jdeb-0.8.jar:/root/hadoop-1.2.1/libexec/../lib/jersey-core-1.8.jar:/root/hadoop-1.2.1/libexec/../lib/jersey-json-1.8.jar:/root/hadoop-1.2.1/libexec/../lib/jersey-server-1.8.jar:/root/hadoop-1.2.1/libexec/../lib/jets3t-0.6.1.jar:/root/hadoop-1.2.1/libexec/../lib/jetty-6.1.26.jar:/root/hadoop-1.2.1/libexec/../lib/jetty-util-6.1.26.jar:/root/hadoop-1.2.1/libexec/../lib/jsch-0.1.42.jar:/root/hadoop-1.2.1/libexec/../lib/junit-4.5.jar:/root/hadoop-1.2.1/libexec/../lib/kfs-0.2.2.jar:/root/hadoop-1.2.1/libexec/../lib/log4j-1.2.15.jar:/root/hadoop-1.2.1/libexec/../lib/mockito-all-1.8.5.jar:/root/hadoop-1.2.1/libexec/../lib/oro-2.0.8.jar:/root/hadoop-1.2.1/libexec/../lib/servlet-api-2.5-20081211.jar:/root/hadoop-1.2.1/libexec/../lib/slf4j-api-1.4.3.jar:/root/hadoop-1.2.1/libexec/../lib/slf4j-log4j12-1.4.3.jar:/root/hadoop-1.2.1/libexec/../lib/xmlenc-0.52.jar:/root/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-2.1.jar:/root/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-api-2.1.jar org.apache.hadoop.hdfs.server.namenode.NameNode -format

Stripping away the JVM options and the classpath, the essential core is:

java org.apache.hadoop.hdfs.server.namenode.NameNode -format
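
How does the script get from the subcommand namenode to this class? Earlier in bin/hadoop, the first argument is matched against the known subcommands and $CLASS is set accordingly. Roughly, the relevant branch in the 1.2.1 script looks like this (a paraphrased sketch, not a verbatim copy):

# Sketch of the command dispatch near the top of bin/hadoop:
# the first CLI argument selects the Java main class to run.
COMMAND=$1
shift
if [ "$COMMAND" = "namenode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
elif [ "$COMMAND" = "secondarynamenode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_SECONDARYNAMENODE_OPTS"
fi
# ... further elif branches handle datanode, dfsadmin, fs, and so on.

After the shift, the remaining arguments (here just -format) pass through untouched as "$@", which is why -format appears at the end of the exec line above.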

Next, let's analyze the NameNode class.
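
Before opening the source, a quick way to get an overview of the class is to dump its public members with javap against the core jar from the classpath printed above (the jar path is taken from that output):

# List the public methods and fields of the NameNode class.
javap -classpath /root/hadoop-1.2.1/hadoop-core-1.2.1.jar org.apache.hadoop.hdfs.server.namenode.NameNode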
