In the previous section we saw that running hadoop namenode -format actually executes
/root/hadoop-2.7.0-bin/bin/hdfs namenode -format
Let's now walk through that script.
---
bin=`which $0`
bin=`dirname ${bin}`
bin=`cd "$bin" > /dev/null; pwd`
This prints:
bin=/root/hadoop-2.7.0-bin/bin
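A side note on this three-line idiom: it resolves the directory the script itself lives in, no matter where it is invoked from. Here is a minimal sketch you can try yourself (the /tmp/demo path is made up for illustration):

#!/bin/bash
# save as /tmp/demo/where.sh, chmod +x, then run it via its absolute
# path (or put it on PATH) from any working directory
bin=`which $0`                    # path used to invoke this script
bin=`dirname ${bin}`              # strip the script name, keep the directory
bin=`cd "$bin" > /dev/null; pwd`  # canonicalize to an absolute path
echo "bin=$bin"                   # prints bin=/tmp/demo regardless of $PWD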
---
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
This prints:
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec
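Note that the .. is kept literally in the variable; nothing has normalized the path yet. You can verify that it still resolves to the real libexec directory (assuming the same install layout):

# cd + pwd collapses the ".." the same way the bin= idiom above does
echo "$(cd "$DEFAULT_LIBEXEC_DIR" && pwd)"   # /root/hadoop-2.7.0-bin/libexec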
---
cygwin=false
case "$(uname)" in
CYGWIN*) cygwin=true;;
esac
We are on Linux, so this Cygwin branch never fires; skip it.
---
Next, the script sources another script:
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
In our case the file that gets sourced is
/root/hadoop-2.7.0-bin/libexec/hdfs-config.sh
That script in turn just calls yet another script. Which one? I'll leave that for the reader to explore :)
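Two bash idioms do the work here: ${VAR:-default} expands to the variable's value if it is set and non-empty, and to the default otherwise; and . (dot) sources a file in the current shell so its variable assignments persist. A small self-contained demonstration of the default expansion:

unset HADOOP_LIBEXEC_DIR
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/libexec

# unset, so the default applies
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
echo "$HADOOP_LIBEXEC_DIR"   # /root/hadoop-2.7.0-bin/libexec

# a value supplied by the user survives the same expansion
HADOOP_LIBEXEC_DIR=/opt/custom-libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
echo "$HADOOP_LIBEXEC_DIR"   # /opt/custom-libexec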
--- Back to the hdfs script
function print_usage(){
  echo "Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND"
  echo "       where COMMAND is one of:"
  echo "  dfs                  run a filesystem command on the file systems supported in Hadoop."
  echo "  classpath            prints the classpath"
  echo "  namenode -format     format the DFS filesystem"
  echo "  secondarynamenode    run the DFS secondary namenode"
  echo "  namenode             run the DFS namenode"
  echo "  journalnode          run the DFS journalnode"
  echo "  zkfc                 run the ZK Failover Controller daemon"
  echo "  datanode             run a DFS datanode"
  echo "  dfsadmin             run a DFS admin client"
  echo "  haadmin              run a DFS HA admin client"
  echo "  fsck                 run a DFS filesystem checking utility"
  echo "  balancer             run a cluster balancing utility"
  echo "  jmxget               get JMX exported values from NameNode or DataNode."
  echo "  mover                run a utility to move block replicas across"
  echo "                       storage types"
  echo "  oiv                  apply the offline fsimage viewer to an fsimage"
  echo "  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage"
  echo "  oev                  apply the offline edits viewer to an edits file"
  echo "  fetchdt              fetch a delegation token from the NameNode"
  echo "  getconf              get config values from configuration"
  echo "  groups               get the groups which users belong to"
  echo "  snapshotDiff         diff two snapshots of a directory or diff the"
  echo "                       current directory contents with a snapshot"
  echo "  lsSnapshottableDir   list all snapshottable dirs owned by the current user"
  echo "                       Use -help to see options"
  echo "  portmap              run a portmap service"
  echo "  nfs3                 run an NFS version 3 gateway"
  echo "  cacheadmin           configure the HDFS cache"
  echo "  crypto               configure HDFS encryption zones"
  echo "  storagepolicies      list/get/set block storage policies"
  echo "  version              print the version"
  echo ""
  echo "Most commands print help when invoked w/o parameters."
  # There are also debug commands, but they don't show up in this listing.
}

if [ $# = 0 ]; then
  print_usage
  exit
fi
Nothing complicated here: print_usage is just a help function, and if the script is invoked with no arguments it prints the usage and exits.
---
Now we reach the crucial part: deciding which command to run.
if [ "$COMMAND" = "namenode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
At this point the options variable holds:
HADOOP_OPTS=-Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
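For context, the real script is a long if/elif chain with one branch per COMMAND, each branch picking the Java class to launch. Below is a stripped-down sketch of that dispatch pattern; only a few of the many branches are shown, and details of the real script (such as the secure-datanode handling) are omitted:

COMMAND=$1
shift

if [ "$COMMAND" = "namenode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
elif [ "$COMMAND" = "datanode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS"
elif [ "$COMMAND" = "dfsadmin" ] ; then
  CLASS=org.apache.hadoop.hdfs.tools.DFSAdmin
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
else
  # anything unrecognized is treated as a fully qualified class name
  CLASS="$COMMAND"
fi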
---
The next block is Cygwin-specific again, so we skip it.
---
export CLASSPATH=$CLASSPATH
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
Plain assignments. One detail worth noting: -Dhadoop.security.logger now appears twice on the eventual java command line, and when the same system property is passed more than once the JVM keeps the last occurrence, so this INFO,NullAppender default overrides the earlier INFO,RFAS unless HADOOP_SECURITY_LOGGER is set.
---
Next comes an if-else statement; in our case it is the final branch that actually runs:
else
  # run it
  exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
fi
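Two details here are easy to miss. First, exec replaces the shell process with the JVM rather than forking a child, so no extra shell lingers around the Java process. Second, -Dproc_$COMMAND is a dummy system property whose only job is to make the process recognizable in ps output. A toy illustration of exec's replace-don't-fork behavior:

#!/bin/bash
echo "shell pid: $$"
exec sleep 60          # the shell is replaced; sleep runs under the same pid
echo "never reached"   # on success, exec does not return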
The moment of truth has arrived. Printing the command that actually gets executed gives:
/usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.namenode.NameNode -format
Nice. The mystery is finally unveiled: formatting HDFS boils down to launching the org.apache.hadoop.hdfs.server.namenode.NameNode class with the -format argument.
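If you want to reproduce this on your own machine, bash's -x flag traces every expanded line as the script runs, so you can watch all the variables above being built up (the path assumes the same install layout):

bash -x /root/hadoop-2.7.0-bin/bin/hdfs namenode -format
# near the end of the trace you will see the '+ exec ... java ...' line shown above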
In the next section, we'll start analyzing the NameNode source code.