The following is a record of the steps I ran; the formatting has not been fully cleaned up.
Host       IP            Installed               Processes
hqvm-L118  172.30.0.118  jdk, hadoop             NameNode, DFSZKFailoverController (zkfc)
hqvm-L138  172.30.0.138  jdk, hadoop, zookeeper  NameNode, DFSZKFailoverController (zkfc), DataNode, NodeManager, JournalNode, QuorumPeerMain
hqvm-L144  172.30.0.144  jdk, hadoop, zookeeper  ResourceManager, DataNode, NodeManager, JournalNode, QuorumPeerMain
hqvm-L174  172.30.0.174  jdk, hadoop, zookeeper  ResourceManager, DataNode, NodeManager, JournalNode, QuorumPeerMain
--Check the current operating system
cat /proc/version
Linux version 2.6.32-431.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 03:15:09 UTC 2013
--Create a hadoop user on every node
useradd hadoop
passwd hadoop
usermod -g appl hadoop
--Configure Java on every node
Switch to the hadoop user
vi .bash_profile
JAVA_HOME="/opt/appl/wildfly/jdk1.7.0_72"
HADOOP_HOME="/home/hadoop/hadoop-2.6.0"
JRE_HOME=$JAVA_HOME/jre
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH:$HOME/bin
export JRE_HOME
export JAVA_HOME
export PATH
Log out and log back in, then run java -version to check that the configuration took effect.
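If the environment is picked up correctly, java -version should print something close to the following (exact build numbers may differ):
java version "1.7.0_72"
Java(TM) SE Runtime Environment (build ...)
Java HotSpot(TM) 64-Bit Server VM (build ..., mixed mode)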
--Configure hostnames
On every node, as root, run vi /etc/hosts and add the following host entries:
172.30.0.118 hqvm-L118
172.30.0.138 hqvm-L138
172.30.0.144 hqvm-L144
172.30.0.174 hqvm-L174
--(Restarting the network is not needed just for /etc/hosts changes; on RHEL/CentOS 6 the command would be service network restart rather than /etc/init.d/networking restart.)
--Configure SSH
First, on the local machine 172.30.0.118:
cd
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
vi /etc/ssh/sshd_config
Uncomment the following lines (editing sshd_config requires root):
RSAAuthentication yes                      # enable RSA authentication
PubkeyAuthentication yes                   # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys    # path to the public key file (the file generated above)
service sshd restart
Distribute 118's public key.
On 172.30.0.118, send the id_dsa.pub public key to 138:
scp id_dsa.pub [email protected]:~/
On 138:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
Once done, ssh 172.30.0.138 from 118 succeeds without a password prompt.
Repeat the steps above for whichever SSH logins you need; I configured all four nodes so that every pair can reach each other, as in the sketch below.
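A minimal sketch of one way to wire up all four hosts, assuming ssh-copy-id is available and each node's hadoop user has already generated its own key pair; run it as the hadoop user on every node:
# append this node's public key to authorized_keys on every host (including itself)
for h in hqvm-L118 hqvm-L138 hqvm-L144 hqvm-L174; do
    ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@$h
done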
--Copy hadoop-2.6.0.tar.gz and zookeeper-3.4.6.tar.gz to /home/hadoop
--Install ZooKeeper
On hqvm-L138:
tar -zxvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6/ zookeeper
cd zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Set dataDir=/home/hadoop/zookeeper/zkData
Append at the end:
server.1=hqvm-L138:2888:3888
server.2=hqvm-L144:2888:3888
server.3=hqvm-L174:2888:3888
Save and exit.
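For reference, the finished zoo.cfg should look roughly like this (the timing values below are the zoo_sample.cfg defaults):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/zkData
clientPort=2181
server.1=hqvm-L138:2888:3888
server.2=hqvm-L144:2888:3888
server.3=hqvm-L174:2888:3888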
mkdir /home/hadoop/zookeeper/zkData
touch /home/hadoop/zookeeper/zkData/myid
echo 1 > /home/hadoop/zookeeper/zkData/myid
scp -r /home/hadoop/zookeeper/ hqvm-L144:/home/hadoop/
scp -r /home/hadoop/zookeeper/ hqvm-L174:/home/hadoop/
On 144: echo 2 > /home/hadoop/zookeeper/zkData/myid
On 174: echo 3 > /home/hadoop/zookeeper/zkData/myid
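A quick sanity check that each node's myid matches its server.N entry (run from any host using the passwordless SSH configured earlier):
for h in hqvm-L138 hqvm-L144 hqvm-L174; do
    echo -n "$h: "; ssh $h cat /home/hadoop/zookeeper/zkData/myid
done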
--Install Hadoop
On 118:
tar -zxvf hadoop-2.6.0.tar.gz
vi .bash_profile
Add:
HADOOP_HOME=/home/hadoop/hadoop-2.6.0
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$PATH:$HOME/bin
export HADOOP_HOME
Edit the Hadoop configuration files
cd hadoop-2.6.0/etc/hadoop/
vi hadoop-env.sh
export JAVA_HOME=/opt/appl/wildfly/jdk1.7.0_72
vi core-site.xml
Add:
<configuration>
<!-- Specify the HDFS nameservice as masters -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://masters</value>
</property>
<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-2.6.0/tmp</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hqvm-L138:2181,hqvm-L144:2181,hqvm-L174:2181</value>
</property>
</configuration>
vi hdfs-site.xml
<configuration>
<!-- The HDFS nameservice is masters; must be consistent with core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>masters</value>
</property>
<!-- The masters nameservice has two NameNodes: hqvm-L118 and hqvm-L138 -->
<property>
<name>dfs.ha.namenodes.masters</name>
<value>hqvm-L118,hqvm-L138</value>
</property>
<!-- RPC address of hqvm-L118 -->
<property>
<name>dfs.namenode.rpc-address.masters.hqvm-L118</name>
<value>hqvm-L118:9000</value>
</property>
<!-- HTTP address of hqvm-L118 -->
<property>
<name>dfs.namenode.http-address.masters.hqvm-L118</name>
<value>hqvm-L118:50070</value>
</property>
<!-- RPC address of hqvm-L138 -->
<property>
<name>dfs.namenode.rpc-address.masters.hqvm-L138</name>
<value>hqvm-L138:9000</value>
</property>
<!-- HTTP address of hqvm-L138 -->
<property>
<name>dfs.namenode.http-address.masters.hqvm-L138</name>
<value>hqvm-L138:50070</value>
</property>
<!-- Location on the JournalNodes where the NameNode shared edits (metadata) are stored -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hqvm-L138:8485;hqvm-L144:8485;hqvm-L174:8485/masters</value>
</property>
<!-- Local directory where each JournalNode stores its data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/hadoop-2.6.0/journal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover proxy provider used by clients to find the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.masters</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, i.e. one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH; the key path must match the key actually generated above (id_dsa in this walkthrough) -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_dsa</value>
</property>
<!-- sshfence connection timeout, in milliseconds -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
<!-- Run MapReduce on the YARN framework -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
vi yarn-site.xml
<configuration>
<!-- Enable ResourceManager high availability -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster id for RM HA -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>RM_HA_ID</value>
</property>
<!-- Logical ids of the ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostname of each ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hqvm-L144</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hqvm-L174</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- ZooKeeper quorum used by the ResourceManagers -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hqvm-L138:2181,hqvm-L144:2181,hqvm-L174:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
vi slaves
hqvm-L138
hqvm-L144
hqvm-L174
scp -r /home/hadoop/hadoop-2.6.0/ hqvm-L138:/home/hadoop/
scp -r /home/hadoop/hadoop-2.6.0/ hqvm-L144:/home/hadoop/
scp -r /home/hadoop/hadoop-2.6.0/ hqvm-L174:/home/hadoop/
--Start the ZooKeeper cluster: on each of hqvm-L138, hqvm-L144, and hqvm-L174
cd /home/hadoop/zookeeper/bin
./zkServer.sh start
./zkServer.sh status    # check the status
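Once all three are up, one node should report itself as leader and the other two as follower; the status output looks roughly like this (paths and wording may vary slightly by version):
JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower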
--Start the JournalNodes: on each of hqvm-L138, hqvm-L144, and hqvm-L174
cd /home/hadoop/hadoop-2.6.0/
sbin/hadoop-daemon.sh start journalnode
Run jps to verify: hqvm-L138, hqvm-L144, and hqvm-L174 should each now show an additional JournalNode process.
--Format HDFS
On 118:
hdfs namenode -format
scp -r /home/hadoop/hadoop-2.6.0/tmp/ hqvm-L138:/home/hadoop/hadoop-2.6.0/
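Copying the tmp directory seeds the standby NameNode (hqvm-L138) with the freshly formatted metadata. An alternative is the standard bootstrap command, run on hqvm-L138; note it needs the formatted NameNode on hqvm-L118 to be running (e.g. sbin/hadoop-daemon.sh start namenode) so it can fetch the image from it:
hdfs namenode -bootstrapStandby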
--Format ZK (initialize the HA state in ZooKeeper), on 118:
hdfs zkfc -formatZK
--Start HDFS, on 118:
sbin/start-dfs.sh
Use jps to check that all processes have started on each node.
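Based on the role table at the top, jps should now show roughly the following per node (ResourceManager and NodeManager only appear after YARN is started in the next step):
hqvm-L118: NameNode, DFSZKFailoverController
hqvm-L138: NameNode, DFSZKFailoverController, DataNode, JournalNode, QuorumPeerMain
hqvm-L144: DataNode, JournalNode, QuorumPeerMain
hqvm-L174: DataNode, JournalNode, QuorumPeerMain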
--Start YARN
On hqvm-L144. The NameNode and ResourceManager are kept on different machines for performance reasons: both consume a lot of resources, so they are split across hosts.
/home/hadoop/hadoop-2.6.0/sbin/start-yarn.sh
On hqvm-L174, start the second ResourceManager: /home/hadoop/hadoop-2.6.0/sbin/yarn-daemon.sh start resourcemanager
The setup is now complete and can be reached from a browser.
Active NameNode:
http://172.30.0.118:50070
Standby NameNode:
http://172.30.0.138:50070
--Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
Then kill the active NameNode; find its pid with jps or ps -ef | grep hadoop.
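A minimal sketch of that step, assuming hqvm-L118 is currently the active NameNode:
# on hqvm-L118
jps | grep NameNode        # note the pid in the first column
kill -9 <NameNode pid>     # simulate a hard failure of the active NameNode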
http://172.30.0.118:50070/ is now unreachable, and http://172.30.0.138:50070/ has become active.
hadoop fs -ls / still works.
Manually restart the NameNode that was killed on 118:
/home/hadoop/hadoop-2.6.0/sbin/hadoop-daemon.sh start namenode
Now http://172.30.0.118:50070/ is reachable again and shows as standby.
--Verify YARN:
Run the WordCount program from the examples shipped with Hadoop:
hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /profile /out
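To inspect the result (part-r-00000 is the usual reducer output file name, assuming the default single reducer):
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000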
The Hadoop HA cluster setup is complete. ResourceManager web UIs:
http://172.30.0.144:8088
http://172.30.0.174:8088