Scope: this post covers migrating an existing non-HA cluster to HA.
Role of ZooKeeper: holds a shared lock that guarantees only one NameNode is active at a time.
Role of the JournalNodes: synchronize edit-log metadata between the two NameNodes.
Machine allocation:
nn1    | namenode, DFSZKFailoverController
nn2    | namenode, DFSZKFailoverController
slave1 | datanode, zookeeper, journalnode
slave2 | datanode, zookeeper, journalnode
slave3 | datanode, zookeeper, journalnode
1. Configure core-site.xml: add the ZooKeeper quorum
<property>
  <name>ha.zookeeper.quorum</name>
  <value>slave1:2181,slave2:2181,slave3:2181</value>
</property>
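The hdfs-site.xml below notes that the nameservice name must stay consistent with core-site.xml, which implies fs.defaultFS in core-site.xml points at the logical nameservice rather than a single host. A minimal sketch, assuming the nameservice is named masters as configured below:

<property>
  <!-- Logical nameservice URI; matches dfs.nameservices in hdfs-site.xml -->
  <name>fs.defaultFS</name>
  <value>hdfs://masters</value>
</property>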
2. Configure hdfs-site.xml
<!-- The HDFS nameservice, named "masters"; must stay consistent with core-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>masters</value>
</property>
<!-- The "masters" nameservice contains two NameNodes: nn1 and nn2 -->
<property>
  <name>dfs.ha.namenodes.masters</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
  <name>dfs.namenode.rpc-address.masters.nn1</name>
  <value>nn1:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
  <name>dfs.namenode.http-address.masters.nn1</name>
  <value>nn1:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
  <name>dfs.namenode.rpc-address.masters.nn2</name>
  <value>nn2:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
  <name>dfs.namenode.http-address.masters.nn2</name>
  <value>nn2:50070</value>
</property>
<!-- Where the NameNode metadata (edit log) is stored on the JournalNodes -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://slave1:8485;slave2:8485;slave3:8485/masters</value>
</property>
<!-- Local disk directory where each JournalNode stores its data -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/journal</value>
</property>
<!-- Enable automatic failover when a NameNode fails -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- Failover implementation: the proxy provider clients use to locate the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.masters</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one method per line -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<!-- The sshfence method requires passwordless SSH between the NameNodes -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/.ssh/id_rsa</value>
</property>
<!-- Timeout (ms) for the sshfence method -->
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
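Because sshfence needs passwordless SSH, the key referenced above must let nn1 and nn2 SSH into each other without a prompt. A minimal sketch of setting that up (the exact user and key path are assumptions, not from the original post):

# On nn1: generate a key pair without a passphrase, if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to the other NameNode (repeat in the other direction on nn2)
ssh-copy-id nn2
# Verify that no password prompt appears
ssh nn2 hostname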
3. Modify yarn-site.xml to enable ResourceManager HA
<configuration>
  <!-- Enable ResourceManager high availability -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Cluster id of the RM pair -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>RM_HA_ID</value>
  </property>
  <!-- Logical names of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Host of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>nn1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>nn2</value>
  </property>
  <!-- Enable RM state recovery, persisted in ZooKeeper -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- ZooKeeper ensemble address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>slave1:2181,slave2:2181,slave3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
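One setting the post does not include, and which some Hadoop 2.x deployments need before the standby RM behaves correctly, is a per-node RM id. This is an assumption on my part, not something from the original configuration; the property itself is standard:

<!-- Set to rm1 on nn1 and rm2 on nn2; configured only on the RM hosts themselves -->
<property>
  <name>yarn.resourcemanager.ha.id</name>
  <value>rm1</value>
</property>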
4. Startup sequence (a consolidated command sketch follows this list)
(1) Start ZooKeeper on slave1, slave2, and slave3.
(2) Start the JournalNodes: hadoop-daemon.sh start journalnode
(3) Format HDFS on nn1: hdfs namenode -format; then copy the formatted metadata to the corresponding directory on nn2.
(4) Format the failover state in ZooKeeper: hdfs zkfc -formatZK
(5) Start HDFS: sbin/start-dfs.sh
(6) Start YARN: sbin/start-yarn.sh; check whether the ResourceManager came up on nn2, and if not, start it with yarn-daemon.sh start resourcemanager
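A consolidated sketch of the sequence above. Host placement follows the machine allocation table; it assumes the Hadoop and ZooKeeper bin/sbin directories are on PATH (an assumption, not stated in the post):

# (1) On slave1, slave2, slave3: start the ZooKeeper ensemble
zkServer.sh start

# (2) On slave1, slave2, slave3: start the JournalNodes
hadoop-daemon.sh start journalnode

# (3) On nn1: format HDFS. To seed nn2 with the same metadata, either copy
#     the name directory by hand (as in the post) or run
#     "hdfs namenode -bootstrapStandby" on nn2, the standard alternative.
hdfs namenode -format

# (4) On nn1: create the automatic-failover znode in ZooKeeper
hdfs zkfc -formatZK

# (5)(6) On nn1: start HDFS and YARN
sbin/start-dfs.sh
sbin/start-yarn.sh

# On nn2: start-yarn.sh does not start the standby ResourceManager remotely
yarn-daemon.sh start resourcemanager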
Verify HDFS HA:
On nn1, run kill -9 <pid of NN>; nn2 should switch to active.
Then restart the NameNode on nn1: sbin/hadoop-daemon.sh start namenode
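On either side of the kill, hdfs haadmin can report each NameNode's state (nn1 and nn2 are the ids from dfs.ha.namenodes.masters above):

hdfs haadmin -getServiceState nn1   # "active" before the kill
hdfs haadmin -getServiceState nn2   # "standby" before, "active" after failover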
Notes:
Manually switch the active NameNode: ./hdfs haadmin -transitionToActive --forcemanual nn1
Manually switch the active RM: yarn rmadmin -transitionToActive --forcemanual rm1
Known issue:
So far, RM HA does not appear to take effect; I am leaving it undebugged for now.
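To narrow this down, yarn rmadmin can report what each ResourceManager thinks its state is (rm1 and rm2 are the ids from yarn.resourcemanager.ha.rm-ids; this diagnostic is my suggestion, not from the original troubleshooting):

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

If both report standby, the per-node yarn.resourcemanager.ha.id setting sketched under step 3 is one thing worth checking.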