1. The hadoop-2.4.1 package provided by Apache was compiled on a 32-bit OS. Because Hadoop includes some native C++ libraries, installing hadoop-2.4.1 on a 64-bit OS requires recompiling it on a 64-bit system.
2. This walkthrough uses 2.7.1; hadoop-2.7.1 is a stable release.
3. The nodes will include NameNode HA, JobTracker HA, and a ZooKeeper HA cluster (to be covered in a later update).
Architecture:

Role   | IP | Hostname
MASTER |    | Mycat
SLAVE  |    | Haproxy
SLAVE  |    | Haproxy_slave

Hadoop version: 2.7.1
JDK version: 1.7.0_55
## /etc/hosts must be identical on all three nodes
- Set up SSH mutual trust between the nodes: ssh-keygen -t rsa
### If keys already exist, it is recommended to delete them and generate new ones:
# cd
# cd .ssh
# rm -rf ./*
1. Generate the authorized_keys file:
cat id_rsa.pub >> authorized_keys
2. Append the contents of id_rsa.pub from the other nodes to the authorized_keys file on the first node.
3. Then copy the first node's authorized_keys to the two SLAVE nodes:
# scp authorized_keys [email protected]:~/.ssh/
# scp authorized_keys [email protected]:~/.ssh/
4. Set permissions to 700 on .ssh and 600 on authorized_keys:
# chmod 700 ~/.ssh
# chmod 600 ~/.ssh/authorized_keys
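To confirm the trust works, log in from the first node to each of the others; no password prompt should appear (the hostnames here are the ones from the table above):
# ssh haproxy hostname
# ssh haproxy_slave hostname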
- Set the JDK environment variables: vim /etc/profile
# Note the jdk and hadoop package paths; the configuration is identical on all 3 nodes
export JAVA_HOME=/usr/local/jdk
export HADOOP_INSTALL=/usr/local/hadoop
Apply the changes immediately: source /etc/profile
# ln -s /usr/local/jdk/bin/* /usr/bin/
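An alternative to symlinking the JDK binaries into /usr/bin is to put them on the PATH in /etc/profile; a minimal sketch using the same install paths as above:
export JAVA_HOME=/usr/local/jdk
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$PATH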
Test that the JDK works:
java -version
java version "1.7.0_09-icedtea"
OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)
- Add the hadoop user
# useradd hadoop
# Before configuring, create the following directories on the local filesystem
# (/home/hadoop/tmp, /home/dfs/data, /home/dfs/name), the same on all 3 nodes:
mkdir -p /home/hadoop/tmp
mkdir -p /home/dfs/data
mkdir -p /home/dfs/name
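If the daemons are to run as the hadoop user rather than root, the directories need to be owned by it (a minimal sketch, assuming the paths above):
# chown -R hadoop:hadoop /home/hadoop/tmp /home/dfs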
# Hadoop's configuration files live under etc/hadoop in the installation directory. Seven files are involved, all under /usr/local/hadoop/etc/hadoop; they can be edited with gedit or any other editor.
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/yarn-env.sh
/usr/local/hadoop/etc/hadoop/slaves
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml
/usr/local/hadoop/etc/hadoop/yarn-site.xml
1. Edit hadoop-env.sh and set the JDK path (at line 25 of the file):
export JAVA_HOME=/usr/local/jdk
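To script this change instead of editing by hand, a sed one-liner like the following should work (a sketch; it simply rewrites the existing export line):
# sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh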
2. Edit core-site.xml
fs.defaultFS (formerly fs.default.name) is the NameNode URI: hdfs://hostname:port/. hadoop.tmp.dir is Hadoop's default temporary path and is best configured explicitly: if a DataNode mysteriously fails to start after adding nodes or in similar situations, deleting this tmp directory usually fixes it. However, if you delete this directory on the NameNode machine, you must re-run the NameNode format command.
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://haproxy:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/tmp</value>
<description>A base for other temporary directories</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
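To verify which NameNode URI Hadoop actually resolves, hdfs getconf can read a single key; with the configuration above it should print hdfs://haproxy:9000:
# /usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS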
3. Configure hdfs-site.xml: add the HDFS settings (NameNode/DataNode ports and directory locations).
dfs.name.dir is the local filesystem path where the NameNode persistently stores the namespace and transaction logs. If the value is a comma-separated list of directories, the name table is replicated to every directory for redundancy.
dfs.data.dir is the local filesystem path where the DataNode stores block data, also a comma-separated list; data is spread across all listed directories, which are usually on different devices.
dfs.replication is the number of replicas to keep; the default is 3, and setting it higher than the number of machines in the cluster causes errors.
Note: the name1, name2, data1, data2 directories here must not be created in advance; hadoop creates them automatically during formatting, and pre-creating them can actually cause problems.
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/dfs/name/name1</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/dfs/data/data1</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<final>true</final>
</property>
</configuration>
4. Edit mapred-site.xml
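Note that hadoop-2.7.1 ships only a template for this file; if mapred-site.xml does not exist yet, create it from the template first:
# cd /usr/local/hadoop/etc/hadoop
# cp mapred-site.xml.template mapred-site.xml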
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>haproxy:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>haproxy:19888</value>
</property>
</configuration>
5. Edit yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>haproxy:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>haproxy:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>haproxy:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>haproxy:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>haproxy:8088</value>
</property>
</configuration>
6. Configure the masters and slaves files (the master/slave node lists), as sketched below.
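The slaves file lists the DataNode hostnames, one per line; with the hosts used in this setup it would contain:
haproxy
haproxy_slave
(In hadoop 2.x the masters file is no longer read by start-dfs.sh; the SecondaryNameNode host comes from dfs.namenode.secondary.http-address, which is why it starts on 0.0.0.0 in the log below.)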
7. Format the filesystem
This step is performed on the master node.
An error came up:
./hdfs: /usr/local/jdk/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
The JDK here is a 32-bit build (ld-linux.so.2 is the 32-bit loader), so install the 32-bit C library: yum install glibc.i686
# cd /usr/local/hadoop/bin/
./hdfs namenode -format
SHUTDOWN_MSG: Shutting down NameNode at haproxy/192.168.1.107
8. Start the master node
# /usr/local/hadoop/sbin/start-dfs.sh
Problem 1:
The startup log contains:
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
This turned out to be an environment variable problem; fix it in the profile:
# vi /etc/profile (or vi ~/.bash_profile)
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Problem 2:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Testing showed:
/usr/local/hadoop/bin/hadoop fs -ls /
16/11/16 16:16:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From haproxy/192.168.1.107 to haproxy:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
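Connection refused on haproxy:9000 usually means no NameNode is listening there; a quick check with standard tools:
# jps                          ## a NameNode process should be listed
# netstat -tnlp | grep 9000    ## a java process should be listening on 9000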
Add debug output to get more detail:
$ export HADOOP_ROOT_LOGGER=DEBUG,console
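The export affects the whole shell session; to turn on debug output for a single command only, set the variable inline:
$ HADOOP_ROOT_LOGGER=DEBUG,console /usr/local/hadoop/bin/hadoop fs -ls /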
Startup log (the error lines need to be handled as described in the fixes above):
16/11/19 15:45:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [mycat]
The authenticity of host 'mycat (127.0.0.1)' can't be established.
RSA key fingerprint is 3f:44:d6:f4:31:b0:5b:ff:86:b2:5d:87:f2:d9:b8:9d.
Are you sure you want to continue connecting (yes/no)? yes
mycat: Warning: Permanently added 'mycat' (RSA) to the list of known hosts.
mycat: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-mycat.out
mycat: Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
mycat: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
haproxy: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-haproxy.out
haproxy_slave: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-haproxy_slave.out
haproxy: /usr/local/hadoop/bin/hdfs: line 304: /usr/local/jdk/bin/java: No such file or directory
haproxy: /usr/local/hadoop/bin/hdfs: line 304: exec: /usr/local/jdk/bin/java: cannot execute: No such file or directory
haproxy_slave: /usr/local/hadoop/bin/hdfs: /usr/local/jdk/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
haproxy_slave: /usr/local/hadoop/bin/hdfs: line 304: /usr/local/jdk/bin/java: Success
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 3f:44:d6:f4:31:b0:5b:ff:86:b2:5d:87:f2:d9:b8:9d.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-mycat.out
0.0.0.0: Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/19 15:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... usi
## At this point the 3-node master/standby cluster is up
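A final health check: verify the daemons and the DataNode report (dfsadmin is part of the standard distribution; with dfs.replication=2 and two slaves, two live datanodes should be reported):
# jps                                          ## NameNode on the master, DataNode on each slave
# /usr/local/hadoop/bin/hdfs dfsadmin -report  ## should show 2 live datanodes
The NameNode web UI listens on port 50070 by default, and the ResourceManager UI on haproxy:8088 once start-yarn.sh has been run as well.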