1. Environment preparation:
Install the CentOS 6.5 operating system.
Download the Hadoop 2.7 tarball.
Download the JDK 1.8 tarball.
2. Edit /etc/hosts and set up SSH trust:
Add the following entries to /etc/hosts:
192.168.1.61 host61
192.168.1.62 host62
192.168.1.63 host63
Set up passwordless SSH trust between all the servers.
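The SSH trust setup can be sketched as follows. This is one common way, not the only one; it assumes the hostnames from the /etc/hosts entries above and is run as the hadoop user on every node:

```shell
# Run as the hadoop user on each node: generate a passphrase-less key
# and push the public key to every host (including the local one).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
for h in host61 host62 host63; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@"$h"
done
```

Afterwards, `ssh host62` from host61 (and every other pair) should log in without a password prompt; the start scripts in step 7 rely on this.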
3. Add the hadoop user, unpack the tarballs, and set environment variables:
useradd hadoop
passwd hadoop
tar -zxvf hadoop-2.7.1.tar.gz
mv hadoop-2.7.1 /usr/local
tar -zxvf jdk-8u60-linux-x64.tar.gz
mv jdk1.8.0_60 /usr/local
cd /usr/local
ln -s hadoop-2.7.1 hadoop
chown -R hadoop:hadoop hadoop-2.7.1
ln -s jdk1.8.0_60 jdk
chown -R root:root jdk1.8.0_60
echo 'export JAVA_HOME=/usr/local/jdk' >/etc/profile.d/java.sh
echo 'export PATH=/usr/local/jdk/bin:$PATH' >>/etc/profile.d/java.sh
4. Edit the Hadoop configuration files:
1) Edit hadoop-env.sh:
cd /usr/local/hadoop/etc/hadoop
sed -i 's%export JAVA_HOME=${JAVA_HOME}%export JAVA_HOME=/usr/local/jdk%g' hadoop-env.sh
2) Edit core-site.xml as follows (fs.defaultFS supersedes the deprecated fs.default.name used in older guides):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://host61:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
</configuration>
3) Edit hdfs-site.xml. Note that with only two DataNodes (host62 and host63), a replication factor of 3 cannot actually be met and HDFS will report blocks as under-replicated:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
4) Edit mapred-site.xml (Hadoop 2.7 ships only mapred-site.xml.template, so copy it first: cp mapred-site.xml.template mapred-site.xml). Because YARN is started in step 7, MapReduce must be pointed at YARN; the old MRv1 property mapred.job.tracker has no effect under YARN:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
5) Configure the masters file:
host61
6) Configure the slaves file:
host62
host63
5. Configure host62 and host63 in the same way.
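Rather than repeating steps 1–4 by hand, the prepared files can be pushed from host61; a sketch, assuming root can ssh to the other nodes and the same /usr/local layout is wanted everywhere:

```shell
# Sketch: copy the unpacked software and config from host61 to the other nodes.
for h in host62 host63; do
  scp -r /usr/local/hadoop-2.7.1 /usr/local/jdk1.8.0_60 root@"$h":/usr/local/
  scp /etc/hosts /etc/profile.d/java.sh root@"$h":/etc/   # host entries + JAVA_HOME/PATH
  ssh root@"$h" 'useradd hadoop; cd /usr/local &&
    ln -s hadoop-2.7.1 hadoop && ln -s jdk1.8.0_60 jdk &&
    chown -R hadoop:hadoop hadoop-2.7.1'
done
```

Remember to set the hadoop user's password on each node afterwards (passwd hadoop over ssh), since useradd leaves the account locked.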
6. Format the distributed filesystem:
/usr/local/hadoop/bin/hdfs namenode -format
7. Start Hadoop:
1)/usr/local/hadoop/sbin/start-dfs.sh
2)/usr/local/hadoop/sbin/start-yarn.sh
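Once both scripts finish, the web UIs offer a quick sanity check; in Hadoop 2.x the NameNode UI listens on port 50070 and the ResourceManager UI on port 8088 by default:

```shell
# Probe the default web UI ports on the master (a sketch; adjust if you
# changed the defaults).
curl -sf http://host61:50070/ >/dev/null && echo "HDFS NameNode UI up" || echo "NameNode UI not reachable"
curl -sf http://host61:8088/  >/dev/null && echo "YARN ResourceManager UI up" || echo "ResourceManager UI not reachable"
```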
8. Check the daemons:
[root@host61 sbin]# jps
4532 ResourceManager
4197 NameNode
4793 Jps
4364 SecondaryNameNode
[root@host62 ~]# jps
32052 DataNode
32133 NodeManager
32265 Jps
[root@host63 local]# jps
6802 NodeManager
6963 Jps
6717 DataNode
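The jps checks above can be wrapped in a small helper so the verification is scriptable; check_daemons is a hypothetical name, not part of Hadoop:

```shell
# Hypothetical helper: given jps output and a list of daemon names,
# report whether every expected daemon appears in the listing.
check_daemons() {
  out="$1"; shift
  for d in "$@"; do
    case "$out" in
      *"$d"*) ;;                        # daemon present in the jps listing
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "all daemons present"
}

# On host61 you would run:
#   check_daemons "$(jps)" NameNode SecondaryNameNode ResourceManager
# On host62/host63:
#   check_daemons "$(jps)" DataNode NodeManager
```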