1. Download the Hadoop 2.7.3 package
Version: hadoop-2.7.3.tar.gz
Download URL: www.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
Baidu Cloud download:
http://pan.baidu.com/s/1pLOyu9d (password: s0j5)
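If the machine has direct network access, the archive can also be fetched from the shell, for example with wget (same URL as above):
$ wget http://www.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz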
2. Extract the downloaded hadoop-2.7.3.tar.gz into /usr/local
$ sudo tar -xzvf hadoop-2.7.3.tar.gz -C /usr/local
This produces the directory /usr/local/hadoop-2.7.3.
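An optional quick check that the files landed where expected:
$ ls /usr/local/hadoop-2.7.3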
3. Hadoop configuration
3.1 hadoop-env.sh
# cd /usr/local/hadoop-2.7.3/etc/hadoop/
# sudo vim hadoop-env.sh
Set export JAVA_HOME=/usr/local/jdk1.8
yarn-env.sh (same change as above)
mapred-env.sh (same change as above)
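As a shortcut, the same line can be appended to all three env scripts in one go (a sketch; since the scripts are sourced, the appended assignment takes effect last):
# for f in hadoop-env.sh yarn-env.sh mapred-env.sh; do echo 'export JAVA_HOME=/usr/local/jdk1.8' | sudo tee -a $f; done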
3.2 slaves
Delete localhost
Add:
hadoop1
hadoop2
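Equivalently, the whole slaves file can be rewritten in one command (a sketch; adjust the path if your layout differs):
# sudo tee /usr/local/hadoop-2.7.3/etc/hadoop/slaves <<EOF
hadoop1
hadoop2
EOF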
3.3 core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop0:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop-2.7.3/tmp</value>
  </property>
</configuration>
(fs.default.name is the older, deprecated name of fs.defaultFS; both are accepted in 2.7.x.)
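hadoop.tmp.dir points at a directory inside the install tree; creating it ahead of time avoids permission surprises (ownership should match the user that runs Hadoop):
# sudo mkdir -p /usr/local/hadoop-2.7.3/tmp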
3.4 hdfs-site.xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.7.3/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.7.3/hdf/name</value>
    <final>true</final>
  </property>
</configuration>
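The data and name directories referenced above can likewise be created in advance (paths taken from the config):
# sudo mkdir -p /usr/local/hadoop-2.7.3/hdf/data /usr/local/hadoop-2.7.3/hdf/name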
3.5 mapred-site.xml
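The 2.7.3 tarball ships only a template for this file, so it usually needs to be created first:
# sudo cp mapred-site.xml.template mapred-site.xml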
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop0:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop0:19888</value>
  </property>
</configuration>
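Note that start-all.sh does not launch the JobHistory server configured above; it is started separately on hadoop0, typically with the bundled daemon script:
# mr-jobhistory-daemon.sh start historyserver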
3.6 yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop0:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop0:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop0:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop0:8088</value>
  </property>
</configuration>
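For MapReduce jobs to run on YARN, yarn.nodemanager.aux-services normally also has to be set to mapreduce_shuffle; if your yarn-site.xml does not already contain it, a property along these lines is typically added next to the shuffle class above:
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>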
4. Copy Hadoop to the other hosts
# sudo scp -r /usr/local/hadoop-2.7.3 hadoop1:/usr/local/
# sudo scp -r /usr/local/hadoop-2.7.3 hadoop2:/usr/local/
5. Set the Hadoop environment variables on each host
5.1 # sudo vim /etc/profile
Add the following:
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/usr/local/hadoop-2.7.3/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
5.2 Apply the changes
# source /etc/profile
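A quick way to confirm the variables took effect on each host (not part of the original steps):
# hadoop version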
6. Format the NameNode (run on the master)
# cd /usr/local/hadoop-2.7.3/bin
# hdfs namenode -format
7. Start Hadoop
# cd /usr/local/hadoop-2.7.3/sbin
# start-all.sh
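start-all.sh is kept in 2.x only for compatibility; the scripts themselves recommend starting HDFS and YARN separately, which is equivalent here:
# start-dfs.sh
# start-yarn.sh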
Running processes on master, slave1, and slave2 (original screenshots omitted).
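To verify, jps can be run on each node; with this layout the master (hadoop0) should show roughly NameNode, SecondaryNameNode, and ResourceManager, while hadoop1 and hadoop2 should show DataNode and NodeManager (exact lists may vary):
# jps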
YARN ResourceManager web UI: http://192.168.1.111:8088/cluster
Coming next: integrating ZooKeeper, HBase, etc.
http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.9/
www.apache.org/dist/hbase/1.2.4/hbase-1.2.4-bin.tar.gz