Reference resources (download):
http://pan.baidu.com/s/1ntwUij3
Video installation tutorial: hadoop安装.flv
VirtualBox VM images: hadoop.part1-part5.rar
Hadoop release: hadoop-2.2.0.tar.gz
Hadoop configuration files: hadoop_conf.tar.gz
Hadoop course materials: 炼数成金-hadoop
VM download and installation:
VirtualBox-4.3.12-93733-Win.exe
http://dlc.sun.com.edgesuite.net/virtualbox/4.3.12/VirtualBox-4.3.12-93733-Win.exe
(If it fails to start under Win7, open it in compatibility mode.)
See also: http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html
0 Pre-installation checks
System account (user / password): root / root
Login account (user / password): hadoop / hadoop
Disable the firewall and unnecessary services:
chkconfig iptables off
chkconfig ip6tables off
chkconfig postfix off
chkconfig bluetooth off
Check that sshd is enabled and that the firewall services are off:
chkconfig --list
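A quick way to verify, for example (assuming CentOS 6-style service names):
chkconfig --list sshd
chkconfig --list iptables
service iptables status # expect it to report that the firewall is not running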
Boot into text mode (runlevel 5 = GUI, 3 = text; text mode speeds up startup):
vi /etc/inittab
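In /etc/inittab, change the default runlevel line from 5 to 3 (standard SysV inittab syntax):
# change the initdefault line from runlevel 5 to 3
id:3:initdefault: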
Shut down / reboot immediately:
shutdown -h now
reboot
1 Hadoop cluster plan
NameNode
Hadoop1 192.168.1.111
DataNode
Hadoop1 192.168.1.111
Hadoop2 192.168.1.112
Hadoop3 192.168.1.113
Software versions:
Java 7u21
Hadoop 2.2.0
2 Template machine setup
Install the JDK (steps omitted) -- Java 7u21
/usr/java/jdk1.7.0_21
Install Hadoop (steps omitted) -- Hadoop 2.2.0
/app/hadoop/hadoop220
Hadoop1 192.168.1.111 08:00:27:64:15:BA System eth0
Hadoop2 192.168.1.112 08:00:27:CD:A6:29 System eth0
Hadoop3 192.168.1.113 08:00:27:AD:BF:A9 System eth0
vi /etc/hosts
192.168.1.111 hadoop1
192.168.1.112 hadoop2
192.168.1.113 hadoop3
Add the hadoop user:
groupadd -g 1000 hadoop
useradd -u 2000 -g hadoop hadoop
passwd hadoop
Edit the environment variables:
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_21
export JRE_HOME=/usr/java/jdk1.7.0_21/jre
export ANT_HOME=/app/ant192
export MAVEN_HOME=/app/maven305
export FINDBUGS_HOME=/app/findbugs202
export SCALA_HOME=/app/scala2104
export HADOOP_COMMON_HOME=/app/hadoop/hadoop220
export HADOOP_CONF_DIR=/app/hadoop/hadoop220/etc/hadoop
export YARN_CONF_DIR=/app/hadoop/hadoop220/etc/hadoop
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:${ANT_HOME}/bin:${MAVEN_HOME}/bin:${FINDBUGS_HOME}/bin:${SCALA_HOME}/bin:${HADOOP_COMMON_HOME}/bin:${HADOOP_COMMON_HOME}/sbin:$PATH
Reload the environment variables:
source /etc/profile
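A quick check that the variables took effect, e.g.:
java -version # should report java version "1.7.0_21"
hadoop version # should report Hadoop 2.2.0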
Change to the Hadoop configuration directory:
cd /app/hadoop/hadoop220/etc/hadoop/
Edit each of the following configuration files:
vi slaves
vi core-site.xml
vi hdfs-site.xml
vi yarn-env.sh
vi mapred-site.xml
vi hadoop-env.sh
vi yarn-site.xml
vi slaves
hadoop1
hadoop2
hadoop3
vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:8000</value>
</property>
</configuration>
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///app/hadoop/hadoop220/mydata/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///app/hadoop/hadoop220/mydata/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
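The name/data directories can be created up front to avoid permission problems; a sketch (run as root, then hand ownership to hadoop):
mkdir -p /app/hadoop/hadoop220/mydata/name /app/hadoop/hadoop220/mydata/data
chown -R hadoop:hadoop /app/hadoop/hadoop220/mydata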
vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
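Note: Hadoop 2.2.0 ships only a template for this file; if mapred-site.xml does not exist yet, create it from the template first:
cp mapred-site.xml.template mapred-site.xml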
vi yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_21
3 Cluster setup
In VirtualBox, clone hadoop1 to create hadoop2 and hadoop3.
Update the NIC MAC address, IP address, and hostname on hadoop2 and hadoop3:
Hadoop1 192.168.1.111 08:00:27:64:15:BA System eth0
Hadoop2 192.168.1.112 08:00:27:CD:A6:29 System eth0
Hadoop3 192.168.1.113 08:00:27:AD:BF:A9 System eth0
Update the NIC MAC address and network settings on hadoop2 and hadoop3:
vi /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:64:15:ba", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
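On the cloned machines this file keeps a stale line for hadoop1's MAC and adds a new eth1 entry; delete the old line and rename the new one to eth0. For hadoop2, with the MAC listed above, the resulting line would be:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:cd:a6:29", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"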
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=dc326328-8fb1-4e22-b8d1-f90a890e5f56
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.1.111
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
HWADDR=08:00:27:64:15:BA
LAST_CONNECT=1408499318
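On hadoop2 the same file would differ only in these fields (hadoop3 analogously); the UUID line can be removed on the clones so it does not clash:
IPADDR=192.168.1.112
HWADDR=08:00:27:CD:A6:29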
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop1
Set up /etc/hosts name resolution on hadoop1, hadoop2, and hadoop3:
vi /etc/hosts
192.168.1.111 hadoop1
192.168.1.112 hadoop2
192.168.1.113 hadoop3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Configure passwordless SSH login among the three servers.
Switch to the hadoop user's home directory and generate the hadoop user's key pair:
su - hadoop
cd ~
Generate a passphrase-less key pair; when prompted for a save path, press Enter to accept the default.
The resulting key pair, id_rsa and id_rsa.pub, is stored under /home/hadoop/.ssh by default.
ssh-keygen -t rsa
On hadoop1, append id_rsa.pub to the authorized keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Set the required permissions on authorized_keys:
chmod 600 ~/.ssh/authorized_keys
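Depending on sshd's StrictModes setting, the .ssh directory itself must also not be group/world-writable; 700 is the safe default:
chmod 700 ~/.ssh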
Set up hadoop2 and hadoop3 the same way, then append all three machines' id_rsa.pub keys to authorized_keys so that ~/.ssh/authorized_keys ends up identical on all three machines; one way to do this is sketched below.
(If anything is unclear, the video walks through these settings in detail.)
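A sketch of one way to assemble identical authorized_keys files, assuming password login still works at this point:
# on hadoop2 and hadoop3: send each public key to hadoop1
ssh-copy-id hadoop@hadoop1
# then on hadoop1: push the merged file back out
scp ~/.ssh/authorized_keys hadoop@hadoop2:~/.ssh/
scp ~/.ssh/authorized_keys hadoop@hadoop3:~/.ssh/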
As root, enable the following settings in the SSH configuration file /etc/ssh/sshd_config, then restart sshd:
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # public key file path (the file generated above)
service sshd restart
Log out of root and, as the ordinary hadoop user, verify that passwordless login works:
ssh hadoop1
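A quick loop to confirm every host accepts key login without prompting:
for h in hadoop1 hadoop2 hadoop3; do ssh $h hostname; done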
cd /app/hadoop/hadoop220
Format the HDFS filesystem:
bin/hdfs namenode -format
Start/stop the HDFS filesystem:
sbin/start-dfs.sh
sbin/stop-dfs.sh
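After start-dfs.sh, running jps on each node should show the HDFS daemons, e.g.:
jps
# hadoop1: NameNode, SecondaryNameNode, DataNode (hadoop1 is also listed in slaves)
# hadoop2/hadoop3: DataNode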
Test the HDFS filesystem:
bin/hdfs dfs -ls /
bin/hdfs dfs -mkdir -p /dataguru/test
bin/hdfs dfs -ls /dataguru/test
bin/hdfs dfs -put LICENSE.txt /dataguru/test/
bin/hdfs dfs -ls /dataguru/test
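To check that the upload round-trips correctly, e.g.:
bin/hdfs dfs -cat /dataguru/test/LICENSE.txt | head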
Start/stop the whole Hadoop system:
sbin/start-all.sh
sbin/stop-all.sh
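Note that start-all.sh/stop-all.sh are deprecated in Hadoop 2.x; the equivalent is to start HDFS and YARN separately:
sbin/start-dfs.sh && sbin/start-yarn.sh
sbin/stop-yarn.sh && sbin/stop-dfs.sh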
All related scripts (contents of sbin/):
distribute-exclude.sh start-all.cmd stop-all.sh
hadoop-daemon.sh start-all.sh stop-balancer.sh
hadoop-daemons.sh start-balancer.sh stop-dfs.cmd
hdfs-config.cmd start-dfs.cmd stop-dfs.sh
hdfs-config.sh start-dfs.sh stop-secure-dns.sh
httpfs.sh start-secure-dns.sh stop-yarn.cmd
mr-jobhistory-daemon.sh start-yarn.cmd stop-yarn.sh
refresh-namenodes.sh start-yarn.sh yarn-daemon.sh
slaves.sh stop-all.cmd yarn-daemons.sh
4 Troubleshooting notes
Symptom: could only be replicated to 0 nodes, instead of 1
Fix for the DataNode failing to start:
Cause: formatting HDFS multiple times generates incompatible namespaceIDs.
1 On every node, delete the logs directory and the data directory mydata:
rm -rf logs/*
rm -rf mydata/*
2 Reformat HDFS:
bin/hdfs namenode -format
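After reformatting, restart HDFS and confirm that all DataNodes have registered:
sbin/start-dfs.sh
bin/hdfs dfsadmin -report # should list three live datanodes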