1. Change the hostname
Temporary change (current session only): sudo hostname orcll
Permanent change: edit /etc/sysconfig/network and set HOSTNAME=hadoop0 (the default is localhost.localdomain), then reboot: shutdown -r now
2. Bind the hostname to an IP address
Add an entry to the /etc/hosts file, then verify: ping hadoop0
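The binding is a plain "IP hostname" line; a sketch of the /etc/hosts entry (the address 192.168.191.100 is an assumption, substitute the machine's real IP):

```
192.168.191.100   hadoop0
```

Once saved, ping hadoop0 should reach that address instead of failing to resolve.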
3. Disable the firewall
1 service iptables stop        (stop the service now)
2 chkconfig iptables off       (keep it off after reboot)
3 verify: service iptables status and chkconfig --list | grep iptables
4. Passwordless SSH login
1 ssh-keygen -t rsa            (generates id_rsa and id_rsa.pub under ~/.ssh)
2 cd ~/.ssh
3 cp id_rsa.pub authorized_keys
4 if sshd is not running: service sshd start and chkconfig sshd on
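Spelled out as a script, with the permission fixes sshd insists on (a sketch, run as the user that will start Hadoop):

```shell
# Generate a key pair with an empty passphrase (skipped if one already exists)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
# Authorize the public key for login to this host
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# sshd refuses keys kept in group- or world-accessible files
chmod 600 ~/.ssh/authorized_keys
```

Afterwards ssh localhost should log in without prompting for a password.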
Problem: ping baidu.com reports "connect: Network is unreachable"
Solution:
1> Add a default gateway by hand: route add default gw 192.168.191.100 dev eth0
2> Make it permanent: edit /etc/sysconfig/network-scripts/ifcfg-eth0 and set GATEWAY=192.168.191.1
Commands:
restart networking: service network restart
show the routing table: route
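The manual route command in 1> is lost on reboot; 2> is the persistent form. A sketch of a static /etc/sysconfig/network-scripts/ifcfg-eth0 (IPADDR and NETMASK are assumptions chosen as placeholders, adjust to your subnet):

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.191.100
NETMASK=255.255.255.0
GATEWAY=192.168.191.1
```

Apply with service network restart, then confirm the default route with route.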
If ssh is not installed (internet access is required first):
yum -y install openssh-clients
yum -y install openssh-server
ssh localhost
5. Install the JDK
tar -zxvf /opt/hadoop/jdk-8u25-linux-x64.tar.gz
Edit /etc/profile and add at the top of the file:
export JAVA_HOME=/opt/hadoop/jdk1.8.0_25
export PATH=.:$JAVA_HOME/bin:$PATH
Then run source /etc/profile and verify with java -version
6. Install Hadoop
tar -zxvf hadoop-1.1.2.tar.gz
Edit /etc/profile:
export HADOOP_HOME=/opt/hadoop/hadoop-1.1.2
source /etc/profile
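start-all.sh (used in the startup step below) lives in $HADOOP_HOME/bin, which the export alone does not put on the PATH. A consolidated sketch of the /etc/profile additions, assuming the /opt/hadoop install paths used in these notes:

```shell
# Proposed /etc/profile additions (paths assume /opt/hadoop as in these notes)
export JAVA_HOME=/opt/hadoop/jdk1.8.0_25
export HADOOP_HOME=/opt/hadoop/hadoop-1.1.2
# Put both bin directories on the PATH so java and start-all.sh resolve
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
```

Reload with source /etc/profile before running any hadoop commands.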
Edit the Hadoop configuration files (under $HADOOP_HOME/conf):
hadoop-env.sh
1 set the JDK path (JAVA_HOME), line 9
core-site.xml
<property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop0:9000</value>
    <description>change your own hostname</description>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-1.1.2/tmp</value>
</property>
hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
mapred-site.xml
<property>
    <name>mapred.job.tracker</name>
    <value>hadoop0:9001</value>
    <description>change your own hostname</description>
</property>
Format the NameNode: hadoop namenode -format
Start: start-all.sh
Verify: jps (should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, plus Jps itself)
Web UIs: http://hadoop0:50070/dfshealth.jsp (HDFS) and http://hadoop0:50030/jobtracker.jsp (JobTracker)