Install the JDK
yum install java-1.7.0-openjdk*

Check the installation: java -version
Create a hadoop user, and set it up so it can SSH to localhost without a password.
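The user has to exist before you can switch to it; a minimal sketch, run as root (useradd and passwd are the standard CentOS tools, and the username matches the rest of this guide):

useradd hadoop
passwd hadoop

With the user in place: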
su - hadoop
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

cd /home/hadoop/.ssh
chmod 700 ~/.ssh
chmod 600 authorized_keys
Mind the permissions here: the .ssh directory must be 700 and authorized_keys must be 600.
Verify:
[hadoop@localhost .ssh]$ ssh localhost
Last login: Sun Nov 17 22:11:55 2013
Unpack Hadoop and install it to /opt/hadoop:
tar -xzvf hadoop-2.6.0.tar.gz
mv -i /home/erik/hadoop-2.6.0 /opt/hadoop
chown -R hadoop /opt/hadoop
The files to modify are hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml.
cd /opt/hadoop/etc/hadoop
Set the Java environment variable in hadoop-env.sh. The default, which just re-exports the system JAVA_HOME, does not seem to take effect, so change it to an explicit path:
export JAVA_HOME={your JDK installation path}
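With the OpenJDK 1.7 package installed above, the path looks like the one used in /etc/profile further down; you can confirm where yum actually put it (the exact version string will vary with your system):

readlink -f /usr/bin/java
# prints e.g. /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64/jre/bin/java
# JAVA_HOME is that path minus the trailing /jre/bin/java:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64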
core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/tmp</value>
    </property>
    <property>
        <!-- deprecated in 2.x in favor of fs.defaultFS, but still works;
             the value needs the hdfs:// scheme -->
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
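Hadoop normally creates these directories itself on first format/start, but pre-creating them with the right owner avoids permission surprises; a small sketch using the paths from the configs above:

mkdir -p /opt/hadoop/tmp /opt/hadoop/dfs/name /opt/hadoop/dfs/data
chown -R hadoop /opt/hadoop/tmp /opt/hadoop/dfs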
yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
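Unlike the other files, mapred-site.xml does not ship in etc/hadoop by default; create it from the bundled template first (assuming you are still in /opt/hadoop/etc/hadoop):

cp mapred-site.xml.template mapred-site.xml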
mapred-site.xml
<configuration>
    <property>
        <!-- mapred.job.tracker is a Hadoop 1.x setting; under YARN the
             framework is selected with mapreduce.framework.name instead -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure the environment variables by appending the following to the end of /etc/profile. The changes must be reloaded (source /etc/profile, or log out and back in) before they take effect!
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/bin
export HADOOP_INSTALL=/opt/hadoop
export PATH=${HADOOP_INSTALL}/bin:${HADOOP_INSTALL}/sbin:${PATH}
export HADOOP_MAPRED_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_HOME=${HADOOP_INSTALL}
export HADOOP_HDFS_HOME=${HADOOP_INSTALL}
export YARN_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_INSTALL}/lib:${HADOOP_INSTALL}/lib/native"
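To check that the variables took effect without logging out, a quick sanity check (hadoop version is part of the standard CLI):

source /etc/profile
echo $HADOOP_INSTALL      # should print /opt/hadoop
hadoop version            # should report Hadoop 2.6.0 if PATH is right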
Now for the moment of truth:
cd /opt/hadoop/
Format HDFS (only needed the first time; reformatting destroys any existing HDFS data):
bin/hdfs namenode -format
Start HDFS and YARN:
sbin/start-dfs.sh
sbin/start-yarn.sh
In theory you should see:
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-.out
localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-.out
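You can also confirm the running daemons with jps, which ships with the JDK; with both scripts run, the expected processes (PIDs omitted, they will differ) are roughly:

jps
# NameNode
# DataNode
# SecondaryNameNode
# ResourceManager
# NodeManager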
Open http://127.0.0.1:50070 in a browser and you should see the Hadoop web UI; that means the setup succeeded.
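As a further smoke test, you can run the example MapReduce job that ships with the 2.6.0 tarball (the jar path below is as packaged in the distribution):

cd /opt/hadoop
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5
# a successful run ends by printing an estimated value of Pi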