1 Download, extract, and configure Hadoop
hadoop-env.sh
export JAVA_HOME=/opt/JDK/jdk1.8.0_45
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/data</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>sherry:9001</value>
  </property>
</configuration>
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/name</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://sherry:9000</value>
  </property>
</configuration>
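The configs above point HDFS at local paths under /hadoop (dfs.data.dir, dfs.name.dir, hadoop.tmp.dir). It is worth creating those directories up front. A minimal sketch; BASE is a stand-in variable so the snippet is safe to try anywhere, on the real node set BASE=/hadoop (which typically needs root):

```shell
# Create the local storage directories referenced by the XML configs.
# On the real node: BASE=/hadoop (matching the values above); the default
# below is just a scratch directory for trying the snippet out.
BASE="${BASE:-$(mktemp -d)}"
mkdir -p "$BASE/data" "$BASE/name"   # dfs.data.dir and dfs.name.dir
ls "$BASE"
```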
vi /etc/profile
export HADOOP_HOME=/opt/hadoop-1.2.1
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
source /etc/profile    (makes the changes take effect in the current shell)
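After sourcing the profile, a quick sanity check confirms the Hadoop bin directory actually landed on PATH (same values as the exports above); on the real node, hadoop version should then run without a full path:

```shell
# Re-create the profile exports and confirm the Hadoop bin directory is on PATH.
export HADOOP_HOME=/opt/hadoop-1.2.1
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH"
echo "$PATH" | tr ':' '\n' | grep -x "$HADOOP_HOME/bin"
```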
2 Install and configure SSH
apt-get install openssh-server
ssh-keygen -t dsa -P ''
After running this command, two files appear under /root/.ssh: id_dsa and id_dsa.pub.
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys    (sshd may refuse the key file if its permissions are too open)
If ssh localhost now works without prompting for a password, SSH is configured correctly.
3 Format the HDFS NameNode
hadoop namenode -format
4 Start Hadoop
start-all.sh
Check whether startup succeeded:
jps
7026 SecondaryNameNode
6706 NameNode
7476 Jps
7125 JobTracker
6859 DataNode
7291 TaskTracker
If all of the daemons above appear, Hadoop started successfully.
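Instead of eyeballing the jps listing, a small script can check for all five Hadoop 1.x daemons. A sketch using the sample output above; on a live node, replace the literal string with jps_output=$(jps):

```shell
# Sample `jps` output from this guide; on a live node use: jps_output=$(jps)
jps_output="7026 SecondaryNameNode
6706 NameNode
7476 Jps
7125 JobTracker
6859 DataNode
7291 TaskTracker"
missing=""
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  # -w matches whole words, so "NameNode" does not match inside "SecondaryNameNode"
  printf '%s\n' "$jps_output" | grep -qw "$d" || missing="$missing $d"
done
if [ -z "$missing" ]; then echo "all daemons running"; else echo "missing:$missing"; fi
```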