Hadoop Installation, Day One: Environment Setup (Part 2)

Configure the IP addresses

Set a static IP address and gateway. The configuration below is for the master machine; set up slave1 and slave2 in the same way.
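As a rough sketch (the interface name, gateway, and DNS below are assumptions and depend on your own network; on CentOS 7 the interface is often named ens33 rather than eth0; only the IP addresses are the ones used throughout this article), the file /etc/sysconfig/network-scripts/ifcfg-eth0 on master might contain:

TYPE=Ethernet
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.109.10
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=192.168.109.2

On slave1 and slave2 only IPADDR changes, to 192.168.109.11 and 192.168.109.12 respectively.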

When the configuration is done, restart the network service with service network restart,

then check the IP address with ifconfig.

There are three virtual machines: master, slave1, and slave2.

hosts mapping:

Edit the file: vim /etc/hosts
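On all three machines, add the host-name-to-IP mappings used in this article:

192.168.109.10 master
192.168.109.11 slave1
192.168.109.12 slave2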

Check that the configuration works:

[root@master ~]# ping slave1
PING slave1 (192.168.109.11) 56(84) bytes of data.
64 bytes from slave1 (192.168.109.11): icmp_seq=1 ttl=64 time=1.87 ms
64 bytes from slave1 (192.168.109.11): icmp_seq=2 ttl=64 time=0.505 ms
64 bytes from slave1 (192.168.109.11): icmp_seq=3 ttl=64 time=0.462 ms

[root@slave1 ~]# ping master
PING master (192.168.109.10) 56(84) bytes of data.
64 bytes from master (192.168.109.10): icmp_seq=1 ttl=64 time=0.311 ms
64 bytes from master (192.168.109.10): icmp_seq=2 ttl=64 time=0.465 ms
64 bytes from master (192.168.109.10): icmp_seq=3 ttl=64 time=0.541 ms

[root@slave2 ~]# ping master
PING master (192.168.109.10) 56(84) bytes of data.
64 bytes from master (192.168.109.10): icmp_seq=1 ttl=64 time=1.08 ms
64 bytes from master (192.168.109.10): icmp_seq=2 ttl=64 time=0.463 ms
64 bytes from master (192.168.109.10): icmp_seq=3 ttl=64 time=0.443 ms

Ping every pair of hosts in both directions, as shown above.

Disable the firewall:

Check the firewall status:

[root@master ~]# service iptables status
Redirecting to /bin/systemctl status iptables.service
iptables.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)

Since iptables is not installed on my virtual machines, there is no status to show.

If it is installed, disable iptables as the root user with the following command:

chkconfig iptables off
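Note that the systemctl redirect in the output above indicates CentOS 7, where the default firewall is firewalld rather than iptables. If firewalld is running, stop and disable it as root:

systemctl stop firewalld
systemctl disable firewalld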

Disable SELinux:

[root@master ~]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled        ------ change this value to disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
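The SELINUX=disabled setting only takes effect after a reboot; to stop enforcement immediately in the current session, you can also switch SELinux to permissive mode as root:

setenforce 0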

Configure SSH mutual trust (passwordless login)

As the root user, open the sshd_config file with vi /etc/ssh/sshd_config and enable the following three settings:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

Restart the SSH service after the change:

service sshd restart

Log in as the hadoop user on each of the three nodes and generate a key pair with the following command:

[hadoop@master ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
ee:da:84:e1:9a:e6:06:30:68:89:8e:24:5d:80:e2:de hadoop@master
The key's randomart image is:
+--[ RSA 2048]----+
| ... |
|o . |
|=... |
|*=. |
|Bo. . S |
|.o.E . + |
| . o o |
| oo + |
| ++ ..o |
+-----------------+
[hadoop@master ~]$

Merge the public keys into the authorized_keys file. On the master server, go to the /home/hadoop/.ssh directory and merge them over SSH. Note that the >> redirection runs locally, so each slave's public key is appended to master's authorized_keys:
cat id_rsa.pub >> authorized_keys
ssh hadoop@slave1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh hadoop@slave2 cat ~/.ssh/id_rsa.pub >> authorized_keys

[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_rsa id_rsa.pub
[hadoop@master .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@master .ssh]$ ls
authorized_keys id_rsa id_rsa.pub

[hadoop@master .ssh]$ ls
authorized_keys id_rsa id_rsa.pub known_hosts
[hadoop@master .ssh]$ ls -l
total 16
-rw-rw-r-- 1 hadoop hadoop 790 May 13 09:55 authorized_keys
-rw------- 1 hadoop hadoop 1679 May 13 09:57 id_rsa
-rw-r--r-- 1 hadoop hadoop 395 May 13 09:57 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 183 May 13 09:55 known_hosts
[hadoop@master .ssh]$ ssh hadoop@slave1 cat ~/.ssh/id_rsa.pub >> authorized_keys
hadoop@slave1's password:
[hadoop@master .ssh]$ ls -l
total 16
-rw-rw-r-- 1 hadoop hadoop 1185 May 13 09:57 authorized_keys
-rw------- 1 hadoop hadoop 1679 May 13 09:57 id_rsa
-rw-r--r-- 1 hadoop hadoop 395 May 13 09:57 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 183 May 13 09:55 known_hosts
[hadoop@master .ssh]$ ssh hadoop@slave2 cat ~/.ssh/id_rsa.pub >> authorized_keys
The authenticity of host 'slave2 (192.168.109.12)' can't be established.
ECDSA key fingerprint is 4e:f1:2c:3e:34:c1:36:75:ae:1c:a1:38:cf:45:bc:f8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.109.12' (ECDSA) to the list of known hosts.
hadoop@slave2's password:
[hadoop@master .ssh]$ ls -l
total 16
-rw-rw-r-- 1 hadoop hadoop 1580 May 13 09:58 authorized_keys
-rw------- 1 hadoop hadoop 1679 May 13 09:57 id_rsa
-rw-r--r-- 1 hadoop hadoop 395 May 13 09:57 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 366 May 13 09:58 known_hosts
[hadoop@master .ssh]$

Then distribute the merged authorized_keys file to the other nodes:

[hadoop@master .ssh]$ scp authorized_keys hadoop@slave1:/home/hadoop/.ssh
hadoop@slave1's password:
authorized_keys 100% 1580 1.5KB/s 00:00

[hadoop@master .ssh]$ scp authorized_keys hadoop@slave2:/home/hadoop/.ssh
hadoop@slave2's password:
authorized_keys 100% 1580 1.5KB/s 00:00
[hadoop@master .ssh]$

On all three machines, set the permissions on authorized_keys as follows:

chmod 400 authorized_keys

Test whether passwordless SSH login works:

[hadoop@master .ssh]$ ssh slave1
Last login: Fri May 13 10:23:09 2016 from master
[hadoop@slave1 ~]$ ssh slave2
Last login: Fri May 13 10:23:33 2016 from slave1
[hadoop@slave2 ~]$ ssh master
The authenticity of host 'master (192.168.109.10)' can't be established.
ECDSA key fingerprint is 4e:f1:2c:3e:34:c1:36:75:ae:1c:a1:38:cf:45:bc:f8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.109.10' (ECDSA) to the list of known hosts.
Last login: Fri May 13 09:46:50 2016
[hadoop@master ~]$

The test passes.

Hadoop installation and configuration:

Unpack the Hadoop tarball:

[root@master ~]# cd /tmp/
[root@master tmp]# ls

[root@master tmp]# tar -zxvf hadoop-2.4.1.tar.gz

[root@master tmp]# mv hadoop-2.4.1 /usr

[root@master tmp]# chown -R hadoop /usr/hadoop-2.4.1

Create subdirectories under the Hadoop directory:

[hadoop@master hadoop-2.4.1]$ mkdir -p tmp
[hadoop@master hadoop-2.4.1]$ mkdir -p name
[hadoop@master hadoop-2.4.1]$ mkdir -p data

Configure hadoop-env.sh:

Configure the JDK path and the hadoop/bin path in it:

[hadoop@master ~]$ cd /usr/hadoop-2.4.1/etc/hadoop

[hadoop@master hadoop]$ vim hadoop-env.sh
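The two lines of interest in hadoop-env.sh are the JDK home and the Hadoop bin path. As a sketch (the JDK path is the one that also appears in yarn-env.sh below; adjust both paths to your own installation), the edits look roughly like:

export JAVA_HOME=/usr/java/jdk1.8.0_65
export PATH=$PATH:/usr/hadoop-2.4.1/bin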

Confirm that it takes effect:

[hadoop@master hadoop]$ source hadoop-env.sh
[hadoop@master hadoop]$ hadoop version
Hadoop 2.4.1
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins on 2014-06-21T05:43Z
Compiled with protoc 2.5.0
From source with checksum bb7ac0a3c73dc131f4844b873c74b630
This command was run using /usr/hadoop-2.4.1/share/hadoop/common/hadoop-common-2.4.1.jar
[hadoop@master hadoop]$

Configure core-site.xml (fs.default.name is only the deprecated alias of fs.defaultFS; keeping both does no harm, but fs.defaultFS alone is enough on Hadoop 2.x):

[hadoop@master hadoop-2.4.1]$ cd etc/hadoop/
[hadoop@master hadoop]$ vim core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.4.1/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>

Configure yarn-env.sh:

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/java/jdk1.8.0_65         ------ added
export PATH=$PATH:/usr/hadoop-2.4.1/bin        ------ added

if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

Configure hdfs-site.xml:

[hadoop@master hadoop]$ vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/hadoop-2.4.1/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/hadoop-2.4.1/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
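Note that dfs.namenode.name.dir and dfs.datanode.data.dir above point to /usr/hadoop-2.4.1/hdfs/name and /usr/hadoop-2.4.1/hdfs/data, not the name/ and data/ directories created earlier. If you keep the configuration as written, you may want matching directories on each node (an optional step; HDFS can usually create them itself when the parent directory is writable):

mkdir -p /usr/hadoop-2.4.1/hdfs/name /usr/hadoop-2.4.1/hdfs/data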

Configure mapred-site.xml:

[hadoop@master hadoop-2.4.1]$ cd etc/hadoop/
[hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml

[hadoop@master hadoop]$ vim mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

Configure yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

Configure the slaves file:

vi slaves

slave1

slave2

Copy Hadoop to each node:

Create the target directory, switch to root, and change its ownership to the hadoop user; repeat these steps on every node:

[hadoop@slave1 ~]$ mkdir -p /usr/hadoop-2.4.1
mkdir: cannot create directory '/usr/hadoop-2.4.1': Permission denied
[hadoop@slave1 ~]$ su - root
Password:
Last login: Fri May 13 09:10:15 CST 2016 on pts/0
Last failed login: Fri May 13 11:51:04 CST 2016 from master on ssh:notty
There were 3 failed login attempts since the last successful login.
[root@slave1 ~]# mkdir -p /usr/hadoop-2.4.1
[root@slave1 ~]# chown -R hadoop /usr/hadoop-2.4.1/
[root@slave1 ~]#

Then copy the Hadoop directory from master to the slaves:

scp -r /usr/hadoop-2.4.1 slave1:/usr/
scp -r /usr/hadoop-2.4.1 slave2:/usr/

Format the NameNode:

[hadoop@master bin]$ ./hdfs namenode -format

Start HDFS:

cd /usr/hadoop-2.4.1/sbin

./start-dfs.sh

Start YARN:

./start-yarn.sh

Success.
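To confirm that everything came up, check the running Java daemons with jps on each node. With the configuration above, master should show NameNode, SecondaryNameNode, and ResourceManager, while slave1 and slave2 should show DataNode and NodeManager:

jps

The ResourceManager web UI should then be reachable at http://master:8088 (as configured in yarn-site.xml) and the NameNode web UI at http://master:50070, the default HDFS web port in Hadoop 2.x.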

Problem:

[hadoop@master sbin]$ ./start-dfs.sh
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [2016-05-13 13:39:48,187 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable]
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop
Starting secondary namenodes [2016-05-13 13:39:49,547 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0.0.0.0]
Error: Cannot find configuration directory: /etc/hadoop
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

Solution: export HADOOP_CONF_DIR=/usr/hadoop-2.4.1/etc/hadoop
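To make this setting survive new shells, one option is to append the export to the hadoop user's profile and reload it:

echo 'export HADOOP_CONF_DIR=/usr/hadoop-2.4.1/etc/hadoop' >> ~/.bash_profile
source ~/.bash_profile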

Problem: after rebooting the machine, the following happens:

[hadoop@master hadoop]$ hadoop version
bash: hadoop: command not found...

[hadoop@master hadoop]$ vi ~/.bash_profile

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/bin:/usr/hadoop-2.4.1/bin
export PATH
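The profile change only applies to new login shells; to pick it up in the current session, reload it:

source ~/.bash_profile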

[hadoop@master hadoop]$ hadoop version
Hadoop 2.4.1
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins on 2014-06-21T05:43Z
Compiled with protoc 2.5.0
From source with checksum bb7ac0a3c73dc131f4844b873c74b630
This command was run using /usr/hadoop-2.4.1/share/hadoop/common/hadoop-common-2.4.1.jar
