Fully Distributed Hadoop Installation

Hadoop study notes -- installation in fully distributed mode

  Steps for installing Hadoop in fully distributed mode

  Overview of Hadoop modes

  Standalone mode: simple to install and needs almost no configuration, but only suitable for debugging.

  Pseudo-distributed mode: starts all five daemons (namenode, datanode, jobtracker, tasktracker, and secondary namenode) on a single node, simulating the nodes of a distributed cluster.

  Fully distributed mode: a real Hadoop cluster, made up of multiple nodes each with its own role.

  Installation environment

  Virtualization platform: VMware 2

  Operating system: Oracle Linux 5.6

  Software versions: hadoop-0.20.2, jdk-6u18

  Cluster layout: 3 nodes -- one master node (gc) and two slave nodes (rac1, rac2)

  Installation steps

  1.        Download Hadoop and the JDK:

  e.g. hadoop-0.20.2

  

  2.        Configure the hosts file

  On all nodes (gc, rac1, rac2), edit /etc/hosts so that every node can resolve the others' hostnames to IP addresses.

  [root@gc ~]$ cat /etc/hosts

  # Do not remove the following line, or various programs

  # that require network functionality will fail.

  127.0.0.1               localhost.localdomain localhost

  ::1             localhost6.localdomain6 localhost6

  192.168.2.101           rac1.localdomain rac1

  192.168.2.102           rac2.localdomain rac2

  192.168.2.100           gc.localdomain gc
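The hosts entries can be sanity-checked mechanically. Below is an illustrative sketch (the `lookup_ip` helper and the `/tmp/hosts.demo` file are not part of Hadoop; on a real node you would point the helper at `/etc/hosts`):

```shell
#!/bin/sh
# lookup_ip FILE NAME -> print the IP that FILE maps NAME to
lookup_ip() {
  awk -v h="$2" '$0 !~ /^[[:space:]]*#/ {
      for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit }
  }' "$1"
}

# Demo against a copy of the entries shown above
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1      localhost.localdomain localhost
192.168.2.101  rac1.localdomain rac1
192.168.2.102  rac2.localdomain rac2
192.168.2.100  gc.localdomain gc
EOF

for h in gc rac1 rac2; do
  echo "$h -> $(lookup_ip /tmp/hosts.demo "$h")"
done
```

If any node prints an empty address, its entry is missing or misspelled in the hosts file.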

  3.        Create the Hadoop service account

  Create the account that will run Hadoop on every node.

  [root@gc ~]# groupadd hadoop

  [root@gc ~]# useradd -g hadoop grid    --be sure to specify the group here, otherwise SSH trust between the nodes may fail to establish

  [root@gc ~]# id grid

  uid=501(grid) gid=54326(hadoop) groups=54326(hadoop)

  [root@gc ~]# passwd grid

  Changing password for user grid.

  New UNIX password:

  BAD PASSWORD: it is too short

  Retype new UNIX password:

  passwd: all authentication tokens updated successfully.
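The same group and user must exist on all three nodes. A hedged sketch that only prints the commands to run (the `gen_user_cmds` helper is illustrative; execute the printed commands as root on the respective hosts):

```shell
#!/bin/sh
# Print the account-creation command for each node given on the command line.
gen_user_cmds() {
  for h in "$@"; do
    echo "ssh root@$h 'groupadd hadoop && useradd -g hadoop grid && passwd grid'"
  done
}

gen_user_cmds gc rac1 rac2
```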

  4.        Configure passwordless SSH login

  Note: log in as the Hadoop user and work in that user's home directory.

  Repeat the same steps on every node:

  [hadoop@gc ~]$ ssh-keygen -t rsa

  Generating public/private rsa key pair.

  Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

  Created directory '/home/hadoop/.ssh'.

  Enter passphrase (empty for no passphrase):

  Enter same passphrase again:

  Your identification has been saved in /home/hadoop/.ssh/id_rsa.

  Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

  The key fingerprint is:

  54:80:fd:77:6b:87:97:ce:0f:32:34:43:d1:d2:c2:0d hadoop@gc.localdomain

  [hadoop@gc ~]$ cd .ssh

  [hadoop@gc .ssh]$ ls

  id_rsa  id_rsa.pub

  Append each node's public key to the authorized_keys file of every other node; after that, the nodes can ssh into one another without a password.

  This can all be done from a single node (gc):

  [hadoop@gc .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  [hadoop@gc .ssh]$ ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  The authenticity of host 'rac1 (192.168.2.101)' can't be established.

  RSA key fingerprint is 19:48:e0:0a:37:e1:2a:d5:ba:c8:7e:1b:37:c6:2f:0e.

  Are you sure you want to continue connecting (yes/no)? yes

  Warning: Permanently added 'rac1,192.168.2.101' (RSA) to the list of known hosts.

  hadoop@rac1's password:

  [hadoop@gc .ssh]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  The authenticity of host 'rac2 (192.168.2.102)' can't be established.

  RSA key fingerprint is 19:48:e0:0a:37:e1:2a:d5:ba:c8:7e:1b:37:c6:2f:0e.

  Are you sure you want to continue connecting (yes/no)? yes

  Warning: Permanently added 'rac2,192.168.2.102' (RSA) to the list of known hosts.

  hadoop@rac2's password:

  [hadoop@gc .ssh]$ scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys

  hadoop@rac1's password:

  authorized_keys                                100% 1213     1.2KB/s   00:00

  [hadoop@gc .ssh]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

  hadoop@rac2's password:

  authorized_keys                                100% 1213     1.2KB/s   00:00

  [hadoop@gc .ssh]$ ll

  total 16

  -rw-rw-r-- 1 hadoop hadoop 1213 10-30 09:18 authorized_keys

  -rw------- 1 hadoop hadoop 1675 10-30 09:05 id_rsa

  -rw-r--r-- 1 hadoop hadoop  403 10-30 09:05 id_rsa.pub

  --Test the connections to each node

  [hadoop@gc .ssh]$ ssh rac1 date

  Sun Nov 18 01:35:39 CST 2012

  [hadoop@gc .ssh]$ ssh rac2 date

  Tue Oct 30 09:52:46 CST 2012

  --Note that this step is the same as establishing SSH user equivalence when configuring Oracle RAC.
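Once the keys are distributed, every node should be reachable without a password. Below is a sketch of a batch check (the `check_ssh` helper is illustrative; `BatchMode=yes` makes ssh fail instead of prompting, so broken trust shows up immediately). With `DRY_RUN=1` it only prints the commands it would run:

```shell
#!/bin/sh
NODES="gc rac1 rac2"

check_ssh() {  # $1 = node name
  cmd="ssh -o BatchMode=yes -o ConnectTimeout=5 $1 date"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"                      # just show what would run
  elif $cmd >/dev/null 2>&1; then
    echo "$1: passwordless ssh OK"
  else
    echo "$1: passwordless ssh FAILED"
  fi
}

DRY_RUN=1                            # set to 0 on the real cluster
for n in $NODES; do check_ssh "$n"; done
```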

  5.        Extract the Hadoop tarball

  --You can extract and configure on one node first, then copy the tree to the others

  [grid@gc ~]$ ll

  total 43580

  -rw-r--r-- 1 grid hadoop 44575568 2012-11-19 hadoop-0.20.2.tar.gz

  [grid@gc ~]$ tar xzvf /home/grid/hadoop-0.20.2.tar.gz

  [grid@gc ~]$ ll

  total 43584

  drwxr-xr-x 12 grid hadoop     4096 2010-02-19 hadoop-0.20.2

  -rw-r--r--  1 grid hadoop 44575568 2012-11-19 hadoop-0.20.2.tar.gz

  --Install the JDK on every node

  [root@gc ~]# ./jdk-6u18-linux-x64-rpm.bin

  6.         Configure the Hadoop files

  - Configure hadoop-env.sh

  [root@gc conf]# pwd

  /root/hadoop-0.20.2/conf

  --Set the path of the JDK installation

  [root@gc conf]# vi hadoop-env.sh

  export JAVA_HOME=/usr/java/jdk1.6.0_18

  - Configure the namenode: edit the site files

  --Edit core-site.xml

  [root@gc conf]# vi core-site.xml

  <?xml version="1.0"?>

  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>

  <property>

  <name>fs.default.name</name>

  <value>hdfs://192.168.2.100:9000</value> --note: in fully distributed mode, be sure to use an IP address here (and below)

  </property>

  </configuration>

  Note: fs.default.name is the IP address and port of the NameNode.

  --Edit hdfs-site.xml

  [root@gc conf]# vi hdfs-site.xml

  <?xml version="1.0"?>

  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>

  <property>

  <name>dfs.data.dir</name>

  <value>/home/grid/hadoop-0.20.2/data</value> --note: this directory must already exist and be writable

  </property>

  <property>

  <name>dfs.replication</name>

  <value>2</value>

  </property>

  </configuration>

  Common configuration parameters in hdfs-site.xml (table omitted):
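Because dfs.data.dir must exist and be writable before the datanode will start, it is worth checking up front. A sketch (the `check_data_dir` helper and the demo path `/tmp/dfs-data-demo` are illustrative; on the real nodes run it against `/home/grid/hadoop-0.20.2/data`, on every node):

```shell
#!/bin/sh
# check_data_dir DIR -> create DIR if missing, then verify it is writable
check_data_dir() {
  mkdir -p "$1" || { echo "cannot create $1"; return 1; }
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "ok: $1 exists and is writable"
  else
    echo "bad: $1 is not writable"
    return 1
  fi
}

check_data_dir /tmp/dfs-data-demo
```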

  

  --Edit mapred-site.xml

  [root@gc conf]# vi mapred-site.xml

  <?xml version="1.0"?>

  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>

  <property>

  <name>mapred.job.tracker</name>

  <value>192.168.2.100:9001</value>

  </property>

  </configuration>

  Common configuration parameters in mapred-site.xml (table omitted):

  

  - Configure the masters and slaves files

  [grid@gc conf]$ vi masters

  gc

  [grid@gc conf]$ vi slaves

  rac1

  rac2

  - Copy Hadoop to every node

  --Copy the configured Hadoop tree from gc to each of the other nodes

  --Note: the configuration files are identical on all nodes; fs.default.name and mapred.job.tracker point at the master (192.168.2.100) everywhere, so the tree can be copied unchanged

  [grid@gc ~]$ scp -r hadoop-0.20.2 rac1:/home/grid/

  [grid@gc ~]$ scp -r hadoop-0.20.2 rac2:/home/grid/
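The copy step can be driven by the slaves file itself, so the node list lives in one place. A hedged sketch (the `gen_copy_cmds` helper and the `/tmp/slaves.demo` file are illustrative; on the cluster, read `conf/slaves` and run the printed commands):

```shell
#!/bin/sh
# Demo slaves file with the same contents as the conf/slaves above
cat > /tmp/slaves.demo <<'EOF'
rac1
rac2
EOF

# gen_copy_cmds FILE -> one scp command per non-blank, non-comment slave entry
gen_copy_cmds() {
  while IFS= read -r node; do
    case "$node" in ''|"#"*) continue ;; esac
    echo "scp -r /home/grid/hadoop-0.20.2 ${node}:/home/grid/"
  done < "$1"
}

gen_copy_cmds /tmp/slaves.demo
```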

  7.         Format the namenode

  --Format HDFS from the master (NameNode) node; this only needs to be done once, before the first start

  [grid@gc bin]$ pwd

  /home/grid/hadoop-0.20.2/bin

  [grid@gc bin]$ ./hadoop namenode -format

  12/10/31 08:03:31 INFO namenode.NameNode: STARTUP_MSG:

  /************************************************************

  STARTUP_MSG: Starting NameNode

  STARTUP_MSG:   host = gc.localdomain/192.168.2.100

  STARTUP_MSG:   args = [-format]

  STARTUP_MSG:   version = 0.20.2

  STARTUP_MSG:   build = ; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

  ************************************************************/

  12/10/31 08:03:31 INFO namenode.FSNamesystem: fsOwner=grid,hadoop

  12/10/31 08:03:31 INFO namenode.FSNamesystem: supergroup=supergroup

  12/10/31 08:03:31 INFO namenode.FSNamesystem: isPermissionEnabled=true

  12/10/31 08:03:32 INFO common.Storage: Image file of size 94 saved in 0 seconds.

  12/10/31 08:03:32 INFO common.Storage: Storage directory /tmp/hadoop-grid/dfs/name has been successfully formatted.

  12/10/31 08:03:32 INFO namenode.NameNode: SHUTDOWN_MSG:

  /************************************************************

  SHUTDOWN_MSG: Shutting down NameNode at gc.localdomain/192.168.2.100

  ************************************************************/

  8.         Start Hadoop

  --Start the Hadoop daemons from the master node

  [grid@gc bin]$ pwd

  /home/grid/hadoop-0.20.2/bin

  [grid@gc bin]$ ./start-all.sh

  starting namenode, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-namenode-gc.localdomain.out

  rac2: starting datanode, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-datanode-rac2.localdomain.out

  rac1: starting datanode, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-datanode-rac1.localdomain.out

  The authenticity of host 'gc (192.168.2.100)' can't be established.

  RSA key fingerprint is 8e:47:42:44:bd:e2:28:64:10:40:8e:b5:72:f9:6c:82.

  Are you sure you want to continue connecting (yes/no)? yes

  gc: Warning: Permanently added 'gc,192.168.2.100' (RSA) to the list of known hosts.

  gc: starting secondarynamenode, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-secondarynamenode-gc.localdomain.out

  starting jobtracker, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-jobtracker-gc.localdomain.out

  rac2: starting tasktracker, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-tasktracker-rac2.localdomain.out

  rac1: starting tasktracker, logging to /home/grid/hadoop-0.20.2/bin/../logs/hadoop-grid-tasktracker-rac1.localdomain.out

  9.        Use jps to verify that the daemons started

  --Check the processes on the master node

  [grid@gc bin]$ /usr/java/jdk1.6.0_18/bin/jps

  27462 NameNode

  29012 Jps

  27672 JobTracker

  27607 SecondaryNameNode

  --Check the processes on the slave nodes

  [grid@rac1 conf]$ /usr/java/jdk1.6.0_18/bin/jps

  16722 Jps

  16672 TaskTracker

  16577 DataNode

  [grid@rac2 conf]$ /usr/java/jdk1.6.0_18/bin/jps

  31451 DataNode

  31547 TaskTracker

  31608 Jps
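The jps output can be checked mechanically against the daemons each role should run (NameNode, SecondaryNameNode and JobTracker on the master; DataNode and TaskTracker on the slaves). A sketch; the `check_daemons` helper is illustrative:

```shell
#!/bin/sh
# check_daemons "JPS_OUTPUT" "EXPECTED..." -> report any expected daemon missing
check_daemons() {
  missing=""
  for d in $2; do
    echo "$1" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then echo "all daemons up"; else echo "missing:$missing"; fi
}

# Demo with the master-node jps output shown above
master_out="27462 NameNode
29012 Jps
27672 JobTracker
27607 SecondaryNameNode"

check_daemons "$master_out" "NameNode SecondaryNameNode JobTracker"
check_daemons "$master_out" "DataNode TaskTracker"
```

The first call reports "all daemons up"; the second reports the DataNode and TaskTracker as missing, which is expected on the master.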

  10.     Problems encountered during installation

  1)        SSH trust could not be established

  When the user was created without specifying a group, SSH trust between the nodes could not be established, as in these steps:

  [root@gc ~]# useradd grid

  [root@gc ~]# passwd grid

  Solution:

  Create a dedicated group first, and specify it when creating the user:

  [root@gc ~]# groupadd hadoop

  [root@gc ~]# useradd -g hadoop grid

  [root@gc ~]# id grid

  uid=501(grid) gid=54326(hadoop) groups=54326(hadoop)

  [root@gc ~]# passwd grid

  2)        After starting Hadoop, no datanode process on the slave nodes

  Symptom:

  After starting Hadoop on the master node, the master's processes were normal, but the slave nodes had no datanode process.

  --The master node is normal

  [grid@gc bin]$ /usr/java/jdk1.6.0_18/bin/jps
29843 Jps
29703 JobTracker
29634 SecondaryNameNode
29485 NameNode

--Checking the two slave nodes again: still no datanode process
[grid@rac1 bin]$ /usr/java/jdk1.6.0_18/bin/jps
5528 Jps
3213 TaskTracker

[grid@rac2 bin]$ /usr/java/jdk1.6.0_18/bin/jps
30518 TaskTracker
30623 Jps

  Cause:

  --Going back over the startup output on the master, then looking at the datanode startup log on the slave node:

  [grid@rac1 logs]$ pwd

  /home/grid/hadoop-0.20.2/logs

  [grid@rac1 logs]$ more hadoop-grid-datanode-rac1.localdomain.log

  /************************************************************

  STARTUP_MSG: Starting DataNode

  STARTUP_MSG:   host = rac1.localdomain/192.168.2.101

  STARTUP_MSG:   args = []

  STARTUP_MSG:   version = 0.20.2

  STARTUP_MSG:   build = ; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

  ************************************************************/

  2012-11-18 07:43:33,513 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: can not create directory: /usr/hadoop-0.20.2/data

  2012-11-18 07:43:33,513 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.

  2012-11-18 07:43:33,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

  /************************************************************

  SHUTDOWN_MSG: Shutting down DataNode at rac1.localdomain/192.168.2.101

  ************************************************************/

  --The cause: the directory configured for dfs.data.dir in hdfs-site.xml had not been created (the datanode could not create /usr/hadoop-0.20.2/data)

  Solution:

  Create the HDFS data directory on every node (make sure the grid user can write to it), and point dfs.data.dir in hdfs-site.xml at it:

  [root@gc ~]# mkdir -p /home/grid/hadoop-0.20.2/data

  [root@gc conf]# vi hdfs-site.xml

  <?xml version="1.0"?>

  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>

  <property>

  <name>dfs.data.dir</name>

  <value>/home/grid/hadoop-0.20.2/data</value> --note: this directory must already exist and be writable

  </property>

  <property>

  <name>dfs.replication</name>

  <value>2</value>

  </property>

  </configuration>

  --Restart Hadoop; the slave processes now come up normally

  [grid@gc bin]$ ./stop-all.sh

  [grid@gc bin]$ ./start-all.sh
