Setting Up a Hadoop 1.2.1 Cluster on CentOS 6.5

This post records the steps for setting up a Hadoop 1.2.1 cluster on 64-bit CentOS 6.5, together with fixes for the problems encountered along the way. These notes are for reference only!

1. Operating System Configuration

1.1. Operating System Environment

Hostname        IP Address      Role                Hadoop User
hadoop-master   192.168.30.50   Hadoop master node  hadoop
hadoop-slave01  192.168.30.51   Hadoop slave node   hadoop
hadoop-slave02  192.168.30.52   Hadoop slave node   hadoop

1.2. Disable the Firewall and SELinux

1.2.1. Disable the Firewall

service iptables stop
chkconfig iptables off

1.2.2. Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

1.3. Configure /etc/hosts

vim /etc/hosts

########## Hadoop host ##########
192.168.30.50   hadoop-master
192.168.30.51   hadoop-slave01
192.168.30.52   hadoop-slave02

Note: perform the steps above as root. To verify, ping each hostname and confirm it resolves to the corresponding IP.
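
The three host entries above can also be generated from a single list, which keeps /etc/hosts identical on every node. A minimal sketch, not from the original article; the helper name gen_hosts is this example's own, and the host list is taken from the table in 1.1:

```shell
# Sketch: emit the Hadoop /etc/hosts block from one host list.
# Review the output first, then append it as root: gen_hosts >> /etc/hosts
gen_hosts() {
  echo "########## Hadoop host ##########"
  for pair in \
      192.168.30.50:hadoop-master \
      192.168.30.51:hadoop-slave01 \
      192.168.30.52:hadoop-slave02; do
    printf '%s\t%s\n' "${pair%%:*}" "${pair#*:}"   # IP<TAB>hostname
  done
}
gen_hosts
```

Because every node consumes the same list, the files cannot drift apart between hosts.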

1.4. Configure Passwordless SSH

Using the hadoop user, configure passwordless access on all three hosts. The steps are identical on every host; hadoop-master is used as the example.

Generate the private/public key pair:
ssh-keygen -t rsa

Copy the public key to each host (the password is required here):

ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]

Note: perform the steps above as the hadoop user. To verify, ssh from the hadoop user to the other hosts; no password prompt should appear.
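
With three nodes the key distribution is manageable by hand; a loop scales better. A hedged sketch, not from the original article: node names are assumed from section 1.1, and the DRY_RUN guard (this example's own device) only prints the commands. Set DRY_RUN=0 to actually run them on the cluster.

```shell
# Sketch: distribute the public key to every node, then verify that
# passwordless login works. DRY_RUN=1 (the default here) only echoes
# the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

nodes="hadoop-master hadoop-slave01 hadoop-slave02"
for n in $nodes; do
  run ssh-copy-id "hadoop@$n"                     # prompts once per node
done
for n in $nodes; do
  run ssh -o BatchMode=yes "hadoop@$n" hostname   # fails rather than prompt
done
```

BatchMode=yes makes ssh exit with an error instead of falling back to a password prompt, so the second loop gives a clean pass/fail signal for each node.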

2. Java Environment Configuration

2.1. Download the JDK

mkdir -p /home/hadoop/app/java
cd /home/hadoop/app/java
wget -c http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin

2.2. Install Java

cd /home/hadoop/app/java
chmod +x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin

2.3. Configure the Java Environment Variables

vim .bash_profile

export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Apply the environment variables:
source .bash_profile

Note: install the JDK as the hadoop user on all machines. To verify, run java -version; it should print the Java version information.
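
If the setup is ever re-run, appending the same lines again leaves duplicates in .bash_profile. A small sketch of an idempotent append; the temp file standing in for ~/.bash_profile is this example's own assumption:

```shell
# Sketch: append the Java variables only when they are not already present.
# $profile points at a temp file here; use ~/.bash_profile on the cluster.
profile=$(mktemp)
if ! grep -q 'JAVA_HOME=' "$profile"; then
  cat >> "$profile" <<'EOF'
export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF
fi
grep -c '^export' "$profile"   # 4 after the first run, still 4 after a re-run
```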

3. Hadoop Installation and Configuration

Install and configure Hadoop as the hadoop user.

3.1. Install Hadoop

  • Download Hadoop 1.2.1

    mkdir -p /home/hadoop/app/hadoop
    cd /home/hadoop/app/hadoop
    wget -c https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1-bin.tar.gz
    tar -zxf hadoop-1.2.1-bin.tar.gz
  • Create the Hadoop temporary directory
    mkdir -p /home/hadoop/app/tmp

3.2. Configure Hadoop

The Hadoop configuration files are all XML files; edit them as the hadoop user.

3.2.1. Configure core-site.xml

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.30.50:9000</value>
    </property>
</configuration>
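
hadoop.tmp.dir must exist on every node. A sketch that extracts the configured path and creates it, so the XML and the directory on disk stay in sync; the here-document stands in for the real conf/core-site.xml, and the path matches the directory created in 3.1:

```shell
# Sketch: read hadoop.tmp.dir out of core-site.xml, then create it.
tmpdir=$(sed -n 's|.*<value>\(/[^<]*\)</value>.*|\1|p' <<'EOF'
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/app/tmp</value>
</property>
EOF
)
echo "$tmpdir"
# mkdir -p "$tmpdir"   # run this on every node
```

The regex only captures values beginning with "/", so URL-valued properties such as fs.default.name are skipped.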

3.2.2. Configure hdfs-site.xml

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

3.2.3. Configure mapred-site.xml

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.30.50:9001</value>
    </property>
</configuration>

3.2.4. Configure masters

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/masters

hadoop-master

3.2.5. Configure slaves

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/slaves

hadoop-slave01
hadoop-slave02

3.2.6. Configure hadoop-env.sh

vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/hadoop-env.sh

Change JAVA_HOME as follows:

export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45
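
The edit can also be scripted, which helps when the setup is repeated on several nodes. A sketch using sed; the stock commented-out JAVA_HOME line is an assumption about the 1.2.1 default, and a temp file stands in for the real conf/hadoop-env.sh:

```shell
# Sketch: switch the commented-out JAVA_HOME line to the installed JDK.
envsh=$(mktemp)
echo '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun' > "$envsh"   # assumed stock line
sed -i 's|^#* *export JAVA_HOME=.*|export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45|' "$envsh"
grep '^export JAVA_HOME' "$envsh"
```

The pattern also matches an already-uncommented line, so re-running the command is harmless.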

3.3. Copy Hadoop to the Slave Nodes

Run the following from /home/hadoop on hadoop-master:
scp -r app hadoop-slave01:/home/hadoop/
scp -r app hadoop-slave02:/home/hadoop/

3.4. Configure the Hadoop Environment Variables

Edit the .bash_profile file in the hadoop user's home directory on all machines and append the following at the end:
vim /home/hadoop/.bash_profile

### Hadoop PATH
export HADOOP_HOME_WARN_SUPPRESS=1
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the environment variables:
source /home/hadoop/.bash_profile
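
A quick self-check after sourcing the profile, before any daemon is started (paths taken from the profile lines above):

```shell
# Sketch: confirm PATH really contains $HADOOP_HOME/bin.
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
  *) echo "PATH is missing $HADOOP_HOME/bin" >&2 ;;
esac
```

If everything is wired up, hadoop version should then resolve without a full path.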

3.5. Start Hadoop

Format the HDFS filesystem on the Hadoop master node, then start the cluster.

3.5.1. Format the HDFS Filesystem

hadoop namenode -format

3.5.2. Start and Stop the Hadoop Cluster

  • Start:
    start-all.sh
  • Stop:
    stop-all.sh
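
The jps checks in 3.5.3 below can be wrapped in a small helper that compares the running processes against the daemons expected for a role. A sketch, not part of Hadoop itself: check_daemons is this example's own helper, and the fixed process string stands in for live jps output.

```shell
# Sketch: report which expected daemons are running or missing.
check_daemons() {
  procs=" $1 "; shift
  missing=0
  for d in "$@"; do
    case "$procs" in
      *" $d "*) echo "$d: running" ;;
      *)        echo "$d: MISSING"; missing=1 ;;
    esac
  done
  return $missing
}
# On a real node: procs="$(jps | awk '{print $2}' | tr '\n' ' ')"
check_daemons "NameNode SecondaryNameNode JobTracker Jps" \
              NameNode SecondaryNameNode JobTracker
```

On a slave, pass DataNode and TaskTracker as the expected daemons instead.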

3.5.3. Daemons Running on Each Node

  • master

    $ jps
    22262 NameNode
    22422 SecondaryNameNode
    24005 Jps
    22506 JobTracker
  • slave
    $ jps
    2700 TaskTracker
    2611 DataNode
    4160 Jps

3.5.4. Verifying Hadoop After Startup

  • Basic file operations

hadoop fs -ls hdfs:/

Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2018-01-09 16:15 /home
drwxr-xr-x   - hadoop supergroup          0 2018-01-10 10:39 /user
  • A simple MapReduce job

    hadoop jar /home/hadoop/app/hadoop/hadoop-1.2.1/hadoop-examples-1.2.1.jar pi 10 10

    The output of the job is:

    Number of Maps  = 10
    Samples per Map = 10
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Wrote input for Map #5
    Wrote input for Map #6
    Wrote input for Map #7
    Wrote input for Map #8
    Wrote input for Map #9
    Starting Job
    18/01/10 13:49:35 INFO mapred.FileInputFormat: Total input paths to process : 10
    18/01/10 13:49:36 INFO mapred.JobClient: Running job: job_201801101031_0002
    18/01/10 13:49:37 INFO mapred.JobClient:  map 0% reduce 0%
    18/01/10 13:49:49 INFO mapred.JobClient:  map 10% reduce 0%
    18/01/10 13:49:50 INFO mapred.JobClient:  map 30% reduce 0%
    18/01/10 13:49:51 INFO mapred.JobClient:  map 40% reduce 0%
    18/01/10 13:49:59 INFO mapred.JobClient:  map 50% reduce 0%
    18/01/10 13:50:00 INFO mapred.JobClient:  map 60% reduce 0%
    18/01/10 13:50:02 INFO mapred.JobClient:  map 80% reduce 0%
    18/01/10 13:50:07 INFO mapred.JobClient:  map 100% reduce 0%
    18/01/10 13:50:12 INFO mapred.JobClient:  map 100% reduce 33%
    18/01/10 13:50:14 INFO mapred.JobClient:  map 100% reduce 100%
    18/01/10 13:50:16 INFO mapred.JobClient: Job complete: job_201801101031_0002
    18/01/10 13:50:16 INFO mapred.JobClient: Counters: 30
    18/01/10 13:50:16 INFO mapred.JobClient:   Job Counters
    18/01/10 13:50:16 INFO mapred.JobClient:     Launched reduce tasks=1
    18/01/10 13:50:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=95070
    18/01/10 13:50:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    18/01/10 13:50:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    18/01/10 13:50:16 INFO mapred.JobClient:     Launched map tasks=10
    18/01/10 13:50:16 INFO mapred.JobClient:     Data-local map tasks=10
    18/01/10 13:50:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=25054
    18/01/10 13:50:16 INFO mapred.JobClient:   File Input Format Counters
    18/01/10 13:50:16 INFO mapred.JobClient:     Bytes Read=1180
    18/01/10 13:50:16 INFO mapred.JobClient:   File Output Format Counters
    18/01/10 13:50:16 INFO mapred.JobClient:     Bytes Written=97
    18/01/10 13:50:16 INFO mapred.JobClient:   FileSystemCounters
    18/01/10 13:50:16 INFO mapred.JobClient:     FILE_BYTES_READ=226
    18/01/10 13:50:16 INFO mapred.JobClient:     HDFS_BYTES_READ=2450
    18/01/10 13:50:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=682653
    18/01/10 13:50:16 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=215
    18/01/10 13:50:16 INFO mapred.JobClient:   Map-Reduce Framework
    18/01/10 13:50:16 INFO mapred.JobClient:     Map output materialized bytes=280
    18/01/10 13:50:16 INFO mapred.JobClient:     Map input records=10
    18/01/10 13:50:16 INFO mapred.JobClient:     Reduce shuffle bytes=280
    18/01/10 13:50:16 INFO mapred.JobClient:     Spilled Records=40
    18/01/10 13:50:16 INFO mapred.JobClient:     Map output bytes=180
    18/01/10 13:50:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=1146068992
    18/01/10 13:50:16 INFO mapred.JobClient:     CPU time spent (ms)=7050
    18/01/10 13:50:16 INFO mapred.JobClient:     Map input bytes=240
    18/01/10 13:50:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1270
    18/01/10 13:50:16 INFO mapred.JobClient:     Combine input records=0
    18/01/10 13:50:16 INFO mapred.JobClient:     Reduce input records=20
    18/01/10 13:50:16 INFO mapred.JobClient:     Reduce input groups=20
    18/01/10 13:50:16 INFO mapred.JobClient:     Combine output records=0
    18/01/10 13:50:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1843138560
    18/01/10 13:50:16 INFO mapred.JobClient:     Reduce output records=0
    18/01/10 13:50:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=7827865600
    18/01/10 13:50:16 INFO mapred.JobClient:     Map output records=20
    Job Finished in 41.091 seconds
    Estimated value of Pi is 3.20000000000000000000
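
The final estimate can be checked by hand. The pi example scatters maps × samples points over the unit square and counts how many land inside the quarter circle; with 100 points, the printed value implies that 80 fell inside (the count of 80 is inferred from the result, it is not shown in the log):

```latex
% Quarter-circle Monte Carlo estimate, N = 10 maps x 10 samples = 100:
\pi \approx 4 \cdot \frac{N_{\text{in}}}{N} = 4 \cdot \frac{80}{100} = 3.2
```

With so few samples the estimate is coarse; raising the sample count per map tightens it.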

The Hadoop setup is now complete. If errors occur, check the logs for details.

4. References

Original article: http://blog.51cto.com/balich/2059402

Posted: 2024-08-27 11:43:17
