Hadoop Learning Log - Installing Hadoop

Source: http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm

Hadoop Installation

  1. Create a new user

    $ su

    password:

    # useradd hadoop -g root

    # passwd hadoop

    New password:

    Retype new password:

    Modify /etc/sudoers to grant the hadoop user sudo privileges, as shown below.
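    A minimal way to do this (a sketch; adjust to your distribution's sudo policy) is to run visudo as root and add a line for the hadoop user:

    # visudo

    hadoop  ALL=(ALL)  ALL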

  2. Set up SSH

SSH Setup and Key Generation

SSH setup is required to perform different operations on a cluster, such as starting, stopping, and distributed daemon shell operations. To authenticate different users of Hadoop, it is required to provide a public/private key pair for the Hadoop user and share it with different users.

The following commands are used for generating an SSH key pair. Copy the public key from id_rsa.pub to authorized_keys, and grant the owner read and write permissions on the authorized_keys file.

$ ssh-keygen -t rsa

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ chmod 0600 ~/.ssh/authorized_keys
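
As a quick check (assuming an SSH daemon is running locally), the following should now log you in without prompting for a password:

$ ssh localhost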

  3. Install Java

Java is the main prerequisite for Hadoop. First of all, you should verify the existence of Java on your system using the command "java -version", as shown below.

$ java -version

If everything is in order, it will give you the following output.

java version "1.7.0_71"

Java(TM) SE Runtime Environment (build 1.7.0_71-b13)

Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)

If Java is not installed on your system, then follow the steps given below to install it.

Step 1

Download Java (JDK <latest version> - X64.tar.gz) by visiting the following link: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.

Then jdk-7u71-linux-x64.tar.gz will be downloaded into your system.

Step 2

Generally you will find the downloaded Java file in the Downloads folder. Verify it and extract the jdk-7u71-linux-x64.tar.gz file using the following commands.

$ cd Downloads/

$ ls

jdk-7u71-linux-x64.tar.gz

$ tar zxf jdk-7u71-linux-x64.tar.gz

$ ls

jdk1.7.0_71  jdk-7u71-linux-x64.tar.gz

Step 3

To make Java available to all users, you have to move it to the location "/usr/local/". Switch to root and type the following commands.

$ su

password:

# mv jdk1.7.0_71 /usr/local/

# exit

Step 4

To set up the PATH and JAVA_HOME variables, add the following lines to the ~/.bashrc file.

export JAVA_HOME=/usr/local/jdk1.7.0_71

export PATH=$PATH:$JAVA_HOME/bin

Now apply all the changes to the currently running system.

$ source ~/.bashrc
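
To confirm the variables took effect (a quick check; if a system Java is also installed, which java may resolve to /usr/bin/java instead):

$ echo $JAVA_HOME

/usr/local/jdk1.7.0_71

$ which java

/usr/local/jdk1.7.0_71/bin/java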

Step 5

Use the following commands to configure Java alternatives (run as root; the binaries live under the JDK directory installed above, /usr/local/jdk1.7.0_71):

# alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_71/bin/java 2

# alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_71/bin/javac 2

# alternatives --install /usr/bin/jar jar /usr/local/jdk1.7.0_71/bin/jar 2

# alternatives --set java /usr/local/jdk1.7.0_71/bin/java

# alternatives --set javac /usr/local/jdk1.7.0_71/bin/javac

# alternatives --set jar /usr/local/jdk1.7.0_71/bin/jar

Now verify the java -version command from the terminal as explained above.
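
On Red Hat-style systems you can also inspect the result:

# alternatives --display java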


  4. Downloading Hadoop

Download and extract Hadoop 2.4.1 from the Apache Software Foundation using the following commands.

$ su

password:

# cd /usr/local

# wget http://apache.claz.org/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz

# tar xzf hadoop-2.4.1.tar.gz

# mkdir hadoop

# mv hadoop-2.4.1/* hadoop/

# exit


  5. Installing Hadoop in Standalone Mode

Here we will discuss the installation of Hadoop 2.4.1 in standalone mode.

There are no daemons running and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.

Setting Up Hadoop

You can set the Hadoop environment variables by appending the following lines to the ~/.bashrc file (the PATH entry ensures the hadoop command used below is found), then reload it with source ~/.bashrc.

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

Before proceeding further, you need to make sure that Hadoop is working fine. Just issue the following command:

$ hadoop version

If everything is fine with your setup, then you should see the following result:

Hadoop 2.4.1

Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768

Compiled by hortonmu on 2013-10-07T06:28Z

Compiled with protoc 2.5.0

From source with checksum 79e53ce7994d1628b240f09af91e1af4

It means your Hadoop's standalone mode setup is working fine. By default, Hadoop is configured to run in a non-distributed mode on a single machine.

Example

Let's check a simple example of Hadoop. The Hadoop installation delivers the following example MapReduce jar file, which provides basic MapReduce functionality and can be used for calculations such as estimating the value of Pi, counting words in a given list of files, etc.

$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar

Let's have an input directory into which we will push a few files; our requirement is to count the total number of words in those files. To calculate the total number of words, we do not need to write our own MapReduce program, since the .jar file already contains a word-count implementation. You can try other examples using the same .jar file; just issue the following command to list the MapReduce programs supported by the hadoop-mapreduce-examples-2.4.1.jar file.

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar

Step 1

Create temporary content files in the input directory. You can create this input directory anywhere you would like to work.

$ mkdir input

$ cp $HADOOP_HOME/*.txt input

$ ls -l input

It will give the following files in your input directory:

total 24

-rw-r--r-- 1 root root 15164 Feb 21 10:14 LICENSE.txt

-rw-r--r-- 1 root root   101 Feb 21 10:14 NOTICE.txt

-rw-r--r-- 1 root root  1366 Feb 21 10:14 README.txt

These files have been copied from the Hadoop installation home directory. For your experiment, you can use different and larger sets of files.
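
If you want a larger corpus without hunting for files, one quick option (a sketch using only the files already present) is to concatenate the bundled README repeatedly into the input directory:

$ for i in $(seq 1 50); do cat $HADOOP_HOME/README.txt >> input/big.txt; done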

Step 2

Let's start the Hadoop process to count the total number of words in all the files available in the input directory, as follows:

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount input output

Step 3

Step 2 will do the required processing and save the output in the output/part-r-00000 file, which you can check by using:

$ cat output/*

It will list all the words along with their total counts across all the files in the input directory.

"AS 4

"Contribution" 1

"Contributor" 1

"Derivative
1

"Legal 1

"License" 1

"License"); 1

"Licensor" 1

"NOTICE"
1

"Not 1

"Object" 1

"Source"
1

"Work" 1

"You" 1

"Your") 1

"[]" 1

"control" 1

"printed 1

"submitted"
1

(50%)
1

(BIS),
1

(C)
1

(Don‘t) 1

(ECCN) 1

(INCLUDING 2

(INCLUDING, 2

.............


  6. Installing Hadoop in Pseudo-Distributed Mode

Follow the steps given below to install Hadoop 2.4.1 in pseudo-distributed mode.

Step 1: Setting Up Hadoop

You can set the Hadoop environment variables by appending the following commands to the ~/.bashrc file.

export HADOOP_HOME=/usr/local/hadoop

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

export HADOOP_INSTALL=$HADOOP_HOME

Now apply all the changes to the currently running system.

$ source ~/.bashrc

Step 2: Hadoop Configuration

You can find all the Hadoop configuration files in the location "$HADOOP_HOME/etc/hadoop". It is required to make changes in those configuration files according to your Hadoop infrastructure.

$ cd $HADOOP_HOME/etc/hadoop
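
A quick look at this directory shows the files edited below (a partial listing; Hadoop 2.4.1 ships more files than shown):

$ ls

core-site.xml  hadoop-env.sh  hdfs-site.xml  mapred-site.xml.template  yarn-site.xml  ...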

In order to develop Hadoop programs in Java, you have to reset the Java environment variable in the hadoop-env.sh file by replacing the JAVA_HOME value with the location of Java on your system.

export JAVA_HOME=/usr/local/jdk1.7.0_71

The following is the list of files that you have to edit to configure Hadoop.

core-site.xml

The core-site.xml file contains information such as the port number used for the Hadoop instance, the memory allocated for the file system, the memory limit for storing data, and the size of the read/write buffers.

Open core-site.xml and add the following property between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>

</configuration>

(Note that fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS; the old name still works but logs a warning.)
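
As a sanity check once $HADOOP_HOME/bin is on the PATH, you can ask Hadoop which file system URI it picked up; it should print the value configured above:

$ hdfs getconf -confKey fs.default.name

hdfs://localhost:9000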

hdfs-site.xml

The hdfs-site.xml file contains information such as the replication value, the namenode path, and the datanode path of your local file systems, i.e. the place where you want to store the Hadoop infrastructure.

Let us assume the following data.

dfs.replication (data replication value) = 1

(In the paths below, hadoop is the user name; hadoopinfra/hdfs/namenode is the directory created by the HDFS file system.)

namenode path = /home/hadoop/hadoopinfra/hdfs/namenode

(hadoopinfra/hdfs/datanode is the directory created by the HDFS file system.)

datanode path = /home/hadoop/hadoopinfra/hdfs/datanode

Open this file and add the following properties between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
   </property>

   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
   </property>

</configuration>

Note: In the above file, all the property values are user-defined and you can make changes according to your Hadoop infrastructure.
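
The format step in the verification section below will create the namenode directory, but it does no harm to pre-create both paths now so that they exist and are owned by the hadoop user (paths as assumed above):

$ mkdir -p /home/hadoop/hadoopinfra/hdfs/namenode

$ mkdir -p /home/hadoop/hadoopinfra/hdfs/datanode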

yarn-site.xml

This file is used to configure YARN in Hadoop. Open the yarn-site.xml file and add the following property between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>

</configuration>

mapred-site.xml

This file is used to specify which MapReduce framework we are using. By default, Hadoop contains only a template of this file. First of all, it is required to copy the file from mapred-site.xml.template to mapred-site.xml using the following command.

$ cp mapred-site.xml.template mapred-site.xml

Open the mapred-site.xml file and add the following property between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>

</configuration>

Verifying Hadoop Installation

The following steps are used to verify the Hadoop installation.

Step 1: Name Node Setup

Set up the namenode using the command "hdfs namenode -format" as follows.

$ cd ~

$ hdfs namenode -format

The expected result is as follows.

10/24/14 21:30:55 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = localhost/192.168.1.11

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.4.1

...

...

10/24/14 21:30:56 INFO common.Storage: Storage directory /home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.

10/24/14 21:30:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

10/24/14 21:30:56 INFO util.ExitUtil: Exiting with status 0

10/24/14 21:30:56 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11

************************************************************/

Step 2: Verifying Hadoop dfs

The following command is used to start dfs. Executing this command will start your Hadoop file system.

$ start-dfs.sh

The expected output is as follows:

10/24/14 21:37:56

Starting namenodes on [localhost]

localhost: starting namenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-namenode-localhost.out

localhost: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-localhost.out

Starting secondary namenodes [0.0.0.0]
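
As an additional check, the jps tool that ships with the JDK should now list NameNode, DataNode, and SecondaryNameNode processes (the process IDs will differ on your machine):

$ jps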

Step 3: Verifying Yarn Script

The following command is used to start the YARN script. Executing this command will start your YARN daemons.

$ start-yarn.sh

The expected output is as follows:

starting yarn daemons

starting resourcemanager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-hadoop-resourcemanager-localhost.out

localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-hadoop-nodemanager-localhost.out
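
Running jps again should now show the ResourceManager and NodeManager daemons in addition to the HDFS processes:

$ jps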

Step 4: Accessing Hadoop on Browser

The default port number to access Hadoop is 50070. Use the following URL to access the Hadoop services in a browser.

http://localhost:50070/
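
On a headless machine you can check the same endpoint from the command line; assuming the NameNode web UI is up, this should report an HTTP 200 status:

$ curl -sI http://localhost:50070/ | head -n 1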

Step 5: Verify All Applications of the Cluster

The default port number to access all applications of the cluster is 8088. Use the following URL to visit this service.

http://localhost:8088/

