Hadoop 2.x Source Compilation

I. Basic Environment Setup

  1. Preparation

hadoop-2.5.0-src.tar.gz
apache-maven-3.0.5-bin.tar.gz
jdk-7u67-linux-x64.tar.gz
protobuf-2.5.0.tar.gz
Access to the external network (a quick connectivity check is shown below)
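
Connectivity can be verified up front; a minimal check (the host is just an example, and some hosts block ICMP, in which case a curl against any mirror works just as well):

ping -c 3 apache.org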

  2. Install jdk-7u67-linux-x64.tar.gz and apache-maven-3.0.5-bin.tar.gz
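
Both packages only need to be extracted; a minimal sketch, assuming the tarballs sit in the current directory and /opt/modules (the install root used throughout this article) already exists:

tar -zxf jdk-7u67-linux-x64.tar.gz -C /opt/modules/
tar -zxf apache-maven-3.0.5-bin.tar.gz -C /opt/modules/

With the files in place, register both in /etc/profile: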

[[email protected] ~]$ vi /etc/profile
#JAVA_HOME
export JAVA_HOME=/opt/modules/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
#MAVEN_HOME
export MAVEN_HOME=/opt/modules/apache-maven-3.0.5
export PATH=$PATH:$MAVEN_HOME/bin
[[email protected] ~]$ source /etc/profile
[[email protected] ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[[email protected] ~]$ echo $MAVEN_HOME
/opt/modules/apache-maven-3.0.5
[[email protected] ~]# mvn -v
Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 05:51:28-0800)
Maven home: /opt/modules/apache-maven-3.0.5
Java version: 1.7.0_67, vendor: Oracle Corporation
Java home: /opt/modules/jdk1.7.0_67/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-504.el6.x86_64", arch: "amd64", family: "unix" 

  Note: it is best to have the local Maven repository pre-populated from the prepared files; otherwise Maven will spend a very long time downloading dependencies. (If the repository lives in a non-default location, see the settings.xml sketch after the listing below.)

[[email protected] hadoop-2.5.0-src]# ls /root/.m2/repository/
ant biz commons-chain commons-el commons-validator junit sslext antlr bouncycastle commons-cli commons-httpclient dom4j log4j tomcat aopalliance bsh commons-codec commons-io doxia logkit xerces asm cglib commons-collections commons-lang io net xml-apis avalon-framework classworlds commons-configuration commons-logging javax org xmlenc backport-util-concurrent com commons-daemon commons-net jdiff oro xpp3 bcel commons-beanutils commons-digester commons-pool jline regexp
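
If the prepared repository is unpacked somewhere other than the default ~/.m2/repository, Maven can be pointed at it through the localRepository element in settings.xml; a minimal sketch (the /opt/modules/repository path is only an example):

vi ~/.m2/settings.xml
<settings>
  <!-- assumed location of the pre-populated repository -->
  <localRepository>/opt/modules/repository</localRepository>
</settings>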

  3. Install cmake, zlib-devel, openssl-devel, gcc, gcc-c++, and ncurses-devel via yum

[[email protected] ~]# yum -y install cmake
[[email protected] ~]# yum -y install zlib-devel
[[email protected] ~]# yum -y install openssl-devel
[[email protected] ~]# yum -y install gcc gcc-c++
[[email protected] hadoop-2.5.0-src]# yum -y install ncurses-devel
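
A quick sanity check that the toolchain is in place (exact version output will vary with the CentOS repositories):

cmake --version
gcc --version
g++ --version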

  4. Install protobuf-2.5.0.tar.gz (extract it, then change into the protobuf top-level directory)

[[email protected] protobuf-2.5.0]# mkdir -p /opt/modules/protobuf
[[email protected] protobuf-2.5.0]# ./configure --prefix=/opt/modules/protobuf
...
[[email protected] protobuf-2.5.0]# make
...
[[email protected] protobuf-2.5.0]# make install
...
[[email protected] protobuf-2.5.0]# vi /etc/profile
...
#PROTOBUF_HOME
export PROTOBUF_HOME=/opt/modules/protobuf
export PATH=$PATH:$PROTOBUF_HOME/bin
[[email protected] protobuf-2.5.0]# source /etc/profile
[[email protected] protobuf-2.5.0]# protoc --version
libprotoc 2.5.0

  5. Extract the Hadoop source tarball, change into its top-level directory, and compile. The dist and native profiles build the complete binary distribution together with the native (C/C++) libraries; -DskipTests skips the unit tests, which would otherwise add hours to the build.

[[email protected] protobuf-2.5.0]# cd ../../files/
[[email protected] files]# tar -zxf hadoop-2.5.0-src.tar.gz -C ../src/
[[email protected] files]# cd ../src/hadoop-2.5.0-src/
[[email protected] hadoop-2.5.0-src]# mvn package -DskipTests -Pdist,native
...
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /home/liuwl/opt/src/hadoop-2.5.0-src/hadoop-dist/target/hadoop-dist-2.5.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [8:22.179s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [5:14.366s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [1:50.627s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.795s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [1:11.384s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1:55.962s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [10:21.736s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [4:01.790s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [35.829s]
[INFO] Apache Hadoop Common .............................. SUCCESS [12:51.374s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [29.567s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.220s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:04:44.352s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [1:40.397s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [1:24.100s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [12.020s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.239s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.298s]
[INFO] hadoop-yarn-api ................................... SUCCESS [2:07.150s]
[INFO] hadoop-yarn-common ................................ SUCCESS [3:13.690s]
[INFO] hadoop-yarn-server ................................ SUCCESS [1.009s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [54.750s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [2:53.418s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [23.570s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [16.137s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [1:17.456s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [9.170s]
[INFO] hadoop-yarn-client ................................ SUCCESS [17.790s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.132s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [6.689s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [3.015s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.102s]
[INFO] hadoop-yarn-project ............................... SUCCESS [13.562s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.526s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1:27.794s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [1:32.320s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [19.368s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [26.041s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [31.938s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [38.261s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [5.923s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [12.856s]
[INFO] hadoop-mapreduce .................................. SUCCESS [15.510s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [20.631s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [51.096s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [13.185s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [22.877s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [25.861s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [9.764s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [7.152s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [23.914s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [21.289s]
[INFO] Apache Hadoop Client .............................. SUCCESS [18.486s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.966s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [37.039s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [9.809s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.192s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [34.114s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:21:11.103s
[INFO] Finished at: Wed Sep 14 11:49:38 PDT 2016
[INFO] Final Memory: 86M/239M
[INFO] ------------------------------------------------------------------------
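
When the build succeeds, the packaged distribution lands under hadoop-dist/target/hadoop-2.5.0. To confirm the native libraries were actually built, something like the following should work (a sketch; to my knowledge hadoop checknative is available from Hadoop 2.4 onward):

ls hadoop-dist/target/hadoop-2.5.0/lib/native
hadoop-dist/target/hadoop-2.5.0/bin/hadoop checknative -a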