Setting Up a Personal Hadoop Development Environment on Fedora 18

1. Background

This article describes a "personal hadoop" setup, similar in spirit to "personal condor". The main goal is to keep configuration and log files in a single location, with a symlink pointing at the binaries produced by your development build, so that data and configuration items are preserved whenever the generated binaries are updated.

2. Use Cases

1. Test in a local sandbox without having to modify the software already installed on the system

2. A single source for configuration and log files

3. References

Web pages:

http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment

http://vichargrave.com/create-a-hadoop-build-and-development-environment-for-hadoop/

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

http://wiki.apache.org/hadoop/

http://docs.hortonworks.com/CURRENT/index.htm#Appendix/Configuring_Ports/HDFS_Ports.htm

Books:

Hadoop: The Definitive Guide

4. Disclaimers

1. These are non-native development steps that rely on Maven dependencies; for details on the native Fedora packaging, see https://fedoraproject.org/wiki/Features/Hadoop

2. The steps for setting up a single-node environment are listed below

5. Prerequisites

1. Configure passwordless SSH


yum install openssh openssh-clients openssh-server

# generate a public/private key, if you don't already have one

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/*

# testing ssh:

ps -ef | grep sshd     # verify sshd is running

ssh localhost          # accept the certification when prompted

sudo passwd root       # Make sure the root has a password
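
As a quick sanity check (not part of the original steps), you can confirm that SSH to localhost now works without prompting for a password:

ssh -o BatchMode=yes localhost true && echo "passwordless ssh OK"   # fails instead of prompting if key auth is broken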

2. Install other dependencies

yum install cmake git subversion dh-make ant autoconf automake sharutils libtool asciidoc xmlto curl protobuf-compiler gcc-c++ 

3. Install Java and the development environment

yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel java-1.7.0-openjdk-javadoc *maven*

Update your .bashrc:

 export JVM_ARGS="-Xmx1024m -XX:MaxPermSize=512m"
 export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=512m"

Note: the settings above are for OpenJDK 7 on F18. You can test whether the environment is configured correctly with the following command:

mvn install -Dmaven.test.failure.ignore=true
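
Before moving on, it can also help to confirm that the JDK and Maven being picked up are the ones you expect (a quick check; the exact version strings will vary):

java -version      # should report OpenJDK 1.7.x
mvn -version       # should report Maven running on the same JDK
echo $MAVEN_OPTS   # should echo the heap settings added to .bashrc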

6. Setting up "personal-hadoop"

1. Download and build Hadoop

git clone git://git.apache.org/hadoop-common.git
cd hadoop-common
git checkout -b branch-2.0.4-alpha origin/branch-2.0.4-alpha
mvn clean package -Pdist -DskipTests
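
If the build succeeds, the distribution directory referenced in the next step should now exist (a quick check, assuming the 2.0.4-alpha branch built above):

ls hadoop-dist/target/hadoop-2.0.4-alpha/bin/hadoop   # the hadoop launcher should be present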

2. Create the sandbox environment

In this setup the home directory is assumed to be /home/tstclair.

cd ~
mkdir personal-hadoop
cd personal-hadoop
mkdir -p conf data name logs/yarn
ln -sf <your-git-loc>/hadoop-dist/target/hadoop-2.0.4-alpha home
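
A quick way to confirm the sandbox layout (a sketch; <your-git-loc> is wherever you cloned hadoop-common):

ls -d ~/personal-hadoop/{conf,data,name,logs/yarn}   # all four directories should exist
readlink ~/personal-hadoop/home                      # should print the hadoop-dist build path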

3. Override your environment variables

Append the following to the .bashrc in your home directory:


# Hadoop env override:

export HADOOP_BASE_DIR=${HOME}/personal-hadoop

export HADOOP_LOG_DIR=${HOME}/personal-hadoop/logs

export HADOOP_PID_DIR=${HADOOP_BASE_DIR}

export HADOOP_CONF_DIR=${HOME}/personal-hadoop/conf

export HADOOP_COMMON_HOME=${HOME}/personal-hadoop/home

export HADOOP_HDFS_HOME=${HADOOP_COMMON_HOME}

export HADOOP_MAPRED_HOME=${HADOOP_COMMON_HOME}

# Yarn env override:

export HADOOP_YARN_HOME=${HADOOP_COMMON_HOME}

export YARN_LOG_DIR=${HADOOP_LOG_DIR}/yarn

#classpath override to search hadoop loc

export CLASSPATH=/usr/share/java/:${HADOOP_COMMON_HOME}/share

#Finally update your PATH

export PATH=${HADOOP_COMMON_HOME}/bin:${HADOOP_COMMON_HOME}/sbin:${HADOOP_COMMON_HOME}/libexec:${PATH}

4. Verify the steps above

source ~/.bashrc
which hadoop    # should resolve to ${HOME}/personal-hadoop/home/bin/hadoop
hadoop -help    # verify the classpath is correct

5. Create the initial single-source configuration files

Copy the default configuration files:

cp ${HADOOP_COMMON_HOME}/etc/hadoop/* ${HADOOP_BASE_DIR}/conf

Update your hdfs-site.xml:


<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<!-- Override tstclair with your home directory -->

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://localhost/</value>

</property>

<property>

<name>dfs.name.dir</name>

<value>file:///home/tstclair/personal-hadoop/name</value>

</property>

<property>

<name>dfs.http.address</name>

<value>0.0.0.0:50070</value>

</property>

<property>

<name>dfs.data.dir</name>

<value>file:///home/tstclair/personal-hadoop/data</value>

</property>

<property>

<name>dfs.datanode.address</name>

<value>0.0.0.0:50010</value>

</property>

<property>

<name>dfs.datanode.http.address</name>

<value>0.0.0.0:50075</value>

</property>

<property>

<name>dfs.datanode.ipc.address</name>

<value>0.0.0.0:50020</value>

</property>

</configuration>
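
The paths above use tstclair's home directory; a one-liner like the following (a convenience sketch, not part of the original article) substitutes your own home directory into the copied configuration files:

sed -i "s|/home/tstclair|${HOME}|g" ${HADOOP_BASE_DIR}/conf/*-site.xml   # rewrite the sandbox paths in place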

Update your mapred-site.xml:


<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<!-- Update or append these vars -->

<configuration>

<property>

<name>mapreduce.cluster.temp.dir</name>

<value>

</value>

<description>No description</description>

<final>true</final>

</property>

<property>

<name>mapreduce.cluster.local.dir</name>

<value>

</value>

<description>No description</description>

<final>true</final>

</property>

</configuration>

Finally, update your yarn-site.xml:


<?xml version="1.0"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<configuration>

<!-- Site specific YARN configuration properties -->

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>localhost:8031</value>

<description>host is the hostname of the resource manager and

port is the port on which the NodeManagers contact the Resource Manager.

</description>

</property>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>localhost:8030</value>

<description>host is the hostname of the resourcemanager and port is the port

on which the Applications in the cluster talk to the Resource Manager.

</description>

</property>

<property>

<name>yarn.resourcemanager.scheduler.class</name>

<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>

<description>In case you do not want to use the default scheduler</description>

</property>

<property>

<name>yarn.resourcemanager.address</name>

<value>localhost:8032</value>

<description>the host is the hostname of the ResourceManager and the port is the port on

which the clients can talk to the Resource Manager. </description>

</property>

<property>

<name>yarn.nodemanager.local-dirs</name>

<value>

</value>

<description>the local directories used by the nodemanager</description>

</property>

<property>

<name>yarn.nodemanager.address</name>

<value>localhost:8034</value>

<description>the nodemanagers bind to this port</description>

</property>

<property>

<name>yarn.nodemanager.resource.memory-mb</name>

<value>10240</value>

<description>the amount of memory available on the NodeManager, in MB</description>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce.shuffle</value>

<description>shuffle service that needs to be set for Map Reduce to run </description>

</property>

</configuration>
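
Before starting any daemons, it can be worth confirming that the edited files are still well-formed XML (a quick check, assuming xmllint from libxml2 is installed):

for f in ${HADOOP_BASE_DIR}/conf/*-site.xml; do xmllint --noout "$f" && echo "$f OK"; done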

7. Start the single-node Hadoop cluster

Format the namenode:

hadoop namenode -format
#verify output is correct.
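
If formatting succeeded, the name directory configured in hdfs-site.xml should now be populated (a quick check under the sandbox layout assumed above):

ls ${HADOOP_BASE_DIR}/name/current   # should contain an fsimage and a VERSION file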

Start HDFS:

start-dfs.sh

Open http://localhost:50070 in a browser and check that one live node is reported.
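
You can also verify from the command line that the HDFS daemons are running (jps ships with the JDK):

jps   # should list NameNode, DataNode and SecondaryNameNode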

Next, start YARN:

start-yarn.sh

Check the log files to verify that the daemons started correctly.
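
For example (a sketch using the log locations configured earlier), you can check for the YARN daemons and scan their logs for errors:

jps                                   # should now also list ResourceManager and NodeManager
grep -i error ${YARN_LOG_DIR}/*.log   # ideally prints nothing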

Finally, run a MapReduce job to check that Hadoop is working:

cd ${HADOOP_COMMON_HOME}/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.0.4-alpha.jar randomwriter out
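
When the job completes, the output directory should exist in HDFS (randomwriter writes binary sequence files, so a listing is enough to confirm):

hadoop fs -ls out   # should list one part file per map task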

Original article: http://timothysc.github.io/blog/2013/04/22/personalhadoop/
