Hadoop 2.3 Installation Process and Troubleshooting

Three servers: yiprod01, 02, and 03. yiprod01 is the namenode and yiprod02 is the secondarynamenode; all three are datanodes.

The configuration described below must be identical on all three servers.

0. Prerequisites:

0.1 Make sure Java is installed

After installing Java, JAVA_HOME must be set in .bash_profile:

export JAVA_HOME=/home/yimr/local/jdk
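
To verify the setting took effect, reload the profile and check that the JDK is picked up (a quick sanity check):

source ~/.bash_profile
echo $JAVA_HOME
java -version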

0.2 Make sure passwordless SSH trust is established among the three machines; see the separate article for details.
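
As a minimal sketch (the separate article has the full procedure), passwordless SSH from yiprod01 to all three nodes can be set up along these lines; the yimr account name is an assumption taken from the paths used elsewhere in this article:

ssh-keygen -t rsa
ssh-copy-id yimr@yiprod01
ssh-copy-id yimr@yiprod02
ssh-copy-id yimr@yiprod03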

1. core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/yimr/tmp/hadoop-${user.name}</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://yiprod01:9000</value>
    </property>
</configuration>
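
Note that fs.default.name is the deprecated Hadoop 1.x name of this property; it still works in Hadoop 2.3, but the current name is fs.defaultFS:

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yiprod01:9000</value>
    </property>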

2. hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
         <name>dfs.namenode.secondary.http-address</name>
         <value>yiprod02:9001</value>
    </property>
    <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:/home/yimr/dfs/name</value>
    </property>
    <property>
         <name>dfs.datanode.data.dir</name>
         <value>file:/home/yimr/dfs/data</value>
    </property>
    <property>
         <name>dfs.webhdfs.enabled</name>
         <value>true</value>
    </property>
</configuration>
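
The name and data directories configured above must exist before HDFS starts (the data directory on every datanode); they can be created up front:

mkdir -p /home/yimr/dfs/name /home/yimr/dfs/data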

3. hadoop-env.sh

export JAVA_HOME=/home/yimr/local/jdk

4. mapred-site.xml

<configuration>
    <property>
        <!-- Use YARN as the resource-allocation and task-management framework -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <!-- JobHistory server address -->
        <name>mapreduce.jobhistory.address</name>
        <value>yiprod01:10020</value>
    </property>
    <property>
        <!-- JobHistory web UI address -->
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>yiprod01:19888</value>
    </property>
    <property>
        <!-- Maximum number of streams merged at once when sorting files -->
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
    <property>
        <!-- Number of parallel copies run by a reduce during the shuffle phase -->
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>file:/home/yimr/dfs/mr/system</value>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>file:/home/yimr/dfs/mr/local</value>
    </property>
    <property>
        <!-- Amount of memory each map task requests from the ResourceManager -->
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <property>
        <!-- JVM heap options for each map container (must fit within mapreduce.map.memory.mb) -->
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
    </property>
    <property>
        <!-- Amount of memory each reduce task requests from the ResourceManager -->
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <!-- JVM heap options for each reduce container (must fit within mapreduce.reduce.memory.mb) -->
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1536M</value>
    </property>
    <property>
        <!-- Memory limit, in MB, for the task sort buffer -->
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
    </property>
</configuration>

5. yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>yiprod01:8080</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>yiprod01:8081</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>yiprod01:8082</value>
    </property>
    <property>
        <!-- Total memory each NodeManager can allocate to containers -->
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>yiprod01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>yiprod01:8088</value>
    </property>
</configuration>
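
Because the configuration must be identical on all three servers, one simple way to distribute it from yiprod01 is to copy the files over SSH; this sketch assumes Hadoop is installed under /home/yimr/local/hadoop-2.3.0, the path that appears in the log output in section 7:

for h in yiprod02 yiprod03; do
    scp /home/yimr/local/hadoop-2.3.0/etc/hadoop/*-site.xml \
        /home/yimr/local/hadoop-2.3.0/etc/hadoop/hadoop-env.sh \
        $h:/home/yimr/local/hadoop-2.3.0/etc/hadoop/
done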

6. Format the namenode

If the namenode has never been formatted, starting HDFS fails with:

java.io.IOException: NameNode is not formatted.

Format it on yiprod01 with:

hadoop namenode -format
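
After formatting, the cluster can be started and checked; a minimal sketch, run from yiprod01 with Hadoop's sbin directory on the PATH:

start-dfs.sh
start-yarn.sh
jps

On yiprod01, jps should list NameNode, DataNode, ResourceManager, and NodeManager; yiprod02 additionally runs the SecondaryNameNode.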

7. Troubleshooting

7.1 32-bit native library problem

Symptom:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/08/01 11:59:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/yimr/local/hadoop-2.3.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
yiprod01]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
The authenticity of host 'yiprod01 (192.168.1.131)' can't be established.
RSA key fingerprint is ac:9e:e0:db:d8:7a:29:5c:a1:d4:7f:4c:38:c0:72:30.
Are you sure you want to continue connecting (yes/no)?

64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known

The cause is that the native library bundled with the Hadoop download is compiled for 32-bit by default, which can be confirmed with the file command:

file libhadoop.so.1.0.0

libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
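
On a 64-bit machine this is a mismatch; the system architecture can be checked with:

uname -m    # prints x86_64 on a 64-bit system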

Temporary workaround:

Edit hadoop-env.sh under etc/hadoop and append the following two lines at the end:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_PREFIX/lib"
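
These two lines assume HADOOP_PREFIX points at the installation directory; if it is not already set in your environment, export it first (the path below is the one that appears in the log output above):

export HADOOP_PREFIX=/home/yimr/local/hadoop-2.3.0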

However, the following warning still appears:

14/08/01 11:46:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

At this point Hadoop can start normally; a separate article will explain how to eliminate this warning completely.
