[Hadoop] Pseudo-Distributed Environment Setup and Verification

Hadoop pseudo-distributed environment setup:

Automated deployment script:

#!/bin/bash
set -eux

export APP_PATH=/opt/applications
export APP_NAME=Ares

# Install apt dependency packages
apt-get update -y \
    && apt-get install supervisor -y \
    && apt-get install python-dev python-pip libmysqlclient-dev -y

# Install pip and Python dependencies
pip install --upgrade pip \
    && pip install -r ./build-depends/pip-requirements/requirements.txt

# Install the JDK
tar -xzvf ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz \
    && ln -s jdk1.7.0_60/ jdk

# Configure JAVA environment variables
echo -e '\n' >> /etc/profile
echo '# !!!No Modification, This Section is Auto Generated by '${APP_NAME} >> /etc/profile
echo 'export JAVA_HOME='${APP_PATH}/${APP_NAME}/jdk >> /etc/profile
echo 'export JRE_HOME=${JAVA_HOME}/jre' >> /etc/profile
echo 'export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar' >> /etc/profile
echo 'export PATH=${PATH}:${JAVA_HOME}/bin:${JRE_HOME}/bin' >> /etc/profile
source /etc/profile && java -version

# Install Hadoop
tar -xzvf ./build-depends/hadoop-package/hadoop-2.5.2.tar.gz \
    && ln -s hadoop-2.5.2 hadoop
# Configure JAVA_HOME in hadoop-env.sh
mv ./hadoop/etc/hadoop/hadoop-env.sh ./hadoop/etc/hadoop/hadoop-env.sh.bak \
    && cp -rf ./build-depends/hadoop-conf/hadoop-env.sh ./hadoop/etc/hadoop/ \
    && sed -i "25a export JAVA_HOME=${APP_PATH}/${APP_NAME}/jdk" ./hadoop/etc/hadoop/hadoop-env.sh
# Configure core-site.xml
mv ./hadoop/etc/hadoop/core-site.xml ./hadoop/etc/hadoop/core-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/core-site.xml ./hadoop/etc/hadoop/core-site.xml
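# Illustrative only (assumption): the actual core-site.xml template under
# ./build-depends/hadoop-conf/ is not shown here, but a typical pseudo-distributed
# rendering boils down to the default filesystem pointing at the NameNode RPC address:
#   <property>
#     <name>fs.defaultFS</name>
#     <value>hdfs://${NameNode_HOST}:${NameNode_RPCPort}</value>    <!-- e.g. hdfs://HADOOP-NODE1:9000 -->
#   </property>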
# Configure hdfs-site.xml
mv ./hadoop/etc/hadoop/hdfs-site.xml ./hadoop/etc/hadoop/hdfs-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/hdfs-site.xml ./hadoop/etc/hadoop/hdfs-site.xml
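# Illustrative only (assumption): a minimal pseudo-distributed hdfs-site.xml rendering
# typically sets single-copy replication and the HTTP addresses used by the UIs:
#   dfs.replication                        = ${HDFS_Replication}   (1 on a single node)
#   dfs.namenode.http-address              = ${NameNode_HOST}:${NameNode_HTTP_PORT}
#   dfs.namenode.secondary.http-address    = ${SNameNode_HOST}:${SNameNode_HTTP_PORT}
#   dfs.namenode.secondary.https-address   = ${SNameNode_HOST}:${SNameNode_HTTPS_PORT}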
# Configure mapred-site.xml
python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/mapred-site.xml.template ./hadoop/etc/hadoop/mapred-site.xml
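# Illustrative only (assumption): when running MapReduce on YARN the essential
# mapred-site.xml property is
#   mapreduce.framework.name = yarn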
# Configure yarn-site.xml
mv ./hadoop/etc/hadoop/yarn-site.xml ./hadoop/etc/hadoop/yarn-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/yarn-site.xml ./hadoop/etc/hadoop/yarn-site.xml
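# Illustrative only (assumption): a minimal yarn-site.xml rendering typically contains
#   yarn.resourcemanager.hostname              = ${YARN_RSC_MGR_HOST}
#   yarn.nodemanager.aux-services              = mapreduce_shuffle
#   yarn.resourcemanager.webapp.address        = ${YARN_RSC_MGR_HOST}:${YARN_RSC_MGR_HTTP_PORT}
#   yarn.resourcemanager.webapp.https.address  = ${YARN_RSC_MGR_HOST}:${YARN_RSC_MGR_HTTPS_PORT}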
# slaves file, i.e. the DataNode list
mv ./hadoop/etc/hadoop/slaves ./hadoop/etc/hadoop/slaves.bak
DataNodeList=(`echo ${DataNodeList} | tr ";" "\n"`)
for DataNode in "${DataNodeList[@]}"; do
    echo ${DataNode} >> ./hadoop/etc/hadoop/slaves
done
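# For the single-node deployment below (the run script exports DataNodeList='HADOOP-NODE1'),
# the generated slaves file contains exactly one line: HADOOP-NODE1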

# Configure Hadoop environment variables
echo -e '\n' >> /etc/profile
echo '# !!!No Modification, This Section is Auto Generated by '${APP_NAME} >> /etc/profile
echo 'export HADOOP_HOME='${APP_PATH}/${APP_NAME}/hadoop >> /etc/profile
echo 'export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin' >> /etc/profile
source /etc/profile && hadoop version

# Format the NameNode
# hadoop namenode -format -force
hdfs namenode -format -force

# Start HDFS and YARN
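# Note (assumption, not part of the original script): start-dfs.sh / start-yarn.sh use SSH
# to reach every listed host, including localhost in pseudo-distributed mode, so passwordless
# SSH is expected to be in place beforehand, e.g.:
#   ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
#   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys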
stop-dfs.sh && start-dfs.sh && jps
stop-yarn.sh && start-yarn.sh && jps
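# On a healthy pseudo-distributed node, jps is expected to list (besides Jps itself):
#   NameNode, SecondaryNameNode, DataNode      (after start-dfs.sh)
#   ResourceManager, NodeManager               (after start-yarn.sh)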

# HDFS test
# hadoop fs -put ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz hdfs://HADOOP-NODE1:9000/
hdfs dfs -put ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz hdfs://HADOOP-NODE1:9000/
# hadoop fs -get hdfs://HADOOP-NODE1:9000/jdk-7u60-linux-x64.tar.gz .
hdfs dfs -get hdfs://HADOOP-NODE1:9000/jdk-7u60-linux-x64.tar.gz .
rm -rf jdk-7u60-linux-x64.tar.gz

# MapReduce test
hadoop jar ./hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 5 10

# Word-count test
touch word-count.txt \
    && echo "hello world" >> word-count.txt \
    && echo "hello tom" >> word-count.txt \
    && echo "hello jim" >> word-count.txt \
    && echo "hello kitty" >> word-count.txt \
    && echo "hello baby" >> word-count.txt
# hadoop fs -put word-count.txt hdfs://HADOOP-NODE1:9000/
# hadoop fs -rm hdfs://HADOOP-NODE1:9000/word-count.txt
hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count
hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count/input
# hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count/output
# hadoop fs -rmdir hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -put word-count.txt hdfs://HADOOP-NODE1:9000/word-count/input
hadoop jar ./hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar wordcount hdfs://HADOOP-NODE1:9000/word-count/input hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -ls hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -cat hdfs://HADOOP-NODE1:9000/word-count/output/part-r-00000
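# Expected contents of part-r-00000 for the input generated above (word, tab, count):
#   baby    1
#   hello   5
#   jim     1
#   kitty   1
#   tom     1
#   world   1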

# supervisord configuration files
#cp ${APP_PATH}/supervisor.conf.d/*.conf /etc/supervisor/conf.d/

# start supervisord nodaemon
# /usr/bin/supervisord --nodaemon
#/usr/bin/supervisord

Running the script:

# Usage of the application run command is described here.
export APP_PATH=/opt/applications
export APP_NAME=Ares
export APP_Version=2.5.2

# Single-node pseudo-distributed deployment
#HOSTNAME           IP              HDFS                                YARN
#HADOOP-NODE1       10.20.0.11      NameNode/SNameNode/DataNode         NodeManager/ResourceManager
export NameNode_HOST=HADOOP-NODE1
export NameNode_RPCPort=9000
export NameNode_HTTP_PORT=50070
export SNameNode_HOST=HADOOP-NODE1
export SNameNode_HTTP_PORT=50090
export SNameNode_HTTPS_PORT=50091
export HDFS_Replication=1
export YARN_RSC_MGR_HOST=HADOOP-NODE1
export YARN_RSC_MGR_HTTP_PORT=8088
export YARN_RSC_MGR_HTTPS_PORT=8090
export DataNodeList='HADOOP-NODE1'

mkdir -p ${APP_PATH}/${APP_NAME} \
    && mv ${APP_NAME}-${APP_Version}.zip ${APP_PATH}/${APP_NAME}/ \
    && cd ${APP_PATH}/${APP_NAME}/ \
    && unzip ${APP_NAME}-${APP_Version}.zip \
    && chmod a+x run.sh \
    && ./run.sh
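Once run.sh has finished, the web UIs give an extra sanity check; the ports follow the variables exported above. A quick probe, assuming curl is available on the node:

curl -I http://${NameNode_HOST}:${NameNode_HTTP_PORT}/                    # NameNode web UI
curl -I http://${YARN_RSC_MGR_HOST}:${YARN_RSC_MGR_HTTP_PORT}/cluster     # ResourceManager web UI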