Installing Hadoop with Ambari

Environment preparation:

Three machines: 192.168.1.251-253; 251 = ambari_master; 252 = ambari_slave1; 253 = ambari_slave2

Machine configuration and OS environment

Virtual machines, running RHEL 6.6 / CentOS 6.5

Apply the following settings on all machines (mine are already set up; the checks below show the expected state):

1. Raise the maximum open-file limit

[root@ambari_slave2 ~]# ulimit -Hn
10000

[root@ambari_slave2 ~]# ulimit -Sn
10000

Note:

1. Edit /etc/security/limits.conf

Open it with vi /etc/security/limits.conf and append the following at the end (the values can be chosen to taste; note that limits.conf columns are separated by whitespace, with no "=" sign):

* soft nofile 32768

* hard nofile 65536

2. Edit /etc/profile

Open it with vi /etc/profile and append this line at the end:

ulimit -n 32768

Then log in again for the change to take effect.

Explanation

Editing /etc/profile alone is enough for login shells, but I still recommend editing /etc/security/limits.conf as well: it is the PAM-enforced mechanism, so the new limits apply to every user's new session without any further system changes.
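The two edits in the note above can be scripted. A minimal sketch that stages the limits.conf lines in a scratch file first, so it is safe to run and review before appending to the real files as root (the 32768/65536 values are the ones from the note, not mandated):

```shell
# Stage the nofile limit lines in a scratch file before touching /etc.
tmp_limits=$(mktemp)
cat > "$tmp_limits" <<'EOF'
* soft nofile 32768
* hard nofile 65536
EOF
# As root, apply with:   cat "$tmp_limits" >> /etc/security/limits.conf
# and for login shells:  echo 'ulimit -n 32768' >> /etc/profile
grep nofile "$tmp_limits"
```

After logging in again, `ulimit -Hn` and `ulimit -Sn` should report the new values.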

2. Edit the hosts file
[root@ambari_slave2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.251 ambari_master
192.168.1.252 ambari_slave1
192.168.1.253 ambari_slave2
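Once the file is in place on every node, name resolution can be sanity-checked in one loop. A sketch that only reports OK/MISSING per host and assumes the three hostnames above:

```shell
# Check that each cluster hostname resolves locally; reports, never aborts.
for h in ambari_master ambari_slave1 ambari_slave2; do
  if getent hosts "$h" > /dev/null; then
    echo "$h OK"
  else
    echo "$h MISSING"
  fi
done
```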

3. Set SELinux to disabled
[root@ambari_slave2 ~]# getenforce
Disabled
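If getenforce still reports Enforcing, the usual fix is `setenforce 0` for the running system plus an edit to /etc/selinux/config so it survives reboots. The sed edit, demonstrated here on a scratch copy so the sketch is safe to run anywhere:

```shell
# Demonstrate the SELINUX=disabled edit on a scratch copy of the config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
# On a real node (as root):
#   setenforce 0
#   sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```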

4. Stop the firewall
[root@ambari_slave2 ~]# /etc/init.d/iptables status
iptables: Firewall is not running.

5. Start ntpd
[root@ambari_slave2 ~]# /etc/init.d/ntpd status
ntpd (pid  5858) is running...
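If iptables or ntpd is not yet in the state shown above, these are the RHEL/CentOS 6 commands to get there. Printed as a dry run here; drop the echo to actually execute them as root:

```shell
# Dry-run: print the service/chkconfig commands for the firewall and ntpd.
for cmd in \
  "service iptables stop"  "chkconfig iptables off" \
  "service ntpd start"     "chkconfig ntpd on"; do
  echo "$cmd"
done
```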

6. Set the JDK environment variables

[root@ambari_slave2 ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

7. Configure SSH

[root@ambari_slave2 ~]# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnxCVyAwzM743vB6KB4EVLDZ0+ydsmEtuMHD0ATar8zWqPDuBvGc4un5Fv1mIBCgOt3+GyWbDznACNlDzLkwRkxU8XhhTsRFHaWb9t9rH0N9dDEWbLqE1D70MY+oN7ZMVwDSooUESRx05Eg8szoDPY+JXHF8AWgigNUhesJkMVpshI+GNV/x3a9F2aRvTyk5QibMVcmNGYXdrIxzhX8VDWAWI1soy3vAorteHORzOdzWuPZm78MUYwTA1p7z1h7q4gfG3GIEThkCss72LE1m7mIwgTNeAlxWYXckxzhQC13pS1D2dWKLucQHhVqfU0QW5mPE0f/++oyx9trFr72Aaaw== root@ambari_slave2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAki9g5Sy4WWXkDcz78bU5oq5flWk5JL9UeL62XC0qoapD63SjFep9aRWBSabxWHaENo7G/6ES8CI2kjTIwB1syMC8ropDAx+WbkSoLPwrwqapnK49OtQ0hnTs6QMAHey3ilzWfZxKmnk0yKavFqhbfPaBYps8ewXeGdPFsRaPIJzbInwXMw4/sB1hguA0rR55fs3vJR6Px1RGSt6fq/pxX7Wmug+3JShJras8ucs3F+C491f49dNwhQBdgjHCEFabhXeSZG3ngMBX8sOMhuN19Xg/oIaa2IKX4ckIu/LbwNw5lIc+6l9kVn0Y0BLeHux6gLaQM0EfwMbsvdOK/tGZrQ== root@ambari_slave1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAtZacjGL5llcaJLQZtCKDzqg/CQPjKFRJDmo+bIFBfrlD5jAR78fXnlhxdqg8dvSVnSyU7q73bYrV7U+ym4lqq3J7JMRGuNSBBndnwu90I175w8V4IntPS9tv/oLo9zzsPrnKYmsxXguUahEOJJErImIQ4LPJ3oBUDISxfIjEckjlvkUNThUmOMxSHVwyvpwFBzDWBcYsYJtZJbZYOdNSQyOb3AFOgwkgR+sPj+C+Kdp6yP/Ua3r/yZGGgUR+NFLM8x7Oz236cmJVy0xVFrE3BxYIJDp+VBeWb8bTdI40XCPmRvb0wpLRFy9nj7i1EMzZ7xTvCTZo48oLBsxH/obd+w== root@ambari_master
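The authorized_keys above is the result of generating a key pair on each node and appending every node's public key on every node. A sketch of the generation step (into a scratch directory here so it is harmless to run; on the real nodes use the default ~/.ssh path, and the ssh-copy-id call shown in the comment is one assumed way to do the appending; any method works):

```shell
# Generate a passphrase-less RSA key pair into a scratch directory.
d=$(mktemp -d)
ssh-keygen -t rsa -N '' -q -f "$d/id_rsa"
# The public key is a single "ssh-rsa AAAA... user@host" line:
cut -d' ' -f1 "$d/id_rsa.pub"
# On real nodes:  ssh-keygen -t rsa            (accept the defaults)
#                 ssh-copy-id root@ambari_master   # repeat for every node pair
```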

8. yum -y groupinstall "Development tools"

9. yum -y install mysql*

10. yum -y install ruby* redhat-lsb* snappy*

Creating local yum repositories for Ambari and Hadoop:

1. Create the local repo directories

Pick one server and install the httpd service; the installation automatically creates the /var/www/html directory (I chose 251, ambari_master).
Start httpd:  service httpd start  (or /etc/init.d/httpd start)
              chkconfig httpd on
Create two directories:
mkdir  /var/www/html/ambari
mkdir  /var/www/html/hdp

2. Install createrepo (here it has to be forced, with --force --nodeps)

2.1 [root@ambari_master Packages]# rpm -ivh createrepo-0.9.9-22.el6.noarch.rpm --force --nodeps
warning: createrepo-0.9.9-22.el6.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:createrepo             ########################################### [100%]

2.2 [root@ambari_master hdp]# yum install python*

(The output is long, so it is not pasted here; try it yourself.)

3. Create the Ambari local repo

Extract HDP-UTILS-1.1.0.20-centos6.tar.gz into /var/www/html/ambari:
tar  zxvf  HDP-UTILS-1.1.0.20-centos6.tar.gz  -C  /var/www/html/ambari
Copy the Updates-ambari-2.2.1.0 directory into /var/www/html/ambari:

cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6
mkdir Updates-ambari-2.2.1.0
cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0

Enter /var/www/html/ambari and run: createrepo ./
The Ambari local repo is now ready.

Result:

[root@ambari_master ambari]# createrepo ./
Spawning worker 0 with 50 pkgs
Workers Finished
Gathering worker results

Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@ambari_master ambari]# ls
HDP-UTILS-1.1.0.20  repodata  Updates-ambari-2.2.1.0

4. Create the Hadoop local repo

Copy the HDP-2.4.0 directory into /var/www/html/hdp.
Enter /var/www/html/hdp and run: createrepo ./
The Hadoop local repo is now ready.

Result:

[root@ambari_master ambari]# cd ../hdp/
[root@ambari_master hdp]# createrepo ./
Spawning worker 0 with 179 pkgs
Workers Finished
Gathering worker results

Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@ambari_master hdp]# ls
HDP-2.4.0  repodata

5. Create a local repo from the system DVD

[root@ambari_master ambari]# mount -o loop /mnt/iso/rhel-server-6.6-x86_64-dvd.iso /mnt/cdrom/

[root@ambari_master ambari]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G   11G   36G  24% /
tmpfs                 2.0G   72K  2.0G   1% /dev/shm
/dev/sda1             477M   33M  419M   8% /boot
/dev/mapper/VolGroup-lv_home
                       45G   52M   43G   1% /home
/mnt/iso/rhel-server-6.6-x86_64-dvd.iso
                      3.6G  3.6G     0 100% /mnt/cdrom

[root@ambari_master ambari]# ls /mnt/iso/

rhel-server-6.6-x86_64-dvd.iso
[root@ambari_master ambari]# ls /mnt/cdrom/
EFI      EULA_ja           isolinux       ResilientStorage
EULA     EULA_ko           LoadBalancer   RPM-GPG-KEY-redhat-beta
EULA_de  EULA_pt           media.repo     RPM-GPG-KEY-redhat-release
EULA_en  EULA_zh           Packages       ScalableFileSystem
EULA_es  GPL               README         Server
EULA_fr  HighAvailability  release-notes  TRANS.TBL
EULA_it  images            repodata
[root@ambari_master ambari]# cat /etc/yum.repos.d/myself.repo
[daxiong]
name=daxiong
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

Verification:

[root@ambari_master yum.repos.d]# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HDP-2.4                                                                                                                                                                  | 2.9 kB     00:00    
HDP-2.4/primary_db                                                                                                                                                       |  61 kB     00:00    
HDP-UTILS-1.1.0.20                                                                                                                                                       | 2.9 kB     00:00    
HDP-UTILS-1.1.0.20/primary_db                                                                                                                                            |  31 kB     00:00    
ambari-2.2.1                                                                                                                                                             | 2.9 kB     00:00    
ambari-2.2.1/primary_db                                                                                                                                                  |  31 kB     00:00    
daxiong                                                                                                                                                                  | 4.1 kB     00:00 ...
daxiong/primary_db                                                                                                                                                       | 3.1 MB     00:00 ...
repo id                                                                 repo name                                                                                                         status
HDP-2.4                                                                 HDP-2.4                                                                                                             179
HDP-UTILS-1.1.0.20                                                      Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.20                                                         50
ambari-2.2.1                                                            Ambari 2.2.1                                                                                                         50
daxiong                                                                 daxiong                                                                                                           3,785
repolist: 4,064

6. MySQL configuration

CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'ambari_master' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'ambari_master';
FLUSH PRIVILEGES;

CREATE DATABASE ambari;
CREATE DATABASE hive;

CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'ambari_slave2' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'ambari_slave2';
FLUSH PRIVILEGES;

CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%' WITH GRANT OPTION;
CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost' WITH GRANT OPTION;
CREATE USER 'rangeradmin'@'ambari_master' IDENTIFIED BY 'rangeradmin';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'ambari_master' WITH GRANT OPTION;
FLUSH PRIVILEGES;

CREATE USER 'root'@'%' IDENTIFIED BY 'root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
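Rather than pasting the statements interactively, they can be collected into a .sql file and run in one pass with `mysql -uroot -p < grants.sql`. A minimal sketch that writes just the ambari user's statements (the file name grants.sql is arbitrary):

```shell
# Write the ambari-user grant statements to a script for the mysql CLI.
cat > grants.sql <<'EOF'
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
FLUSH PRIVILEGES;
EOF
wc -l < grants.sql   # -> 3
```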

7. Check the yum repo files (a verification step that can be skipped; I include it here to simplify the later install)

[root@ambari ~]# ls /etc/yum.repos.d/
ambari.repo  HDP.repo  HDP-UTILS.repo  mnt.repo  redhat.repo  rhel-source.repo
[root@ambari ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-2.2.1]
name=Ambari 2.2.1
baseurl=http://192.168.1.253/ambari/
gpgcheck=0
enabled=1

[HDP-UTILS-1.1.0.20]
name=Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.20
baseurl=http://192.168.1.253/ambari/
gpgcheck=0
enabled=1
[root@ambari ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS-1.1.0.20
baseurl=http://192.168.1.253/ambari

path=/
enabled=1
gpgcheck=0
[root@ambari ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.4]
name=HDP-2.4
baseurl=http://192.168.1.253/hdp

path=/
enabled=1
gpgcheck=0
[root@ambari ~]#

8. Install and configure Ambari

On the master node:

yum -y install ambari-server    (this step automatically installs the PostgreSQL database; I use MySQL, so stop it)

service postgresql stop
chkconfig postgresql off

scp /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql root@ambari_slave2:/root

On the MySQL database node:

use ambari;

source /root/Ambari-DDL-MySQL-CREATE.sql;

FLUSH PRIVILEGES;

ambari-server setup    (back on the master node; when prompted for a custom JDK, point it at JAVA_HOME=/usr/local/jdk1.7.0_79)

ambari-server start

cd /usr/share/java
rm -rf  mysql-connector-java.jar
ln -s mysql-connector-java-5.1.17.jar  mysql-connector-java.jar

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar

Point the HDP-2.4 stack definition at the local repos:

vim /var/lib/ambari-server/resources/stacks/HDP/2.4/repos/repoinfo.xml

<os family="redhat6">
    <repo>
      <baseurl>http://192.168.1.253/hdp</baseurl>
      <repoid>HDP-2.4</repoid>
      <reponame>HDP</reponame>
    </repo>
    <repo>
      <baseurl>http://192.168.1.253/ambari</baseurl>
      <repoid>HDP-UTILS-1.1.0.20</repoid>
      <reponame>HDP-UTILS</reponame>
    </repo>
  </os>

After that, the Hadoop cluster and its components can be installed from the web UI. Problems will come up during the web install; I won't go through them one by one here, because the terminal output and the logs usually point at (or outright contain) the answer, and the remaining issues are easy to find online. A few notes:

  1. Disable transparent huge pages (a check Ambari commonly flags):
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
  2. The web UI needs the master node's private key in order to register the agent nodes:

    [root@ambari_master ~]# cat .ssh/id_rsa
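The four echo lines in note 1 only last until reboot; a common way to persist them is appending the same commands to /etc/rc.local. A sketch that stages them in a scratch file first (paths per RHEL/CentOS 6, where both redhat_transparent_hugepage and transparent_hugepage may exist):

```shell
# Stage the four THP-disabling lines; append the file to /etc/rc.local as root.
rc=$(mktemp)
for base in /sys/kernel/mm/redhat_transparent_hugepage /sys/kernel/mm/transparent_hugepage; do
  for knob in enabled defrag; do
    echo "echo never > $base/$knob" >> "$rc"
  done
done
wc -l < "$rc"   # -> 4
```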

  3. More may be added later. If you have questions after reading, contact me directly, QQ: 1591345922
Posted: 2024-10-11 04:59:42
