CDH 5.16.1 Cluster: A Truly Offline Enterprise Deployment

I. Preparation

1. Offline deployment outline

  • MySQL offline deployment
  • CM (Cloudera Manager) offline deployment
  • Parcel file offline repository deployment

2. Planning

Linux version: CentOS 7.2

Node        MySQL    Parcel offline repo   CM service roles                  Big data components
hadoop001   MySQL    Parcel                Alert Publisher, Event Server     NN RM DN NM ZK
hadoop002   -        -                     Alert Publisher, Event Server     DN NM ZK
hadoop003   -        -                     Host Monitor, Service Monitor     DN NM ZK

3. Download sources

The packages needed (they also appear in the file listing under section 10) are: the CM tarball cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz, the CDH parcel CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel together with its .sha1 file and manifest.json, the MySQL 5.7.11 tarball, the JDK 8u45 tarball, and the MySQL JDBC connector jar. Download them in advance and place them in ~/cdh5.16.1 on hadoop001.

II. Cluster node initialization

1. Purchase 3 VMs on Alibaba Cloud

(minimum spec: 2 cores / 8 GB RAM); choose pay-as-you-go billing

CentOS 7.2

2. hosts file on your local (Windows) laptop

Path: C:\Windows\System32\drivers\etc\hosts

39.97.188.249   hadoop001   hadoop001
39.97.225.112   hadoop002   hadoop002
39.97.224.68    hadoop003   hadoop003

Note: use the public IPs of your VMs here.

3. Configure /etc/hosts on all nodes

echo '172.17.144.104 hadoop001' >> /etc/hosts
echo '172.17.144.103 hadoop002' >> /etc/hosts
echo '172.17.144.105 hadoop003' >> /etc/hosts
# verify
cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.17.144.104 hadoop001
172.17.144.103 hadoop002
172.17.144.105 hadoop003

Note: these are the private (internal) IPs.
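
As a quick optional sanity check (a small addition, assuming all three entries are in place on every node), confirm that each hostname resolves and is reachable:

for h in hadoop001 hadoop002 hadoop003; do ping -c 1 $h; done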

4. Disable the firewall and flush its rules on all nodes

Cloud hosts

On cloud hosts, whether Alibaba Cloud or Tencent Cloud, the OS firewall is already disabled, so there is nothing to stop on the servers themselves. We do, however, need to check whether the web access ports have been opened in the security group, and add them ourselves if not.

(1) Open the security group configuration

After entering, click "Configure Rules".

(2) Add a security group rule

Notes:

1. Clicking the blue exclamation mark shows an explanation of each rule.

2. If access is limited to your company network, restrict the authorized object to the proper CIDR range as shown in the screenshot above; if there is no restriction, simply use 0.0.0.0/0.

On-premise servers

It is best to disable the firewall on internal servers at deployment time; if that is not allowed, disable it temporarily and re-enable it once the deployment succeeds.

systemctl stop firewalld
systemctl disable firewalld
iptables -F   

5. Disable SELinux on all nodes

Alibaba Cloud servers already ship with SELinux disabled, so no configuration is needed there.

Your own servers are very likely to have SELinux enabled, in which case it needs to be disabled.

Set SELINUX=disabled in the config file; the change only takes effect after a reboot.

vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
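
If you would rather not edit the file by hand, here is a minimal sketch (assuming the stock CentOS 7 config with SELINUX=enforcing) that makes the change persistent and also drops SELinux to permissive for the current boot:

# make the change persistent (effective after the next reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# switch to permissive mode right now (0 = permissive; a full disable still requires a reboot)
setenforce 0
# verify
getenforce
grep ^SELINUX= /etc/selinux/config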

6. Set a consistent time zone and synchronize clocks on all nodes

Alibaba Cloud already keeps the nodes' time zone and clock in sync.

Below is the time zone / clock synchronization procedure you would run in a company environment.

6.1 Time zone

[root@hadoop001 ~]# timedatectl
      Local time: Tue 2019-05-28 15:37:53 CST
  Universal time: Tue 2019-05-28 07:37:53 UTC
        RTC time: Tue 2019-05-28 15:37:53
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: yes
      DST active: n/a
# read the command's built-in help -- learning to use --help matters far more than searching the web for answers
[root@hadoop001 ~]# timedatectl --help
timedatectl [OPTIONS...] COMMAND ...
Query or change system time and date settings.
-h --help Show this help message
--version Show package version
--no-pager Do not pipe output into a pager
--no-ask-password Do not prompt for password
-H --host=[USER@]HOST Operate on remote host
-M --machine=CONTAINER Operate on local container
--adjust-system-clock Adjust system clock when changing local RTC mode
Commands:
status Show current time settings
set-time TIME Set system time
set-timezone ZONE Set system time zone
list-timezones Show known time zones
set-local-rtc BOOL Control whether RTC is in local time
set-ntp BOOL Control whether NTP is enabled
# list the available time zones
[root@hadoop001 ~]# timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmara
Africa/Bamako
# set the Asia/Shanghai time zone on all nodes
[root@hadoop001 ~]# timedatectl set-timezone Asia/Shanghai
[root@hadoop002 ~]# timedatectl set-timezone Asia/Shanghai
[root@hadoop003 ~]# timedatectl set-timezone Asia/Shanghai

6.2 Clock synchronization

# install ntp on all nodes
[root@hadoop001 ~]# yum install -y ntp
# use hadoop001 as the NTP master
[root@hadoop001 ~]# vi /etc/ntp.conf
#time
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
# fall back to the local hardware clock when the upstream servers are unreachable
server 127.127.1.0 iburst   # local clock
# which subnets are allowed to sync time from this server -- change this to your own internal subnet
restrict 172.17.144.0 mask 255.255.255.0 nomodify notrap
# start ntpd and check its status
[root@hadoop001 ~]# systemctl start ntpd
[root@hadoop001 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-05-11 10:15:00 CST; 11min ago
Main PID: 18518 (ntpd)
CGroup: /system.slice/ntpd.service
        └─18518 /usr/sbin/ntpd -u ntp:ntp -g
May 11 10:15:00 hadoop001 systemd[1]: Starting Network Time Service...
May 11 10:15:00 hadoop001 ntpd[18518]: proto: precision = 0.088 usec
May 11 10:15:00 hadoop001 ntpd[18518]: 0.0.0.0 c01d 0d kern kernel time sync enabled
May 11 10:15:00 hadoop001 systemd[1]: Started Network Time Service.
# verify
[root@hadoop001 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
LOCAL(0) .LOCL. 10 l 726 64 0 0.000 0.000 0.000
# on the other (slave) nodes, stop and disable ntpd
[root@hadoop002 ~]# systemctl stop ntpd
[root@hadoop002 ~]# systemctl disable ntpd
Removed symlink /etc/systemd/system/multi-user.target.wants/ntpd.service.
[root@hadoop002 ~]# /usr/sbin/ntpdate hadoop001
11 May 10:29:22 ntpdate[9370]: adjust time server 172.19.7.96 offset 0.000867 sec
# sync against hadoop001 every day at midnight
[root@hadoop002 ~]# crontab -e
00 00 * * * /usr/sbin/ntpdate hadoop001
[root@hadoop003 ~]# systemctl stop ntpd
[root@hadoop003 ~]# systemctl disable ntpd
Removed symlink /etc/systemd/system/multi-user.target.wants/ntpd.service.
[root@hadoop003 ~]# /usr/sbin/ntpdate hadoop001
11 May 10:29:22 ntpdate[9370]: adjust time server 172.19.7.96 offset 0.000867 sec
# sync against hadoop001 every day at midnight
[root@hadoop003 ~]# crontab -e
00 00 * * * /usr/sbin/ntpdate hadoop001
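
A quick cross-node check -- a minimal sketch, assuming you can SSH from hadoop001 to the other nodes -- to confirm the clocks now agree:

for h in hadoop001 hadoop002 hadoop003; do ssh $h "hostname; date"; done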

7. JDK deployment

mkdir /usr/java
tar -xzvf jdk-8u45-linux-x64.tar.gz -C /usr/java/
# important: fix the owner and group after extracting
chown -R root:root /usr/java/jdk1.8.0_45
[root@hadoop001 cdh5.16.1]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=${JAVA_HOME}/bin:${PATH}
source /etc/profile
which java

If you have many nodes, build a single image template with all of this base setup done, then clone and distribute it (ideally have your ops team handle this). For a small cluster you can also push the JDK to the remaining nodes by hand, as sketched below.
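
A minimal sketch of that manual distribution, assuming root SSH access to hadoop002/hadoop003 and that the JDK tarball sits in ~/cdh5.16.1 on hadoop001 (adjust the filename if yours differs):

for h in hadoop002 hadoop003; do
  scp ~/cdh5.16.1/jdk-8u45-linux-x64.gz $h:/tmp/
  ssh $h "mkdir -p /usr/java && tar -xzf /tmp/jdk-8u45-linux-x64.gz -C /usr/java/ && chown -R root:root /usr/java/jdk1.8.0_45"
  ssh $h "echo 'export JAVA_HOME=/usr/java/jdk1.8.0_45' >> /etc/profile; echo 'export PATH=\${JAVA_HOME}/bin:\$PATH' >> /etc/profile"
done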

8. Offline deployment of MySQL 5.7 on hadoop001

(following production standards)

8.1 Extract and create directories

# extract
[root@hadoop001 cdh5.16.1]# tar xzvf mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz -C /usr/local/
# change into the target directory
[root@hadoop001 cdh5.16.1]# cd /usr/local/
# rename the extracted directory to mysql
[root@hadoop001 local]# mv mysql-5.7.11-linux-glibc2.5-x86_64/ mysql
# create the working directories
[root@hadoop001 local]# mkdir mysql/arch mysql/data mysql/tmp

8.2 Create my.cnf

rm /etc/my.cnf
vim /etc/my.cnf
[client]
port            = 3306
socket          = /usr/local/mysql/data/mysql.sock
default-character-set=utf8mb4

[mysqld]
port            = 3306
socket          = /usr/local/mysql/data/mysql.sock

skip-slave-start

skip-external-locking
key_buffer_size = 256M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
query_cache_size= 32M
max_allowed_packet = 16M
myisam_sort_buffer_size=128M
tmp_table_size=32M

table_open_cache = 512
thread_cache_size = 8
wait_timeout = 86400
interactive_timeout = 86400
max_connections = 600

# Try number of CPU's*2 for thread_concurrency
#thread_concurrency = 32 

#isolation level and default engine
default-storage-engine = INNODB
transaction-isolation = READ-COMMITTED

server-id  = 1739
basedir     = /usr/local/mysql
datadir     = /usr/local/mysql/data
pid-file     = /usr/local/mysql/data/hostname.pid

#open performance schema
log-warnings
sysdate-is-now

binlog_format = ROW
log_bin_trust_function_creators=1
log-error  = /usr/local/mysql/data/hostname.err
log-bin = /usr/local/mysql/arch/mysql-bin
expire_logs_days = 7

innodb_write_io_threads=16

relay-log  = /usr/local/mysql/relay_log/relay-log
relay-log-index = /usr/local/mysql/relay_log/relay-log.index
relay_log_info_file= /usr/local/mysql/relay_log/relay-log.info

log_slave_updates=1
gtid_mode=OFF
enforce_gtid_consistency=OFF

# slave
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=4
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log_recovery=ON

#other logs
#general_log =1
#general_log_file  = /usr/local/mysql/data/general_log.err
#slow_query_log=1
#slow_query_log_file=/usr/local/mysql/data/slow_log.err

#for replication slave
sync_binlog = 500

#for innodb options
innodb_data_home_dir = /usr/local/mysql/data/
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend

innodb_log_group_home_dir = /usr/local/mysql/arch
innodb_log_files_in_group = 4
innodb_log_file_size = 1G
innodb_log_buffer_size = 200M

#adjust the buffer pool size to match production needs
innodb_buffer_pool_size = 2G
#innodb_additional_mem_pool_size = 50M #deprecated in 5.6
tmpdir = /usr/local/mysql/tmp

innodb_lock_wait_timeout = 1000
#innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit = 2

innodb_locks_unsafe_for_binlog=1

#innodb io features: add for mysql5.5.8
performance_schema
innodb_read_io_threads=4
innodb-write-io-threads=4
innodb-io-capacity=200
#purge threads change default(0) to 1 for purge
innodb_purge_threads=1
innodb_use_native_aio=on

#case-sensitive file names and separate tablespace
innodb_file_per_table = 1
lower_case_table_names=1

[mysqldump]
quick
max_allowed_packet = 128M

[mysql]
no-auto-rehash
default-character-set=utf8mb4

[mysqlhotcopy]
interactive-timeout

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

8.3 Create the group and user

[root@hadoop001 local]# groupadd -g 101 dba
[root@hadoop001 local]# useradd -u 514 -g dba -G root -d /usr/local/mysql mysqladmin
[root@hadoop001 local]# id mysqladmin
uid=514(mysqladmin) gid=101(dba) groups=101(dba),0(root)

## There is usually no need to set a password for mysqladmin; just su/sudo to it from root or an LDAP user

8.4 Copy the shell profile skeleton

Copy the skeleton profile files into the mysqladmin user's home directory so that per-user environment variables can be configured in the next step.

cp /etc/skel/.* /usr/local/mysql  

8.5 Configure environment variables

[root@hadoop001 local]# vi mysql/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
export MYSQL_BASE=/usr/local/mysql
export PATH=${MYSQL_BASE}/bin:$PATH

unset USERNAME

#stty erase ^H
# set umask to 022
umask 022
PS1=`uname -n`":"'$USER'":"'$PWD'":>"; export PS1

8.6 Set ownership and permissions, then switch to mysqladmin to install

[root@hadoop001 local]# chown  mysqladmin:dba /etc/my.cnf
[root@hadoop001 local]# chmod  640 /etc/my.cnf
[root@hadoop001 local]# chown -R mysqladmin:dba /usr/local/mysql
[root@hadoop001 local]# chmod -R 755 /usr/local/mysql

8.7 Configure the service and enable it at boot

[root@hadoop001 local]#  cd /usr/local/mysql
# copy the service script into init.d and rename it to mysql
[root@hadoop001 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysql
# make it executable
[root@hadoop001 mysql]# chmod +x /etc/rc.d/init.d/mysql
# remove any stale service registration
[root@hadoop001 mysql]# chkconfig --del mysql
# register the service
[root@hadoop001 mysql]# chkconfig --add mysql
[root@hadoop001 mysql]# chkconfig --level 345 mysql on
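
An optional check to confirm the service is registered for the usual runlevels:

chkconfig --list mysql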

8.8 Install libaio and initialize the MySQL data directory

[root@hadoop001 mysql]# yum -y install libaio
[root@hadoop001 mysql]# su - mysqladmin
Last login: Tue May 28 17:04:49 CST 2019 on pts/0
hadoop001:mysqladmin:/usr/local/mysql:>bin/mysqld \
  --defaults-file=/etc/my.cnf \
  --user=mysqladmin \
  --basedir=/usr/local/mysql/ \
  --datadir=/usr/local/mysql/data/ \
  --initialize

If you pass --initialize-insecure during initialization, a root@localhost account with an empty password is created; with --initialize, root@localhost gets a random password that is written into the log-error file. (In 5.6 the password went into ~/.mysql_secret instead, which is easy to miss if you don't know to look there.)

8.9 Find the temporary password

# grep the error log for the generated password
hadoop001:mysqladmin:/usr/local/mysql/data:>cat hostname.err |grep password
2019-05-28T09:28:40.447701Z 1 [Note] A temporary password is generated for root@localhost: J=<z#diyC4fh

8.10 Start MySQL

hadoop001:mysqladmin:/usr/local/mysql:>/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &
[1] 21740
hadoop001:mysqladmin:/usr/local/mysql:>2019-05-28T09:38:16.127060Z mysqld_safe Logging to '/usr/local/mysql/data/hostname.err'.
2019-05-28T09:38:16.196799Z mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
# press Enter twice to get the prompt back

## exit the mysqladmin user
## find the mysql process IDs
[root@hadoop001 mysql]# ps -ef|grep mysql
mysqlad+ 21740     1  0 17:38 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf
mysqlad+ 22557 21740  0 17:38 pts/0    00:00:00 /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --plugin-dir=/usr/local/mysql/lib/plugin --log-error=/usr/local/mysql/data/hostname.err --pid-file=/usr/local/mysql/data/hostname.pid --socket=/usr/local/mysql/data/mysql.sock --port=3306
root     22609  9194  0 17:39 pts/0    00:00:00 grep --color=auto mysql
## check the listening port for that PID
[root@hadoop001 mysql]# netstat -nlp|grep 22557
# switch back to mysqladmin
[root@hadoop001 mysql]# su - mysqladmin
Last login: Tue May 28 17:24:45 CST 2019 on pts/0
hadoop001:mysqladmin:/usr/local/mysql:>
## check whether mysql is running
hadoop001:mysqladmin:/usr/local/mysql:>service mysql status
MySQL running (22557)[  OK  ]

8.11 Log in and change the password

# log in with the temporary password
hadoop001:mysqladmin:/usr/local/mysql:>mysql -uroot -p'J=<z#diyC4fh'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.11-log

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# reset the password
mysql> alter user root@localhost identified by 'ruozedata123';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'ruozedata123';
# flush privileges
mysql> flush privileges;

8.12 Restart
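
A minimal sketch of the restart, using the SysV service script registered in step 8.7 (run as root, or as mysqladmin if sudo is set up for it):

service mysql restart
service mysql status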

9. Create the CDH metadata database and user, and the amon service database and user

mysql> CREATE DATABASE `cmf`  DEFAULT CHARACTER SET utf8;
mysql> GRANT ALL PRIVILEGES ON cmf.* TO 'cmf'@'%' IDENTIFIED BY 'ruozedata123' ;
mysql> create database amon default character set utf8;
mysql> GRANT ALL PRIVILEGES ON amon.* TO 'amon'@'%' IDENTIFIED BY 'ruozedata123' ;
-- flush privileges
mysql> flush privileges;
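
To confirm that the '%' grants actually work over TCP, a small check you can run on hadoop001 (the only node with a mysql client here); connecting via the hostname exercises the network grant rather than the local socket:

mysql -h hadoop001 -P 3306 -ucmf -pruozedata123 -e "show databases;"
mysql -h hadoop001 -P 3306 -uamon -pruozedata123 -e "show databases;"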

10. Deploy the MySQL JDBC jar

[root@hadoop001 cdh5.16.1]# mkdir -p /usr/share/java
[root@hadoop001 cdh5.16.1]# ls -lh
total 3.5G
-rw-r--r-- 1 root root 2.0G May 15 10:01 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
-rw-r--r-- 1 root root   41 May 14 20:17 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1
-rw-r--r-- 1 root root 803M May 15 09:38 cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz
-rw-r--r-- 1 root root 166M May 14 20:21 jdk-8u45-linux-x64.gz
-rw-r--r-- 1 root root  65K May 14 20:17 manifest.json
-rw-r--r-- 1 root root 523M May 15 09:28 mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
-rw-r--r-- 1 root root 984K May 15 09:10 mysql-connector-java-5.1.47.jar
# the jar must be copied without its version number -- a common pitfall
[root@hadoop001 cdh5.16.1]# cp mysql-connector-java-5.1.47.jar /usr/share/java/mysql-connector-java.jar
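
hadoop001 (the CM server node) definitely needs this jar. If other nodes will later run roles that talk to MySQL (a Hive metastore, for example), copy it to them too -- a sketch assuming root SSH access:

for h in hadoop002 hadoop003; do
  ssh $h "mkdir -p /usr/share/java"
  scp /usr/share/java/mysql-connector-java.jar $h:/usr/share/java/
done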

III. CDH offline deployment

1. Deploy the CM Server and Agents

1.1 Create the directory and extract on all nodes

# the CM tarball must be present on every node (scp it over first if needed)
mkdir /opt/cloudera-manager
tar -xzvf cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz -C /opt/cloudera-manager

1.2 Edit config.ini on all nodes

On every node, point the agent configuration at the server node, hadoop001:

sed -i "s/server_host=localhost/server_host=hadoop001/g" /opt/cloudera-manager/cm-5.16.1/etc/cloudera-scm-agent/config.ini
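
A quick verification on each node:

grep server_host /opt/cloudera-manager/cm-5.16.1/etc/cloudera-scm-agent/config.ini
# expected output: server_host=hadoop001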

1.3 Edit the server database settings on the master node

vi /opt/cloudera-manager/cm-5.16.1/etc/cloudera-scm-server/db.properties
com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=hadoop001
com.cloudera.cmf.db.name=cmf
com.cloudera.cmf.db.user=cmf
com.cloudera.cmf.db.password=ruozedata123
com.cloudera.cmf.db.setupType=EXTERNAL

1.4 Create the cloudera-scm user on all nodes

# create the cloudera-scm system user
useradd --system --home=/opt/cloudera-manager/cm-5.16.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
# change the ownership of cloudera-manager
chown -R cloudera-scm:cloudera-scm /opt/cloudera-manager

1.5 Change the owner and group of cloudera-manager on all nodes

chown -R cloudera-scm:cloudera-scm /opt/cloudera-manager

2. Deploy the offline parcel repository on hadoop001

2.1 Set up the offline parcel repository

mkdir -p /opt/cloudera/parcel-repo
[root@hadoop001 opt]# cd ~/cdh5.16.1/
[root@hadoop001 cdh5.16.1]# ls -lh
total 3.5G
-rw-r--r-- 1 root root 2.0G May 15 10:01 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
-rw-r--r-- 1 root root   41 May 14 20:17 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1
-rw-r--r-- 1 root root 803M May 15 09:38 cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz
-rw-r--r-- 1 root root 166M May 14 20:21 jdk-8u45-linux-x64.gz
-rw-r--r-- 1 root root  65K May 14 20:17 manifest.json
-rw-r--r-- 1 root root 523M May 15 09:28 mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
-rw-r--r-- 1 root root 984K May 15 09:10 mysql-connector-java-5.1.47.jar
[root@hadoop001 cdh5.16.1]# mv CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1 /opt/cloudera/parcel-repo/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha

# important: when moving the .sha1 file, drop the trailing "1" from its name; otherwise CM treats the parcel download as incomplete and keeps re-downloading it
[root@hadoop001 cdh5.16.1]# mv CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel /opt/cloudera/parcel-repo/
[root@hadoop001 cdh5.16.1]# mv manifest.json  /opt/cloudera/parcel-repo/

If you downloaded the parcel over the network, verify the checksum of CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel to make sure the file is not corrupted:

[root@hadoop001 parcel-repo]# cat CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha
703728dfa7690861ecd3a9bcd412b04ac8de7148
# compute the checksum of the downloaded file and compare
[root@hadoop001 parcel-repo]# sha1sum CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
703728dfa7690861ecd3a9bcd412b04ac8de7148  CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
# the values match, so the file is good to use

2.2 Change the owner and group of the directory

chown -R cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/

3. Create the big-data software installation directory on all nodes and set its owner and group

mkdir -p /opt/cloudera/parcels
chown -R cloudera-scm:cloudera-scm /opt/cloudera/

4. Start the Server on hadoop001

4.1 Start the server:
    /opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server start
4.2 In the Alibaba Cloud web console, open port 7180 in the security group for hadoop001.
4.3 Wait about a minute, then open http://hadoop001:7180 (username/password: admin/admin).
4.4 If the page does not load, check the server log and troubleshoot from the errors there.
    The log directory is /opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-server.
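
A couple of quick checks while the server starts (a sketch; netstat comes from the net-tools package, and the exact log file name may differ slightly):

# is the server process running?
ps -ef | grep cloudera-scm-server | grep -v grep
# is the web port listening yet? (can take a minute)
netstat -nlp | grep 7180
# follow the server log
tail -f /opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-server/cloudera-scm-server.log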

5. Start the Agent on all nodes

/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent start
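
To confirm an agent came up on a node (a sketch; the log file name may differ slightly):

ps -ef | grep cloudera-scm-agent | grep -v grep
tail -f /opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-agent/cloudera-scm-agent.log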

6. Everything from here on is done in the web UI

http://hadoop001:7180/

Username/password: admin/admin

7. "Welcome to Cloudera Manager" -- End User License Terms and Conditions: tick the box to accept.

8. "Welcome to Cloudera Manager" -- Which edition do you want to deploy? Choose the free Cloudera Express edition.

9. "Thank you for choosing Cloudera Manager and CDH."

10. Specify hosts for your CDH cluster installation: choose [Currently Managed Hosts] and tick all of them.

11. Select the repository.

12. Cluster installation -- installing the selected parcels.

If the local offline parcel repository is configured correctly, the "Download" phase finishes almost instantly; the remaining phases depend on the number of nodes and the internal network.

13. Inspect hosts for correctness

13.1 Cloudera suggests setting /proc/sys/vm/swappiness to at most 10.
The swappiness value controls how aggressively the OS swaps memory out:
swappiness=0: use physical memory as much as possible before falling back to swap;
swappiness=100: swap aggressively, moving data from memory into swap space promptly.
On servers running mixed workloads, do not disable swap entirely; just lower swappiness.
Temporary change:
sysctl vm.swappiness=10
Permanent change:
cat << EOF >> /etc/sysctl.conf
# Adjust swappiness value
vm.swappiness=10
EOF
13.2 Transparent huge page compaction is enabled, which can cause serious performance problems; it is recommended to disable it.
Temporary change:
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Permanent change:
cat << EOF >> /etc/rc.d/rc.local
# Disable transparent_hugepage
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF
# on CentOS 7.x, /etc/rc.d/rc.local must be made executable
chmod +x /etc/rc.d/rc.local
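
A short verification sketch to confirm both settings after making the changes:

# should print 10
sysctl vm.swappiness
# [never] should be the selected value in both files
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag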

14. Custom services: choose to deploy the ZooKeeper, HDFS and YARN services.

15. Customize the role assignments.

16. Database setup.

Possible reasons the connection test fails:
(1) the MySQL JDBC jar was not placed in /usr/share/java, or the version number was not stripped from its name;
(2) when the cmf and amon databases were created, the grants were not made to '%';
(3) flush privileges; was not run after granting.

17. Review the settings; the defaults are fine.

18. First run.

19. Congratulations!

20. Home page.

IV. Errors

1. Error during the database setup connection test

Error message:

ERROR [email protected]:com.cloudera.server.web.common.JsonResponse:
JsonResponse created with throwable: com.cloudera.server.web.cmf.MessageException:
A package was not selected.


Cause:
The connection test was taking a long time, so I hit the back button and reloaded the page, after which the "package was not selected" exception appeared.
Fix:
Go back to the page where the big-data components are selected and run through the steps again; the connection test then succeeds.

Original article: https://blog.51cto.com/14309075/2401752
