Detailed startup steps for a multi-node HBase cluster (3 or 5 nodes), covering HBase-managed vs. external ZooKeeper

The startup procedure for a multi-node HBase cluster (3 or 5 nodes) splits into two cases:

  1. HBASE_MANAGES_ZK set to false (external ZooKeeper) (recommended)

  2. HBASE_MANAGES_ZK left at its default of true (bundled ZooKeeper)

1. HBASE_MANAGES_ZK set to false (recommended)

  In pseudo-distributed mode, e.g. on a single host such as weekend110:
  HBASE_MANAGES_ZK in hbase-env.sh defaults to true, which tells HBase to run its own bundled ZooKeeper instance. That instance, however, can only serve HBase in standalone or pseudo-distributed mode.

  In fully distributed mode you need your own ZooKeeper ensemble, e.g. across HadoopMaster, HadoopSlave1, and HadoopSlave2.
  With HBASE_MANAGES_ZK=true in hbase-env.sh, HBase starts ZooKeeper as part of itself when it comes up in distributed mode; the ZooKeeper process then appears in jps as HQuorumPeer.
  With HBASE_MANAGES_ZK=false, you must first start ZooKeeper by hand on every node, and only then start HBase on the master node; the HBase master process appears as HMaster (on the HadoopMaster node).
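
  For reference, the settings behind this choice might look as follows. This is a minimal sketch reusing this cluster's hostnames; hbase.zookeeper.quorum must list the nodes on which ZooKeeper actually runs:

# hbase-env.sh
export HBASE_MANAGES_ZK=false

# hbase-site.xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>HadoopMaster,HadoopSlave1,HadoopSlave2</value>
</property>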

  If HBASE_MANAGES_ZK is set to false:
1. On HadoopMaster, start Hadoop first.
2. On HadoopMaster, HadoopSlave1, and HadoopSlave2, start ZooKeeper manually, node by node.
3. Back on HadoopMaster, start HBase. (A consolidated sketch of all three steps follows this list.)
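
  The same three steps can also be driven from HadoopMaster in one go. A sketch, assuming passwordless SSH between the nodes (which start-all.sh and start-hbase.sh already rely on) and the /home/hadoop/app install layout shown in the transcripts below:

#!/bin/bash
# Sketch: bring up Hadoop, then the external ZooKeeper ensemble, then HBase.
/home/hadoop/app/hadoop-2.6.0/sbin/start-all.sh
for host in HadoopMaster HadoopSlave1 HadoopSlave2; do
  ssh "$host" /home/hadoop/app/zookeeper-3.4.6/bin/zkServer.sh start
done
/home/hadoop/app/hbase-1.2.3/bin/start-hbase.sh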

  Step 1. On HadoopMaster, start Hadoop first:
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
1998 Jps
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/02 19:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HadoopMaster]
HadoopMaster: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-namenode-HadoopMaster.out
HadoopSlave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave1.out
HadoopSlave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-HadoopMaster.out
16/11/02 20:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-HadoopMaster.out
HadoopSlave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave2.out
HadoopSlave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave1.out
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
2281 SecondaryNameNode
2124 NameNode
2430 ResourceManager
2736 Jps

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
1877 Jps
[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2003 NodeManager
2199 Jps
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
1893 Jps
[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2195 Jps
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$

  Step 2. On HadoopMaster, HadoopSlave1, and HadoopSlave2, start ZooKeeper manually on each node:
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd zookeeper-3.4.6/
[hadoop@HadoopMaster zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave1 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave2 zookeeper-3.4.6]$ bin/zkServer.sh start
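
  Before starting HBase, it is worth confirming that the ensemble formed a quorum; zkServer.sh has a status subcommand for this. On a healthy three-node ensemble, one node reports Mode: leader and the other two Mode: follower. Run on each node:

[hadoop@HadoopMaster zookeeper-3.4.6]$ bin/zkServer.sh status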

  Step 3. On HadoopMaster, start HBase:
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
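
  The jps output was not captured here. With ZooKeeper managed externally, the expected process names would be roughly the following (illustrative, not captured output; ZooKeeper started via zkServer.sh shows up as QuorumPeerMain, not HQuorumPeer):

# HadoopMaster: HMaster, QuorumPeerMain, NameNode, SecondaryNameNode, ResourceManager
# each slave:   HRegionServer, QuorumPeerMain, DataNode, NodeManager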

  Enter the hbase shell. In this setup only HadoopMaster can launch it with a bare hbase command; on the slaves the launcher is not on the PATH (see below):
[hadoop@HadoopMaster hbase-1.2.3]$ hbase shell
2016-11-02 20:07:31,288 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-1.2.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0>
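
  From this prompt, a couple of built-in shell commands are enough to sanity-check the cluster; for example, status reports the number of live region servers and list shows the (initially empty) set of user tables:

hbase(main):001:0> status
hbase(main):002:0> list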

[hadoop@HadoopSlave1 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave2 hadoop-2.6.0]$
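
  The "command not found" on the slaves only means the hbase launcher is not on their PATH. The slaves do have HBase installed (they run the region servers), so the shell should also start on them when invoked by its full path, for example:

[hadoop@HadoopSlave1 hadoop-2.6.0]$ /home/hadoop/app/hbase-1.2.3/bin/hbase shell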

Exit the hbase shell:
hbase(main):001:0> exit
[hadoop@HadoopMaster hbase-1.2.3]$

2. HBASE_MANAGES_ZK left at its default of true

  In pseudo-distributed mode, e.g. on a single host such as weekend110 or djt002:
HBASE_MANAGES_ZK in hbase-env.sh defaults to true, which tells HBase to run its own bundled ZooKeeper instance.
That instance, however, can only serve HBase in standalone or pseudo-distributed mode.

  In fully distributed mode you need your own ZooKeeper ensemble, e.g. across HadoopMaster, HadoopSlave1, and HadoopSlave2.
  With HBASE_MANAGES_ZK=true in hbase-env.sh, HBase starts ZooKeeper as part of itself when it comes up in distributed mode; the ZooKeeper process then appears in jps as HQuorumPeer.
  With HBASE_MANAGES_ZK=false, you must first start ZooKeeper by hand on every node, and only then start HBase on the master node; the HBase master process appears as HMaster (on the HadoopMaster node).
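
  Relative to section 1, the only flag that has to change is in hbase-env.sh; hbase.zookeeper.quorum should still list the nodes that are to host the managed ZooKeeper instances:

# hbase-env.sh
export HBASE_MANAGES_ZK=true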

  If HBASE_MANAGES_ZK is true:
1. On HadoopMaster, start Hadoop first.
2. Then start HBase; it brings up its managed ZooKeeper instances by itself.

  Step 1. On HadoopMaster, start Hadoop first:
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
1998 Jps
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/02 19:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HadoopMaster]
HadoopMaster: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-namenode-HadoopMaster.out
HadoopSlave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave1.out
HadoopSlave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-HadoopMaster.out
16/11/02 20:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-HadoopMaster.out
HadoopSlave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave2.out
HadoopSlave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave1.out
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
2281 SecondaryNameNode
2124 NameNode
2430 ResourceManager
2736 Jps

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
1877 Jps
[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2003 NodeManager
2199 Jps
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
1893 Jps
[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2195 Jps
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$

  Step 2. Then start HBase:
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
HadoopSlave2: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave2.out
HadoopSlave1: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave1.out
HadoopMaster: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopMaster.out
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
3201 Jps
2281 SecondaryNameNode
2951 HQuorumPeer
2124 NameNode
2430 ResourceManager
3013 HMaster
[hadoop@HadoopMaster hbase-1.2.3]$

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2336 HRegionServer
2003 NodeManager
2396 Jps
2257 HQuorumPeer
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2254 HQuorumPeer
2451 Jps
2333 HRegionServer
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$

  Enter the hbase shell. As before, only HadoopMaster can launch it with a bare hbase command; on the slaves the launcher is not on the PATH:
[hadoop@HadoopMaster hbase-1.2.3]$ hbase shell
2016-11-02 20:07:31,288 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-1.2.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0>

[hadoop@HadoopSlave1 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave2 hadoop-2.6.0]$

Exit the hbase shell:
hbase(main):001:0> exit
[hadoop@HadoopMaster hbase-1.2.3]$
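
  For completeness: shutdown runs in the reverse order of startup. A minimal sketch, assuming the same layout. Stop HBase first (with HBASE_MANAGES_ZK=true this also stops the HQuorumPeer instances); in the external-ZooKeeper setup, additionally stop ZooKeeper on every node; finally stop Hadoop:

[hadoop@HadoopMaster hbase-1.2.3]$ bin/stop-hbase.sh
[hadoop@HadoopMaster zookeeper-3.4.6]$ bin/zkServer.sh stop    # external-ZooKeeper mode only, on every node
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/stop-all.sh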

  
