First, make sure Hadoop is already installed on your machine. After that, all you need to do is download and configure HBase.
Step 1: Download HBase from http://archive.apache.org/dist/hbase/1.2.6/ and choose hbase-1.2.6-bin.tar.gz
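For example, the tarball can be fetched from the command line (a minimal sketch, assuming wget is available; curl works just as well):

# Download the HBase 1.2.6 binary tarball into the current directory
wget http://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz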
Step 2: Extract HBase to a directory of your choice
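A typical way to do this (the target directory /Users/chong/opt below just mirrors the paths that appear in the shell output later; replace it with your own):

# Extract the tarball into the chosen installation directory
tar -zxvf hbase-1.2.6-bin.tar.gz -C /Users/chong/opt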
Step 3: Edit the configuration files (go into the conf directory)
Step 3.1: hbase-env.sh
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home
# I am using the ZooKeeper instance bundled with HBase, so this must be set to true
export HBASE_MANAGES_ZK=true
Step 3.2: hbase-site.xml
The HDFS path in hbase.rootdir must match the NameNode address configured in Hadoop's core-site.xml.
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
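For reference, the matching entry in Hadoop's core-site.xml should look roughly like this (a sketch of a typical pseudo-distributed setup; older Hadoop releases use fs.default.name instead of fs.defaultFS):

<property>
  <!-- The NameNode address; hbase.rootdir above must point at the same host:port -->
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>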
Step 4: Start HBase
Hadoop must be started first, because HBase relies on Hadoop's HDFS service.
Step 4.1: Start Hadoop
./start-all.sh
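Optionally, confirm that HDFS is up before moving on (assuming the hdfs command is on your PATH):

# List the HDFS root; if this succeeds, the NameNode is reachable
hdfs dfs -ls /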
Step 4.2: Start HBase
./start-hbase.sh
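If startup succeeded, the HBase Master web UI should also be reachable (16010 is the default master UI port for HBase 1.x; adjust if you changed it):

# Quick check that the HMaster web UI is responding
curl http://localhost:16010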
Step 4.3: If everything above ran correctly, jps should show output like the following.
27780 NameNode
27860 DataNode
28839 HMaster
29754 Jps
28941 HRegionServer
27965 SecondaryNameNode
28383 HQuorumPeer
Step 4.4: Open the HBase shell to test
Gogo:bin my$ ./hbase shell
2018-07-08 12:17:44,820 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/chong/opt/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/chong/opt/hadoop-2.8.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 3.0000 average load
hbase(main):002:0>
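As a further smoke test, you can create a throwaway table, write a row, and read it back inside the HBase shell (the table name test and column family cf are just examples):

create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'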
Original article: https://www.cnblogs.com/gogolee/p/9279691.html