Accessing HBase from Eclipse on Ubuntu 14.10

Environment

  64-bit Ubuntu 14.10, Hadoop 2.5.0, HBase 0.99.0

Preparing the Environment

  1 Install Hadoop 2.5.0; for reference see http://www.cnblogs.com/liuchangchun/p/4097286.html

  2 Install HBase 0.99.0; for reference see http://www.cnblogs.com/liuchangchun/p/4096891.html

  3 Install Eclipse

Creating a Java Project

  1 Launch Eclipse and create a new Java project named "MyHBase". Right-click the project root and choose "Properties" -> "Java Build Path" -> "Libraries" -> "Add External JARs", then add all jar files from the lib subdirectory of the extracted HBase root directory to the project's classpath.
  2 Following the same procedure as in step 1, add the hbase-site.xml configuration file of the HBase instance you are connecting to to the project's classpath. An example of such a configuration file is shown below.
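
(This is a minimal sketch, assuming a cluster whose ZooKeeper quorum runs on localhost at the default client port 2181; replace both values with those of your own cluster.)

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

With the jars and the configuration file on the classpath, the following test program exercises the basic client API: it creates a table, inserts a few rows, reads a single row, scans the whole table, and deletes a row.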

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {         

    private static Configuration conf = null;
     /**
      * Initialize the configuration
     */
     static {
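         // HBaseConfiguration.create() loads hbase-default.xml and hbase-site.xml from the classpath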
         conf = HBaseConfiguration.create();
     }   

    /**
     * Create a table
     */
    public static void creatTable(String tableName, String[] familys) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (admin.tableExists(tableName)) {
            System.out.println("table already exists!");
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tableName);
            for(int i=0; i<familys.length; i++){
                tableDesc.addFamily(new HColumnDescriptor(familys[i]));
            }
            admin.createTable(tableDesc);
            System.out.println("create table " + tableName + " ok.");
        }
    }      

    /**
     * Delete a table
     */
    public static void deleteTable(String tableName) throws Exception {
       try {
           HBaseAdmin admin = new HBaseAdmin(conf);
           admin.disableTable(tableName);
           admin.deleteTable(tableName);
           System.out.println("delete table " + tableName + " ok.");
       } catch (MasterNotRunningException e) {
           e.printStackTrace();
       } catch (ZooKeeperConnectionException e) {
           e.printStackTrace();
       }
    }      

    /**
     * Insert a row
     */
    public static void addRecord (String tableName, String rowKey, String family, String qualifier, String value)
            throws Exception{
        try {
            HTable table = new HTable(conf, tableName);
            Put put = new Put(Bytes.toBytes(rowKey));
            put.add(Bytes.toBytes(family),Bytes.toBytes(qualifier),Bytes.toBytes(value));
            table.put(put);
            System.out.println("insert recored " + rowKey + " to table " + tableName +" ok.");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }      

    /**
     * Delete a row
     */
    public static void delRecord (String tableName, String rowKey) throws IOException{
        HTable table = new HTable(conf, tableName);
        List<Delete> list = new ArrayList<Delete>();
        Delete del = new Delete(rowKey.getBytes());
        list.add(del);
        table.delete(list);
        System.out.println("del record " + rowKey + " ok.");
    }      

    /**
     * Get a single row
     */
    public static void getOneRecord (String tableName, String rowKey) throws IOException{
        HTable table = new HTable(conf, tableName);
        Get get = new Get(rowKey.getBytes());
        Result rs = table.get(get);
        for(KeyValue kv : rs.raw()){
            System.out.print(new String(kv.getRow()) + " " );
            System.out.print(new String(kv.getFamily()) + ":" );
            System.out.print(new String(kv.getQualifier()) + " " );
            System.out.print(kv.getTimestamp() + " " );
            System.out.println(new String(kv.getValue()));
        }
    }      

    /**
     * Display all rows
     */
    public static void getAllRecord (String tableName) {
        try{
             HTable table = new HTable(conf, tableName);
             Scan s = new Scan();
             ResultScanner ss = table.getScanner(s);
             for(Result r:ss){
                 for(KeyValue kv : r.raw()){
                    System.out.print(new String(kv.getRow()) + " ");
                    System.out.print(new String(kv.getFamily()) + ":");
                    System.out.print(new String(kv.getQualifier()) + " ");
                    System.out.print(kv.getTimestamp() + " ");
                    System.out.println(new String(kv.getValue()));
                 }
             }
        } catch (IOException e){
            e.printStackTrace();
        }
    }      

    public static void main(String[] args) {
        try {
            String tablename = "scores";
            String[] familys = {"grade", "course"};
            HBaseTest.creatTable(tablename, familys);      

            //add record zkb
            HBaseTest.addRecord(tablename,"zkb","grade","","5");
            HBaseTest.addRecord(tablename,"zkb","course","","90");
            HBaseTest.addRecord(tablename,"zkb","course","math","97");
            HBaseTest.addRecord(tablename,"zkb","course","art","87");
            //add record  baoniu
            HBaseTest.addRecord(tablename,"baoniu","grade","","4");
            HBaseTest.addRecord(tablename,"baoniu","course","math","89");      

            System.out.println("===========get one record========");
            HBaseTest.getOneRecord(tablename, "zkb");      

            System.out.println("===========show all record========");
            HBaseTest.getAllRecord(tablename);      

            System.out.println("===========del one record========");
            HBaseTest.delRecord(tablename, "baoniu");
            HBaseTest.getAllRecord(tablename);      

            System.out.println("===========show all record========");
            HBaseTest.getAllRecord(tablename);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
} 
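
Note that the listing above uses the HBaseAdmin, HTable, and KeyValue client classes, which still work against HBase 0.99 but are deprecated in later releases. If you move to HBase 1.0 or newer, the Connection-based API is the recommended entry point. The following is a minimal sketch of the same create/put/scan flow with that API (HBaseConnectionTest is an illustrative class name, not part of the original example):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseConnectionTest {

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // A Connection is heavyweight and thread-safe; create it once and reuse it
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            TableName tableName = TableName.valueOf("scores");

            // Create the table if it does not exist yet
            try (Admin admin = connection.getAdmin()) {
                if (!admin.tableExists(tableName)) {
                    HTableDescriptor desc = new HTableDescriptor(tableName);
                    desc.addFamily(new HColumnDescriptor("grade"));
                    desc.addFamily(new HColumnDescriptor("course"));
                    admin.createTable(desc);
                }
            }

            // Table instances are lightweight and not thread-safe; obtain one per use
            try (Table table = connection.getTable(tableName)) {
                Put put = new Put(Bytes.toBytes("zkb"));
                put.addColumn(Bytes.toBytes("course"), Bytes.toBytes("math"), Bytes.toBytes("97"));
                table.put(put);

                // Scan the whole table and print every cell
                try (ResultScanner scanner = table.getScanner(new Scan())) {
                    for (Result r : scanner) {
                        for (Cell cell : r.rawCells()) {
                            System.out.println(Bytes.toString(CellUtil.cloneRow(cell)) + " "
                                    + Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
                                    + Bytes.toString(CellUtil.cloneQualifier(cell)) + " "
                                    + cell.getTimestamp() + " "
                                    + Bytes.toString(CellUtil.cloneValue(cell)));
                        }
                    }
                }
            }
        }
    }
}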

  3 Start Hadoop and HBase before running the program. If everything is set up correctly, output like the following is printed:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
create table scores ok.
insert record zkb to table scores ok.
insert record zkb to table scores ok.
insert record zkb to table scores ok.
insert record zkb to table scores ok.
insert record baoniu to table scores ok.
insert record baoniu to table scores ok.
===========get one record========
zkb course: 1416917870482 90
zkb course:art 1416917872071 87
zkb course:math 1416917871719 97
zkb grade: 1416917869799 5
===========show all record========
baoniu course:math 1416917874500 89
baoniu grade: 1416917874139 4
zkb course: 1416917870482 90
zkb course:art 1416917872071 87
zkb course:math 1416917871719 97
zkb grade: 1416917869799 5
===========del one record========
del record baoniu ok.
zkb course: 1416917870482 90
zkb course:art 1416917872071 87
zkb course:math 1416917871719 97
zkb grade: 1416917869799 5
===========show all record========
zkb course: 1416917870482 90
zkb course:art 1416917872071 87
zkb course:math 1416917871719 97
zkb grade: 1416917869799 5
