Tephra Apache HBase

Tephra provides globally consistent transaction support on top of Apache HBase. HBase offers strongly consistent, ACID operations at the row and region level, but gives up support for operations that span regions. This leaves application developers to do significant work to ensure consistency across region boundaries. Tephra adds global transactions that can span regions, tables, and multiple RPCs, which simplifies application development.

Example code:

// Imports assumed for this example (Apache Tephra with the HBase 0.98-era client API):
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

import com.google.common.base.Throwables;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.tephra.TransactionContext;
import org.apache.tephra.TransactionFailureException;
import org.apache.tephra.distributed.TransactionServiceClient;
import org.apache.tephra.hbase.TransactionAwareHTable;

/**
 * A transactional secondary-index table.
 */
public class SecondaryIndexTable {
  private byte[] secondaryIndex;
  private TransactionAwareHTable transactionAwareHTable;
  private TransactionAwareHTable secondaryIndexTable;
  private TransactionContext transactionContext;
  private final TableName secondaryIndexTableName;
  private static final byte[] secondaryIndexFamily =
    Bytes.toBytes("secondaryIndexFamily");
  private static final byte[] secondaryIndexQualifier = Bytes.toBytes("r");
  private static final byte[] DELIMITER = new byte[] {0};

  public SecondaryIndexTable(TransactionServiceClient transactionServiceClient,
                             HTable hTable, byte[] secondaryIndex) {
    secondaryIndexTableName =
      TableName.valueOf(hTable.getName().getNameAsString() + ".idx");
    HTable secondaryIndexHTable = null;
    HBaseAdmin hBaseAdmin = null;
    try {
      hBaseAdmin = new HBaseAdmin(hTable.getConfiguration());
      if (!hBaseAdmin.tableExists(secondaryIndexTableName)) {
        hBaseAdmin.createTable(new HTableDescriptor(secondaryIndexTableName));
      }
      secondaryIndexHTable = new HTable(hTable.getConfiguration(),
                                        secondaryIndexTableName);
    } catch (Exception e) {
      Throwables.propagate(e);
    } finally {
      try {
        if (hBaseAdmin != null) {
          hBaseAdmin.close();
        }
      } catch (Exception e) {
        Throwables.propagate(e);
      }
    }

    this.secondaryIndex = secondaryIndex;
    this.transactionAwareHTable = new TransactionAwareHTable(hTable);
    this.secondaryIndexTable = new TransactionAwareHTable(secondaryIndexHTable);
    // Both tables join the same transaction context, so data and index writes
    // commit or roll back together.
    this.transactionContext = new TransactionContext(transactionServiceClient,
                                                     transactionAwareHTable,
                                                     secondaryIndexTable);
  }

  public Result get(Get get) throws IOException {
    return get(Collections.singletonList(get))[0];
  }

  public Result[] get(List<Get> gets) throws IOException {
    try {
      transactionContext.start();
      Result[] result = transactionAwareHTable.get(gets);
      transactionContext.finish();
      return result;
    } catch (Exception e) {
      try {
        transactionContext.abort();
      } catch (TransactionFailureException e1) {
        throw new IOException("Could not rollback transaction", e1);
      }
    }
    return null;
  }

  public Result[] getByIndex(byte[] value) throws IOException {
    try {
      transactionContext.start();
      // Index rows are laid out as: qualifier + 0x00 + value + 0x00 + row key,
      // so scan the half-open prefix range
      // [qualifier 0x00 value 0x00, qualifier 0x00 value 0x01).
      byte[] startRow = Bytes.add(secondaryIndex, DELIMITER,
                                  Bytes.add(value, DELIMITER));
      byte[] stopRow = Bytes.add(secondaryIndex, DELIMITER,
                                 Bytes.add(value, new byte[] {1}));
      Scan scan = new Scan(startRow, stopRow);
      scan.addColumn(secondaryIndexFamily, secondaryIndexQualifier);
      ResultScanner indexScanner = secondaryIndexTable.getScanner(scan);

      ArrayList<Get> gets = new ArrayList<Get>();
      for (Result result : indexScanner) {
        for (Cell cell : result.listCells()) {
          // Each index cell's value is the row key of the data table.
          gets.add(new Get(cell.getValue()));
        }
      }
      Result[] results = transactionAwareHTable.get(gets);
      transactionContext.finish();
      return results;
    } catch (Exception e) {
      try {
        transactionContext.abort();
      } catch (TransactionFailureException e1) {
        throw new IOException("Could not rollback transaction", e1);
      }
    }
    return null;
  }

  public void put(Put put) throws IOException {
    put(Collections.singletonList(put));
  }

  public void put(List<Put> puts) throws IOException {
    try {
      transactionContext.start();
      ArrayList<Put> secondaryIndexPuts = new ArrayList<Put>();
      for (Put put : puts) {
        List<Put> indexPuts = new ArrayList<Put>();
        Set<Map.Entry<byte[], List<KeyValue>>> familyMap = put.getFamilyMap().entrySet();
        for (Map.Entry<byte[], List<KeyValue>> family : familyMap) {
          for (KeyValue value : family.getValue()) {
            // byte[] arrays must be compared by content, not by reference.
            if (Bytes.equals(value.getQualifier(), secondaryIndex)) {
              byte[] secondaryRow = Bytes.add(value.getQualifier(),
                                              DELIMITER,
                                              Bytes.add(value.getValue(),
                                                        DELIMITER,
                                                        value.getRow()));
              Put indexPut = new Put(secondaryRow);
              indexPut.add(secondaryIndexFamily, secondaryIndexQualifier, put.getRow());
              indexPuts.add(indexPut);
            }
          }
        }
        secondaryIndexPuts.addAll(indexPuts);
      }
      transactionAwareHTable.put(puts);
      secondaryIndexTable.put(secondaryIndexPuts);
      transactionContext.finish();
    } catch (Exception e) {
      try {
        transactionContext.abort();
      } catch (TransactionFailureException e1) {
        throw new IOException("Could not rollback transaction", e1);
      }
    }
  }
}
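The index row keys built in put() above concatenate the indexed qualifier, the cell value, and the original row key, separated by a zero byte. The layout can be illustrated with plain byte arrays and no HBase dependency; the class and helper names below are illustrative only, not part of the Tephra API:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

/**
 * Byte-level illustration of the secondary-index row-key layout:
 * qualifier + 0x00 + indexed value + 0x00 + original row key.
 */
public class IndexRowKey {
  private static final byte[] DELIMITER = new byte[] {0};

  // Minimal stand-in for org.apache.hadoop.hbase.util.Bytes.add:
  // concatenates the given byte arrays in order.
  public static byte[] concat(byte[]... parts) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (byte[] p : parts) {
      out.write(p, 0, p.length);
    }
    return out.toByteArray();
  }

  // Builds the same key shape as the put() method in the example.
  public static byte[] buildIndexRow(byte[] qualifier, byte[] value, byte[] row) {
    return concat(qualifier, DELIMITER, value, DELIMITER, row);
  }

  public static void main(String[] args) {
    byte[] key = buildIndexRow("email".getBytes(StandardCharsets.UTF_8),
                               "a@b.io".getBytes(StandardCharsets.UTF_8),
                               "row1".getBytes(StandardCharsets.UTF_8));
    System.out.println(key.length);  // prints 17: 5 + 1 + 6 + 1 + 4
  }
}
```

Note that a value containing the 0x00 delimiter byte can produce ambiguous keys; the example relies on indexed values not containing that byte.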

Date: 2024-10-11 04:06:17
