MapReduce atop Apache Phoenix (A First Look at ScanPlan)

When querying Phoenix data with MapReduce or Hive, how are the partitions (input splits) determined?
A look at the PhoenixInputFormat source makes it clear:

    public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
        Configuration configuration = context.getConfiguration();
        // Compile the SELECT statement into a QueryPlan (a ScanPlan for a plain scan)
        QueryPlan queryPlan = this.getQueryPlan(context, configuration);
        // One KeyRange per region boundary covered by the query
        List<KeyRange> allSplits = queryPlan.getSplits();
        // Wrap each per-region scan group into an InputSplit
        List<InputSplit> splits = this.generateSplits(queryPlan, allSplits);
        return splits;
    }

getQueryPlan compiles the SELECT statement into a QueryPlan, which in practice is the ScanPlan subclass. One step inside getQueryPlan deserves attention:
queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
If the HBase table has multiple regions, this call breaks the single Scan into one Scan per region, so each region becomes one input split. The logic is similar to MapReduce directly on HBase; only the implementation differs, because it goes through Phoenix's QueryPlan rather than the HBase client API.
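
generateSplits then turns those per-region scan groups into input splits. Below is a simplified sketch of that helper, based on the Phoenix 4.x source (the exact signature and the Preconditions checks may differ between versions):

    private List<InputSplit> generateSplits(QueryPlan qplan, List<KeyRange> splits) throws IOException {
        List<InputSplit> psplits = new ArrayList<>(splits.size());
        // qplan.getScans() groups the scans by region; each group becomes one PhoenixInputSplit,
        // i.e. one map task per region touched by the query
        for (List<Scan> scans : qplan.getScans()) {
            psplits.add(new PhoenixInputSplit(scans));
        }
        return psplits;
    }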

The following example makes this process concrete.

Creating the Phoenix table

Pre-split the table into 4 regions: [-∞, CS), [CS, EU), [EU, NA), [NA, +∞)

CREATE TABLE TEST (HOST VARCHAR NOT NULL PRIMARY KEY, DESCRIPTION VARCHAR) SPLIT ON ('CS','EU','NA');
upsert into test(host, description) values ('CS11', 'cccccccc');
upsert into test(host, description) values ('EU11', 'eeeddddddddd');
upsert into test(host, description) values ('NA11', 'nnnnneeeddddddddd');
0: jdbc:phoenix:localhost> select * from test;
+-------+--------------------+
| HOST  |    DESCRIPTION     |
+-------+--------------------+
| CS11  | cccccccc           |
| EU11  | eeeddddddddd       |
| NA11  | nnnnneeeddddddddd  |
+-------+--------------------+

Peeking into the ScanPlan

import org.apache.hadoop.hbase.client.Scan;
import org.apache.log4j.BasicConfigurator;
import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
import org.apache.phoenix.jdbc.PhoenixStatement;

import java.io.IOException;
import java.sql.*;
import java.util.List;

public class LocalPhoenix {
    public static void main(String[] args) throws SQLException, IOException {
        BasicConfigurator.configure();

        Statement stmt = null;
        ResultSet rs = null;

        Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
        stmt = con.createStatement();
        PhoenixStatement pstmt = (PhoenixStatement)stmt;
        // Compile the query without executing it; for a plain SELECT this yields a ScanPlan
        QueryPlan queryPlan = pstmt.optimizeQuery("select * from TEST");
        // The same call PhoenixInputFormat makes: materializes the per-region scans
        queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());

        Scan scan = queryPlan.getContext().getScan();
        // One inner List<Scan> per region touched by the query
        List<List<Scan>> scans = queryPlan.getScans();

        for (List<Scan> sl : scans) {
            System.out.println();
            for (Scan s : sl) {
                System.out.print(s);
            }
        }

        con.close();

    }
}

The four scans are shown below. Note that their startRow/stopRow boundaries line up with the split points CS, EU, and NA, so each Scan covers exactly one region:

{"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"CS","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["ALL"]},"caching":100,"maxVersions":1,"timeRange":[0,1523338217847]}
{"loadColumnFamiliesOnDemand":null,"startRow":"CS","stopRow":"EU","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["ALL"]},"caching":100,"maxVersions":1,"timeRange":[0,1523338217847]}
{"loadColumnFamiliesOnDemand":null,"startRow":"EU","stopRow":"NA","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["ALL"]},"caching":100,"maxVersions":1,"timeRange":[0,1523338217847]}
{"loadColumnFamiliesOnDemand":null,"startRow":"NA","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["ALL"]},"caching":100,"maxVersions":1,"timeRange":[0,1523338217847]}Disconnected from the target VM, address: '127.0.0.1:63406', transport: 'socket'

Original article: https://www.cnblogs.com/luweiseu/p/8783253.html
