[Exception] Caused by: org.apache.phoenix.coprocessor.HashJoinCacheNotFoundException

1 Detailed exception

Caused by: org.apache.phoenix.coprocessor.HashJoinCacheNotFoundException: ERROR 900 (HJ01): Hash Join cache not found joinId: 948789376099633279. The cache might have expired and have been removed.
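Background on the error: Phoenix executes joins as broadcast hash joins by default. The smaller (build) side of the join is serialized, shipped to every region server that holds data for the larger side, and kept in a server-side cache keyed by the joinId shown in the message. Each region probes that cache while scanning the larger side, so if a scan is still running after the cache entry's time-to-live has elapsed, the coprocessor raises HJ01. Besides raising the TTL as described below, a minimal sketch of an alternative workaround (connection URL and table names BIG_FACT/DIM are assumptions, not from the original post) is to bypass the server cache with Phoenix's documented sort-merge join hint:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SortMergeJoinWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Default plan: DIM is broadcast into each region server's cache
            // and BIG_FACT is scanned against it; a scan that outlives
            // phoenix.coprocessor.maxServerCacheTimeToLiveMs triggers HJ01.
            // The USE_SORT_MERGE_JOIN hint avoids the server cache entirely,
            // at the cost of sorting both join inputs.
            String sql = "SELECT /*+ USE_SORT_MERGE_JOIN */ f.id, d.name "
                       + "FROM BIG_FACT f JOIN DIM d ON f.dim_id = d.id";
            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getString(2));
                }
            }
        }
    }
}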


2 Information found

https://community.hortonworks.com/questions/149867/orgapachephoenixcoprocessorhashjoincachenotfoundex.html

The thread suggests increasing this region server parameter (its default is 30000 ms, i.e. 30 seconds) to resolve the problem:

phoenix.coprocessor.maxServerCacheTimeToLiveMs

3 Modify the parameter and restart HBase

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>300000000</value>
</property>
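A note on the magnitude: 300000000 ms is roughly 3.5 days, far above the 30000 ms default. The value only needs to comfortably exceed the runtime of the slowest join, since the cache entry must survive from the moment the build side is shipped until the last region finishes probing it; the cached data occupies region server memory for as long as it lives, so avoid setting it higher than necessary.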

4 Also update the client-side hbase-site.xml with the same property; rerunning the query then succeeds
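One way to confirm the client actually picks up the edited file: the Phoenix JDBC driver builds its settings through Hadoop's HBaseConfiguration, which loads hbase-site.xml from the classpath. A minimal sketch (assuming the phoenix-client and hbase jars are on the classpath; the class name is hypothetical) that prints the value the client will see:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PrintCacheTtl {
    public static void main(String[] args) {
        // HBaseConfiguration.create() layers hbase-default.xml and then
        // hbase-site.xml found on the classpath -- the same lookup the
        // Phoenix JDBC driver performs when it opens a connection.
        Configuration conf = HBaseConfiguration.create();
        // Falls back to 30000 (the Phoenix default) when the property is
        // absent, i.e. when the edited hbase-site.xml is not on the classpath.
        System.out.println(conf.get(
            "phoenix.coprocessor.maxServerCacheTimeToLiveMs", "30000"));
    }
}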

Original post: https://www.cnblogs.com/QuestionsZhang/p/11254523.html
