hive-jdbc
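Before the client below will compile, the Hive JDBC driver must be on the classpath. A minimal Maven sketch (the version shown is only an example; pick one matching your Hive installation):

```xml
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>0.13.1</version>
</dependency>
```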

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.log4j.Logger;

public class HiveJdbcClient {

    private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";
    private static String url = "jdbc:hive://192.168.1.134:10000/default";
    private static String user = "hive";
    private static String password = "";
    private static String sql = "";
    private static ResultSet res;
    private static final Logger log = Logger.getLogger(HiveJdbcClient.class);

    public static void main(String[] args) {
        try {
            Class.forName(driverName);
            Connection conn = DriverManager.getConnection(url, user, password);
            Statement stmt = conn.createStatement();

            // Name of the table to create
            String tableName = "user_order";

            /** Step 1: drop the table if it already exists **/
            sql = "drop table " + tableName;
            // DDL returns no ResultSet, so use execute() rather than executeQuery()
            stmt.execute(sql);

            /** Step 2: create the table if it does not exist **/
            sql = "create table " + tableName
                    + " (order_id int, user_id int, create_time string)"
                    + " row format delimited fields terminated by ','";
            stmt.execute(sql);

            // Run "show tables"
            sql = "show tables '" + tableName + "'";
            System.out.println("Running:" + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"show tables\":");
            if (res.next()) {
                System.out.println(res.getString(1));
            }

            // Run "describe table"
            sql = "describe " + tableName;
            System.out.println("Running:" + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"describe table\":");
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }

            // Run "load data into table"
            String filepath = "/home/order.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            System.out.println("Running:" + sql);
            stmt.execute(sql);

            // Run "select * query"
            sql = "select * from " + tableName;
            System.out.println("Running:" + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"select * query\":");
            while (res.next()) {
                System.out.println(res.getInt(1) + "\t" + res.getString(2));
            }

            // Run a regular Hive query (aggregate)
            sql = "select count(1) from " + tableName;
            System.out.println("Running:" + sql);
            res = stmt.executeQuery(sql);
            System.out.println("Result of \"regular hive query\":");
            while (res.next()) {
                System.out.println(res.getString(1));
            }

            conn.close();
            conn = null;
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            log.error("Driver class not found!", e);
        } catch (SQLException e) {
            e.printStackTrace();
            log.error("Connection error!", e);
        }
    }
}
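Note that the client uses `executeQuery()` only for statements that actually return rows (`select`, `show`, `describe`) and plain `execute()` for DDL/DML such as `drop`, `create`, and `load`, since the Hive driver can reject `executeQuery()` on statements that produce no ResultSet. A small hypothetical helper (not part of the original code) makes that dispatch rule explicit:

```java
import java.util.Locale;

// Hypothetical helper: decide whether a Hive statement should go through
// Statement.executeQuery (returns a ResultSet) or Statement.execute (DDL/DML).
public class HiveStatementKind {

    /** Returns true when the statement is expected to produce a ResultSet. */
    public static boolean returnsResultSet(String sql) {
        // Look only at the first keyword of the statement
        String verb = sql.trim().toLowerCase(Locale.ROOT).split("\\s+")[0];
        switch (verb) {
            case "select":
            case "show":
            case "describe":
                return true;
            default: // drop, create, load, insert, ...
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(returnsResultSet("select * from user_order")); // true
        System.out.println(returnsResultSet("drop table user_order"));    // false
    }
}
```

With this rule, every statement in the client above lands on the method that matches what the driver expects.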

Date: 2024-10-14 11:54:44
