[Hive-Tutorial] Querying and Inserting Data

Querying and Inserting Data

The Hive query operations are documented in Select, and the insert operations are documented in Inserting data into Hive Tables from queries and Writing data into the filesystem from queries.

Simple Query

For all the active users, one can use a query of the following form:

INSERT OVERWRITE TABLE user_active
SELECT user.*
FROM user
WHERE user.active = 1;

Note that unlike SQL, we always insert the results into a table. We will illustrate later how the user can inspect these results and even dump them to a local file. You can also run the following query on Hive CLI:

SELECT user.*
FROM user
WHERE user.active = 1;

Internally, the query results are written to a temporary file and displayed on the Hive client side.

Partition Based Query

What partitions to use in a query is determined automatically by the system on the basis of where clause conditions on partition columns. For example, in order to get all the page_views in the month of 03/2008 referred from domain xyz.com, one could write the following query:

INSERT OVERWRITE TABLE xyz_com_page_views
SELECT page_views.*
FROM page_views
WHERE page_views.date >= '2008-03-01' AND page_views.date <= '2008-03-31' AND
      page_views.referrer_url like '%xyz.com';

Note that page_views.date is used here because the table (above) was defined with PARTITIONED BY(date DATETIME, country STRING); if you name your partition column something different, don't expect .date to do what you think!
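For reference, a minimal sketch of such a table definition (the column list is illustrative; the full page_view DDL appears earlier in the tutorial):

CREATE TABLE page_views(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                ip STRING)
PARTITIONED BY(date DATETIME, country STRING)
STORED AS SEQUENCEFILE;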

Joins

In order to get a demographic breakdown (by gender) of the page views on 2008-03-03, one would need to join the page_view table and the user table on the userid column. This can be accomplished with a join as shown in the following query:

INSERT OVERWRITE TABLE pv_users
SELECT pv.*, u.gender, u.age
FROM user u JOIN page_view pv ON (pv.userid = u.id)
WHERE pv.date = '2008-03-03';

In order to do outer joins, the user can qualify the join with LEFT OUTER, RIGHT OUTER or FULL OUTER keywords to indicate the kind of outer join (left preserved, right preserved or both sides preserved). For example, in order to do a full outer join in the query above, the corresponding syntax would look like the following:

INSERT OVERWRITE TABLE pv_users
SELECT pv.*, u.gender, u.age
FROM user u FULL OUTER JOIN page_view pv ON (pv.userid = u.id)
WHERE pv.date = '2008-03-03';

In order to check the existence of a key in another table, the user can use LEFT SEMI JOIN, as illustrated by the following example. (Note that in a LEFT SEMI JOIN the right-hand table may only be referenced in the ON clause, so the date filter appears there rather than in WHERE.)

INSERT OVERWRITE TABLE pv_users
SELECT u.*
FROM user u LEFT SEMI JOIN page_view pv ON (pv.userid = u.id AND pv.date = '2008-03-03');

In order to join more than one table, the user can use the following syntax:

INSERT OVERWRITE TABLE pv_friends
SELECT pv.*, u.gender, u.age, f.friends
FROM page_view pv JOIN user u ON (pv.userid = u.id) JOIN friend_list f ON (u.id = f.uid)
WHERE pv.date = '2008-03-03';

Note that Hive only supports equi-joins. Also it is best to put the largest table on the rightmost side of the join to get the best performance.

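When one of the tables is small enough to fit in memory, a map-side join avoids the shuffle entirely. Below is a sketch using Hive's MAPJOIN hint; the hint is not part of the original example and is shown here purely as an illustration:

INSERT OVERWRITE TABLE pv_users
SELECT /*+ MAPJOIN(u) */ pv.*, u.gender, u.age
FROM user u JOIN page_view pv ON (pv.userid = u.id)
WHERE pv.date = '2008-03-03';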

Aggregations

In order to count the number of distinct users by gender one could write the following query:

INSERT OVERWRITE TABLE pv_gender_sum
SELECT pv_users.gender, count(DISTINCT pv_users.userid)
FROM pv_users
GROUP BY pv_users.gender;

Multiple aggregations can be done at the same time; however, no two aggregations can have different DISTINCT columns. For example, the following is possible:

INSERT OVERWRITE TABLE pv_gender_agg
SELECT pv_users.gender, count(DISTINCT pv_users.userid), count(*), sum(DISTINCT pv_users.userid)
FROM pv_users
GROUP BY pv_users.gender;

however, the following query is not allowed:

INSERT OVERWRITE TABLE pv_gender_agg
SELECT pv_users.gender, count(DISTINCT pv_users.userid), count(DISTINCT pv_users.ip)
FROM pv_users
GROUP BY pv_users.gender;
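A common workaround (a sketch, not part of the original tutorial) is to split the aggregation into one query per DISTINCT column; the target tables pv_gender_uid_agg and pv_gender_ip_agg are hypothetical:

INSERT OVERWRITE TABLE pv_gender_uid_agg
SELECT pv_users.gender, count(DISTINCT pv_users.userid)
FROM pv_users
GROUP BY pv_users.gender;

-- hypothetical second table for the per-IP counts
INSERT OVERWRITE TABLE pv_gender_ip_agg
SELECT pv_users.gender, count(DISTINCT pv_users.ip)
FROM pv_users
GROUP BY pv_users.gender;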

Multi Table/File Inserts

The output of the aggregations or simple selects can be further sent to multiple tables or even to Hadoop DFS files (which can then be manipulated using HDFS utilities). For example, if along with the gender breakdown one needed to find the breakdown of unique page views by age, one could accomplish that with the following query:

FROM pv_users
INSERT OVERWRITE TABLE pv_gender_sum
    SELECT pv_users.gender, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.gender
INSERT OVERWRITE DIRECTORY '/user/data/tmp/pv_age_sum'
    SELECT pv_users.age, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.age;

The first insert clause sends the results of the first group by to a Hive table, while the second one sends the results to an HDFS directory.
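The written files can be inspected without leaving Hive, since the CLI passes dfs commands through to Hadoop. A sketch, using the directory from the query above:

hive> dfs -cat /user/data/tmp/pv_age_sum/*;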

Dynamic-Partition Insert

In the previous examples, the user has to know which partition to insert into, and only one partition can be inserted by one insert statement. If you want to load into multiple partitions, you have to use a multi-insert statement, as illustrated below.

FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country='US')
       SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip WHERE pvs.country = 'US'
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country='CA')
       SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip WHERE pvs.country = 'CA'
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country='UK')
       SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip WHERE pvs.country = 'UK';

In order to load data into all country partitions on a particular day, you have to add an insert statement for each country in the input data. This is very inconvenient since you have to have a priori knowledge of the list of countries that exist in the input data, and create the partitions beforehand. If the list changes for another day, you have to modify your insert DML as well as the partition creation DDLs. It is also inefficient since each insert statement may be turned into a MapReduce job.

Dynamic-partition insert (or multi-partition insert) is designed to solve this problem by dynamically determining which partitions should be created and populated while scanning the input table. This is a newly added feature that is only available from version 0.6.0. In a dynamic partition insert, the input column values are evaluated to determine which partition a given row should be inserted into. If that partition has not been created, it is created automatically. Using this feature, you need only one insert statement to create and populate all necessary partitions. In addition, since there is only one insert statement, there is only one corresponding MapReduce job. This significantly improves performance and reduces the Hadoop cluster workload compared to the multiple-insert case.

Below is an example of loading data to all country partitions using one insert statement:

FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country)
       SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip, pvs.country;

There are several syntactic differences from the multi-insert statement:

  • country appears in the PARTITION specification, but with no value associated. In this case, country is a dynamic partition column. On the other hand, dt has a value associated with it, which means it is a static partition column. If a column is a dynamic partition column, its value comes from the corresponding input column. Currently we only allow dynamic partition columns to be the last column(s) in the partition clause, because the partition column order indicates its hierarchical order (meaning dt is the root partition, and country is the child partition). You cannot specify a partition clause with (dt, country='US'), because that would mean updating all date partitions whose country sub-partition is 'US'.
  • An additional pvs.country column is added in the select statement. This is the corresponding input column for the dynamic partition column. Note that you do not need to add an input column for the static partition column because its value is already known in the PARTITION clause. Note that the dynamic partition values are selected by ordering, not name, and taken as the last columns from the select clause.

Semantics of the dynamic partition insert statement:

  • When non-empty partitions already exist for the dynamic partition columns (e.g., country='CA' exists under some dt root partition), they will be overwritten if the dynamic partition insert sees the same value (say 'CA') in the input data. This is in line with the 'insert overwrite' semantics. However, if the partition value 'CA' does not appear in the input data, the existing partition will not be overwritten.
  • Since a Hive partition corresponds to a directory in HDFS, the partition value has to conform to the HDFS path format (URI in Java). Any character having a special meaning in URI (e.g., '%', ':', '/', '#') will be escaped with '%' followed by 2 bytes of its ASCII value.
  • If the input column is of a type different from STRING, its value will first be converted to STRING before being used to construct the HDFS path.
  • If the input column value is NULL or an empty string, the row will be put into a special partition, whose name is controlled by the hive parameter hive.exec.default.partition.name. The default value is __HIVE_DEFAULT_PARTITION__. Basically this partition will contain all "bad" rows whose values are not valid partition names. The caveat of this approach is that the bad value is lost and replaced by __HIVE_DEFAULT_PARTITION__ if you select it in Hive. JIRA HIVE-1309 is a solution to let the user specify a "bad file" to retain the input partition column values as well.
  • Dynamic partition insert is potentially a resource hog, in that it could generate a large number of partitions in a short time. To guard against this, we define three parameters:
    • hive.exec.max.dynamic.partitions.pernode (default value being 100) is the maximum number of dynamic partitions that can be created by each mapper or reducer. If one mapper or reducer creates more than the threshold, a fatal error will be raised from the mapper/reducer (through a counter) and the whole job will be killed.
    • hive.exec.max.dynamic.partitions (default value being 1000) is the total number of dynamic partitions that can be created by one DML statement. If no single mapper/reducer exceeds its limit but the total number of dynamic partitions does, an exception is raised at the end of the job, before the intermediate data are moved to the final destination.
    • hive.exec.max.created.files (default value being 100000) is the maximum total number of files created by all mappers and reducers. It is implemented by each mapper/reducer updating a Hadoop counter whenever a new file is created. If the total exceeds hive.exec.max.created.files, a fatal error is thrown and the job is killed.
  • Another situation we want to protect against is the user accidentally specifying all partitions to be dynamic partitions without specifying one static partition, when the original intention was just to overwrite the sub-partitions of one root partition. We define the parameter hive.exec.dynamic.partition.mode=strict to prevent the all-dynamic case: in strict mode, you have to specify at least one static partition. The default mode is strict. In addition, the parameter hive.exec.dynamic.partition=true/false controls whether to allow dynamic partition at all; the default value is false. A configuration sketch follows this list.
  • In Hive 0.6, dynamic partition insert does not work with hive.merge.mapfiles=true or hive.merge.mapredfiles=true, so it internally turns off the merge parameters. Merging files in dynamic partition inserts is supported in Hive 0.7 (see JIRA HIVE-1307 for details).
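Putting these parameters together, a session for the dynamic-partition insert above might look like the following sketch (the limit value is illustrative, not a recommendation; strict mode is satisfied because dt is static):

hive> set hive.exec.dynamic.partition=true;
hive> set hive.exec.dynamic.partition.mode=strict;
hive> set hive.exec.max.dynamic.partitions.pernode=100;
hive> FROM page_view_stg pvs
      INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country)
             SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip, pvs.country;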

Troubleshooting and best practices:

  • As stated above, when too many dynamic partitions are created by a particular mapper/reducer, a fatal error is raised and the job is killed. The error message looks something like:

        hive> set hive.exec.dynamic.partition.mode=nonstrict;
        hive> FROM page_view_stg pvs
              INSERT OVERWRITE TABLE page_view PARTITION(dt, country)
                     SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip,
                            from_unixtime(pvs.viewTime, 'yyyy-MM-dd') ds, pvs.country;
    ...
    2010-05-07 11:10:19,816 Stage-1 map = 0%,  reduce = 0%
    [Fatal Error] Operator FS_28 (id=41): fatal error. Killing the job.
    Ended Job = job_201005052204_28178 with errors
    ...

    The problem with this is that one mapper will take a random set of rows, and it is very likely that the number of distinct (dt, country) pairs will exceed the limit of hive.exec.max.dynamic.partitions.pernode. One way around it is to group the rows by the dynamic partition columns in the mapper and distribute them to the reducers where the dynamic partitions will be created. In this case the number of distinct dynamic partitions per reducer will be significantly reduced. The above example query could be rewritten to:

    hive> set hive.exec.dynamic.partition.mode=nonstrict;
    hive> FROM page_view_stg pvs
          INSERT OVERWRITE TABLE page_view PARTITION(dt, country)
                 SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip,
                        from_unixtime(pvs.viewTime, 'yyyy-MM-dd') ds, pvs.country
                 DISTRIBUTE BY ds, country;

    This query will generate a MapReduce job rather than a map-only job. The SELECT clause will be converted to a plan for the mappers, and the output will be distributed to the reducers based on the value of the (ds, country) pairs. The INSERT clause will be converted to a plan in the reducers, which writes to the dynamic partitions.

Inserting into Local Files

In certain situations you may want to write the output into a local file so that you could load it into an Excel spreadsheet. This can be accomplished with the following command:

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/pv_gender_sum'
SELECT pv_gender_sum.*
FROM pv_gender_sum;
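The directory will contain one or more plain data files, with columns typically separated by Hive's default ^A delimiter. The Hive CLI's ! shell escape offers a quick way to peek at them; the file name 000000_0 here is illustrative:

hive> !cat /tmp/pv_gender_sum/000000_0;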

Sampling

The sampling clause allows users to write queries for samples of the data instead of the whole table. Currently the sampling is done on the columns that are specified in the CLUSTERED BY clause of the CREATE TABLE statement. In the following example we choose the 3rd bucket out of the 32 buckets of the pv_gender_sum table:

INSERT OVERWRITE TABLE pv_gender_sum_sample
SELECT pv_gender_sum.*
FROM pv_gender_sum TABLESAMPLE(BUCKET 3 OUT OF 32);
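Sampling behaves as described only if the table was bucketed when it was created. A minimal DDL sketch follows; the column list is an assumption for illustration, with the bucketing column and bucket count matching the example above:

CREATE TABLE pv_gender_sum (userid INT, gender STRING)
CLUSTERED BY (userid) INTO 32 BUCKETS;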

In general the TABLESAMPLE syntax looks like:

TABLESAMPLE(BUCKET x OUT OF y)

y has to be a multiple or divisor of the number of buckets in that table as specified at table creation time. A bucket is chosen if bucket_number modulo y is equal to x. So in the above example the following tablesample clause

TABLESAMPLE(BUCKET 3 OUT OF 16)

would pick out the 3rd and 19th buckets. The buckets are numbered starting from 0.

On the other hand the tablesample clause

TABLESAMPLE(BUCKET 3 OUT OF 64 ON userid)

would pick out half of the 3rd bucket.

Union All

The language also supports UNION ALL. For example, suppose there are two different tables that track which user has published a video and which user has published a comment; the following query joins the results of a UNION ALL with the user table to create a single annotated stream for all the video publishing and comment publishing events:

INSERT OVERWRITE TABLE actions_users
SELECT u.id, actions.date
FROM (
    SELECT av.uid AS uid, av.date AS date
    FROM action_video av
    WHERE av.date = '2008-06-03'
    UNION ALL
    SELECT ac.uid AS uid, ac.date AS date
    FROM action_comment ac
    WHERE ac.date = '2008-06-03'
    ) actions JOIN users u ON (u.id = actions.uid);

Array Operations

Array columns in tables can be declared as follows:

CREATE TABLE array_table (int_array_column ARRAY<INT>);

Assuming that pv.friends is of the type ARRAY<INT> (i.e. it is an array of integers), the user can get a specific element in the array by its index as shown in the following command:

SELECT pv.friends[2]
FROM page_views pv;

The select expression gets the third item in the pv.friends array.

The user can also get the length of the array using the size function as shown below:

SELECT pv.userid, size(pv.friends)
FROM page_view pv;

Map (Associative Arrays) Operations

Maps provide collections similar to associative arrays. Such structures can currently only be created programmatically; we will be extending this soon. For the purpose of the current example, assume that pv.properties is of the type map<string, string>, i.e. it is an associative array from strings to strings. Accordingly, the following query:

INSERT OVERWRITE TABLE page_views_map
SELECT pv.userid, pv.properties['page type']
FROM page_views pv;

can be used to select the 'page type' property from the page_views table.

Similar to arrays, the size function can also be used to get the number of elements in a map as shown in the following query:

SELECT size(pv.properties)
FROM page_view pv;

Custom Map/Reduce Scripts

Users can also plug in their own custom mappers and reducers in the data stream by using features natively supported in the Hive language. For example, in order to run a custom mapper script (map_script) and a custom reducer script (reduce_script), the user can issue the following command, which uses the TRANSFORM clause to embed the mapper and reducer scripts.

Note that columns will be transformed to strings and delimited by TAB before being fed to the user script, and the standard output of the user script will be treated as TAB-separated string columns. User scripts can output debug information to standard error, which will be shown on the Hadoop task detail page.

FROM (
     FROM pv_users
     MAP pv_users.userid, pv_users.date
     USING 'map_script'
     AS dt, uid
     CLUSTER BY dt) map_output
INSERT OVERWRITE TABLE pv_users_reduced
     REDUCE map_output.dt, map_output.uid
     USING 'reduce_script'
     AS date, count;

Sample map script (weekday_mapper.py):

import sys
import datetime

# Hive feeds the transform script TAB-separated columns on stdin, one row per line.
for line in sys.stdin:
  line = line.strip()
  userid, unixtime = line.split('\t')
  # Map the unix timestamp to an ISO weekday (1 = Monday .. 7 = Sunday).
  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
  # Emit TAB-separated output columns, as Hive expects (see the note above).
  print('\t'.join([userid, str(weekday)]))
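Before such a script can be referenced in USING, it has to be shipped to the cluster. A sketch of how this is typically done from the Hive CLI, reusing the pv_users example (the output column names are illustrative):

hive> add FILE weekday_mapper.py;
hive> FROM pv_users
      SELECT TRANSFORM(pv_users.userid, pv_users.date)
      USING 'python weekday_mapper.py'
      AS userid, weekday;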

Of course, both MAP and REDUCE are "syntactic sugar" for the more general select transform. The inner query could also have been written as such:

SELECT TRANSFORM(pv_users.userid, pv_users.date) USING 'map_script' AS dt, uid CLUSTER BY dt FROM pv_users;

Schema-less map/reduce: if there is no "AS" clause after "USING map_script", Hive assumes the output of the script contains two parts: a key, which is everything before the first tab, and a value, which is everything after the first tab. Note that this is different from specifying "AS key, value", because in that case value would only contain the portion between the first tab and the second tab if there are multiple tabs.

In this way, we allow users to migrate old map/reduce scripts without knowing the schema of the map output. The user still needs to know the reduce output schema, because it has to match what is in the table that we are inserting into. In the schema-less form, the mapper's output columns are therefore named key and value:

FROM (
    FROM pv_users
    MAP pv_users.userid, pv_users.date
    USING 'map_script'
    CLUSTER BY key) map_output
INSERT OVERWRITE TABLE pv_users_reduced
    REDUCE map_output.key, map_output.value
    USING 'reduce_script'
    AS date, count;

Distribute By and Sort By: Instead of specifying "cluster by", the user can specify "distribute by" and "sort by", so the partition columns and sort columns can be different. The usual case is that the partition columns are a prefix of sort columns, but that is not required.

FROM (
    FROM pv_users
    MAP pv_users.userid, pv_users.date
    USING 'map_script'
    AS c1, c2, c3
    DISTRIBUTE BY c2
    SORT BY c2, c1) map_output
INSERT OVERWRITE TABLE pv_users_reduced
    REDUCE map_output.c1, map_output.c2, map_output.c3
    USING 'reduce_script'
    AS date, count;

Co-Groups

Amongst the user community using map/reduce, cogroup is a fairly common operation wherein the data from multiple tables are sent to a custom reducer such that the rows are grouped by the values of certain columns on the tables. With the UNION ALL operator and the CLUSTER BY specification, this can be achieved in the Hive query language in the following way. Suppose we wanted to cogroup the rows from the action_video and action_comment tables on the uid column and send them to the 'reduce_script' custom reducer; the following syntax can be used:

FROM (
     FROM (
             FROM action_video av
             SELECT av.uid AS uid, av.id AS id, av.date AS date
             UNION ALL
             FROM action_comment ac
             SELECT ac.uid AS uid, ac.id AS id, ac.date AS date
     ) union_actions
     SELECT union_actions.uid, union_actions.id, union_actions.date
     CLUSTER BY union_actions.uid) map
INSERT OVERWRITE TABLE actions_reduced
     SELECT TRANSFORM(map.uid, map.id, map.date) USING 'reduce_script' AS (uid, id, reduced_val);
