1. Create a table
(1) First, create a table stored as a tab-delimited text file:
CREATE TABLE u_data (userid INT, movieid INT, rating INT, unixtime STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
(2) Create a table called invites with two columns and a partition column called ds. The partition column is a virtual column; it is not part of the data itself but is derived from the partition that a particular dataset is loaded into.
CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);
(3) Create a partitioned, bucketed table:
CREATE TABLE page_view(viewTime INT, userid BIGINT, page_url STRING, referrer_url STRING,
    ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
STORED AS SEQUENCEFILE;
Note: In the example above, the table is clustered by a hash function of userid into 32 buckets. Within each bucket the data is sorted in increasing order of viewTime. Such an organization allows the user to do efficient sampling on the clustered column, in this case userid. The sorting property allows internal operators to take advantage of the better-known data structure while evaluating queries with greater efficiency.
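Because page_view is bucketed on userid, a query can sample a single bucket instead of scanning the whole table. A minimal sketch (bucket 3 is an arbitrary choice, and it assumes the page_view table above has been populated):
SELECT s.userid, s.viewTime FROM page_view TABLESAMPLE(BUCKET 3 OUT OF 32 ON userid) s; -- reads roughly one of the 32 userid buckets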
2. Alter and drop tables
(1) Rename a table
ALTER TABLE events RENAME TO 3koobecaf;
(2) Add columns to a table
ALTER TABLE pokes ADD COLUMNS (new_col INT);
(3) Add a column with a column comment
ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');
(4) Replace the table's columns
ALTER TABLE invites REPLACE COLUMNS (foo INT, bar STRING, baz INT COMMENT 'baz replaces new_col2'); //This replaces the table's columns with the three listed here; any other existing columns are dropped.
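After adding or replacing columns, the resulting schema can be checked with DESCRIBE (an optional verification step, not part of the original example):
DESCRIBE invites; -- lists the current columns plus the ds partition column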
(5) Drop a table
DROP TABLE pokes;
Drop a table partition: ALTER TABLE pv_users DROP PARTITION (ds='2008-08-08');
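To confirm what was removed, Hive's SHOW commands can be used (an optional check; it assumes the pv_users table exists as above):
SHOW TABLES; -- pokes should no longer be listed
SHOW PARTITIONS pv_users; -- the ds='2008-08-08' partition should be gone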
3. Load local data into a Hive table
(1) Load u.data into the table that was just created:
LOAD DATA LOCAL INPATH '<path>/u.data' OVERWRITE INTO TABLE u_data;
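If the target table is partitioned, as invites is above, the LOAD statement must also name the partition. A sketch where both the file name and the ds value are placeholders:
LOAD DATA LOCAL INPATH '<path>/invites.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15'); -- file name and ds value are illustrative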
(2) Count the number of rows in table u_data:
SELECT COUNT(*) FROM u_data;
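A quick, illustrative way to inspect a few of the loaded rows:
SELECT * FROM u_data LIMIT 10; -- returns only the first 10 rows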
4. Load data from HDFS into a Hive table
LOAD DATA INPATH '/examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
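Without the LOCAL keyword, the file is moved within HDFS into the table's directory, and omitting OVERWRITE appends to the table instead of replacing its contents. A sketch (the path is a placeholder):
LOAD DATA INPATH '/examples/files/kv2.txt' INTO TABLE pokes; -- appends to pokes; with OVERWRITE the existing contents would be replaced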
5. Export data from a Hive table to the local filesystem and to HDFS
(1) Export table data to the local filesystem
INSERT OVERWRITE LOCAL DIRECTORY '/home/liujiyu/hivetolocal' SELECT a.* FROM inc a;
(2) Export table data to HDFS
INSERT OVERWRITE DIRECTORY '/user/liujiyu/hivetolocal' SELECT a.* FROM inc a;
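Both exports can also be combined into a single pass over the source table with Hive's multi-insert form; a sketch reusing the same directories:
FROM inc a -- one scan of inc feeds both output directories
INSERT OVERWRITE LOCAL DIRECTORY '/home/liujiyu/hivetolocal' SELECT a.*
INSERT OVERWRITE DIRECTORY '/user/liujiyu/hivetolocal' SELECT a.*;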
6. JOIN operations
FROM pokes t1 JOIN invites t2 ON (t1.bar = t2.bar) INSERT OVERWRITE TABLE events SELECT t1.bar, t1.foo, t2.foo;
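A LEFT OUTER JOIN variant keeps pokes rows that have no matching invites row (a sketch built on the same tables; unmatched t2.foo values come back as NULL):
FROM pokes t1 LEFT OUTER JOIN invites t2 ON (t1.bar = t2.bar) INSERT OVERWRITE TABLE events SELECT t1.bar, t1.foo, t2.foo;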