1. hadoop fs -ls lists files in HDFS
Without a path argument, it defaults to the current user's home directory: /user/<current user>.
$ hadoop fs -ls
16/05/19 10:40:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - yy yy          0 2016-04-24 08:00 .Trash
drwx------   - yy yy          0 2016-05-06 06:00 .staging
drwxr-xr-x   - yy yy          0 2016-05-06 06:00 oozie-oozi
You can also pass a path to list the HDFS files under that specific directory.
$ hadoop fs -ls /user/yy
16/05/19 10:44:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - yy yy          0 2016-04-24 08:00 /user/yy/.Trash
drwx------   - yy yy          0 2016-05-06 06:00 /user/yy/.staging
drwxr-xr-x   - yy yy          0 2016-05-06 06:00 /user/yy/oozie-oozi
2. hadoop fs -mkdir creates a directory
$ hadoop fs -mkdir upload
hadoop fs -rm -r deletes a directory or file (the older -rmr form is deprecated in favor of -rm -r)
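A minimal sketch (the directory names are hypothetical): -mkdir -p creates missing parent directories in one step, and -rm -r moves files to the .Trash directory seen above unless -skipTrash is given.
$ hadoop fs -mkdir -p upload/2016/05          # create nested directories in one step
$ hadoop fs -rm -r upload/old_dir             # moved to .Trash by default
$ hadoop fs -rm -r -skipTrash upload/tmp_dir  # delete immediately, bypassing .Trash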
3. hadoop fs -put uploads local files to HDFS
hadoop fs -put pc/* upload
hadoop fs -get downloads HDFS files to the local machine
hadoop fs -get upload/collect_20160518.txt /home/yy
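-copyFromLocal and -copyToLocal are near-synonyms of -put and -get that only accept local paths on the local side; a sketch reusing the file names from the examples above:
$ hadoop fs -copyFromLocal pc/collect_20160518.txt upload    # same effect as -put here
$ hadoop fs -copyToLocal upload/collect_20160518.txt /home/yy  # same effect as -get here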
4. hadoop fs -cat reads an HDFS file
$ hadoop fs -cat upload/collect_20160515.txt | head -10
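Two related ways to inspect the same file: -tail prints its last kilobyte, and piping -cat through wc -l counts its lines.
$ hadoop fs -tail upload/collect_20160515.txt          # last 1KB of the file
$ hadoop fs -cat upload/collect_20160515.txt | wc -l   # count lines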
5. HDFS and Hive tables
View a table's partitions: show partitions <table name>;
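For example, run against the external_weblog_wireless table created in the steps below (the output line is illustrative, assuming the 2016-05-19 partition has already been added):
show partitions external_weblog_wireless;
-- dt=2016-05-19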
External partitioned tables:
1) The HDFS files must be stored by partition; in the example below the partition column is dt, and the files for dt=2016-05-19 live under the 2016-05-19 directory (a shell sketch follows the path below).
/user/yy/upload/wireless/2016-05-19
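A minimal sketch of preparing such a partition directory and loading one day's file into it (the local file name is hypothetical):
$ hadoop fs -mkdir -p /user/yy/upload/wireless/2016-05-19
$ hadoop fs -put collect_20160519.txt /user/yy/upload/wireless/2016-05-19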
2) Create an external table pointing at that storage (one level above the partition directories).
drop table if exists external_weblog_wireless;
create external table external_weblog_wireless (
  thedate string,
  time_stamp string,
  url_title string
)
partitioned by (dt string)
row format delimited fields terminated by ','
stored as textfile
location '/user/yy/upload/wireless/';
3) Add a new partition pointing at its partition directory.
alter table external_weblog_wireless add partition (dt='2016-05-19') location '/user/yy/upload/wireless/2016-05-19';
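Once the partition is added, it can be queried directly; a sketch (the column list and limit are illustrative):
select thedate, time_stamp, url_title from external_weblog_wireless where dt='2016-05-19' limit 10;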
4) This partitioned external storage layout is well suited to incremental data: each new day only needs a new directory plus one add partition statement.
External non-partitioned tables:
Point the table directly at the final storage location; the table sees the data as soon as it is created.
drop table if exists external_weblog_wireless;
create external table external_weblog_wireless (
  thedate string,
  time_stamp string,
  url_title string
)
row format delimited fields terminated by ','
stored as textfile
location '/user/yy/upload/wireless/2016-05-19';
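Because the location already holds data, the table is queryable immediately after creation; a quick check:
select count(*) from external_weblog_wireless;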