The ImportTsv tool does its work through MapReduce, so YARN must be started first. The tool also needs the HBase jars, so take care to configure the classpath. By default, ImportTsv inserts the data through the HBase API.
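Before launching the job, it helps to make sure HDFS and YARN are running and to put the HBase jars on the MapReduce classpath. A minimal sketch, assuming the start scripts from the Hadoop distribution configured below are on PATH (hbase mapredcp prints the classpath entries that MapReduce jobs need):

# start HDFS and YARN, since ImportTsv runs as a MapReduce job
start-dfs.sh
start-yarn.sh
# put the HBase dependencies on the MapReduce classpath
export HADOOP_CLASSPATH=$(hbase mapredcp)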
$ cat /home/hadoop-user/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
JAVA_HOME=/usr/java/jdk1.8.0_171-amd64
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
HADOOP_HOME=/home/hadoop-user/hadoop-2.8.0
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
HBASE_HOME=/home/hadoop-user/hbase-2.0.0
PATH=$PATH:$HBASE_HOME/bin
CLASSPATH=$CLASSPATH:$HBASE_HOME/lib
ZOOKEEPER_HOME=/home/hadoop-user/zookeeper-3.4.12
PATH=$PATH:$ZOOKEEPER_HOME/bin
PHOENIX_HOME=/home/hadoop-user/apache-phoenix-5.0.0-alpha-HBase-2.0-bin
PATH=$PATH:$PHOENIX_HOME/bin
export PATH
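After editing the profile, it can be applied to the current shell and sanity-checked; this quick verification is not part of the original transcript:

source ~/.bash_profile
hadoop version
hbase version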
Create the table
hbase(main):033:0> create 'test','cf'
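To confirm that the table and its column family were created, an optional check in the HBase shell:

describe 'test'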
Create the file to import
$ cat /home/hadoop-user/work/sample1.csv
row10,"mjj10"
row11,"mjj11"
row12,"mjj12"
row14,"mjj13"
Put the file into HDFS
$ hdfs dfs -put /home/hadoop-user/work/sample1.csv /sample1.csv
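Optionally, verify the upload before launching the job:

hdfs dfs -ls /sample1.csv
hdfs dfs -cat /sample1.csv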
Run the ImportTsv import command
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator="," -Dimporttsv.columns=HBASE_ROW_KEY,cf:a test /sample1.csv
Note: HBASE_ROW_KEY marks which field in the file holds the row key; the entries after it define the columns. Here the imported column belongs to column family cf with qualifier a, and the file to import is /sample1.csv in HDFS.
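Once the MapReduce job completes, the imported rows can be checked from the HBase shell:

scan 'test'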
The explanation from the tool's built-in help:
Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>
Imports the given input directory of TSV data into the specified table.
The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data. Another special column HBASE_TS_KEY designates that this column should be
used as timestamp for each record. Unlike HBASE_ROW_KEY, HBASE_TS_KEY is optional.
You must specify at most one column as timestamp key for each imported record.
Record with invalid timestamps (blank, non-numeric) will be treated as bad record.
Note: if you use this option, then 'importtsv.timestamp' option will be ignored.
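As noted at the top, ImportTsv goes through the HBase API by default. The tool also supports a bulk-load mode: with -Dimporttsv.bulk.output it writes HFiles instead of issuing puts, and a second step loads those files into the table. A sketch along those lines for HBase 2.0 (the /tmp/hfiles output path is just an example):

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator="," -Dimporttsv.columns=HBASE_ROW_KEY,cf:a -Dimporttsv.bulk.output=/tmp/hfiles test /sample1.csv
hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /tmp/hfiles test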
Note: data imported with ImportTsv is not visible to Phoenix. In fact, Phoenix cannot see tables created directly in HBase at all; tables created through Phoenix are visible from HBase, but their contents are stored in Phoenix's encoded form.
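One known way to make such a table queryable from Phoenix is to map the existing HBase table with a Phoenix view in sqlline.py; a sketch, where the row-key column name pk is an assumption for this example:

CREATE VIEW "test" (pk VARCHAR PRIMARY KEY, "cf"."a" VARCHAR);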
Original article: http://blog.51cto.com/291268154/2133823