CompressedStorage

Compressed Data Storage

Keeping data compressed in Hive tables has, in some cases, been known to give better results than uncompressed storage, both in terms of disk usage and query performance.

You can import text files compressed with Gzip or Bzip2 directly into a table stored as TextFile. The compression will be detected automatically and the file will be decompressed on-the-fly during query execution. For example:

CREATE TABLE raw (line STRING)

   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';

LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO TABLE raw;

The table 'raw' is stored as a TextFile, which is the default storage. However, in this case Hadoop will not be able to split your file into chunks/blocks and run multiple maps in parallel. This can cause underutilization of your cluster's 'mapping' power.
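
As a hypothetical illustration (this query is not part of the original example), even a simple full scan of the table will run with a single map task, because a gzip stream must be read sequentially from beginning to end:

-- Illustrative only: the whole .gz file arrives as a single split, so one mapper does all the work.
SELECT COUNT(*) FROM raw;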

The recommended practice is to insert data into another table, which is stored as a SequenceFile. A SequenceFile can be split by Hadoop and distributed across map jobs, whereas a GZIP file cannot be. For example:

CREATE TABLE raw (line STRING)

   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';

CREATE TABLE raw_sequence (line STRING)

   STORED AS SEQUENCEFILE;

LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO TABLE raw;

SET hive.exec.compress.output=true;

SET io.seqfile.compression.type=BLOCK; -- NONE/RECORD/BLOCK (see below)

INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;

The value for io.seqfile.compression.type determines how the compression is performed: NONE applies no compression, RECORD compresses each value individually, and BLOCK buffers records up to 1 MB (by default) and compresses them together.
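
As a hedged extension of the example above, the codec used for the compressed output can also be chosen explicitly; the mapred.output.compression.codec property and the GzipCodec class below are standard Hadoop settings rather than part of this page's example:

SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK;
-- Optional: pick the codec for the SequenceFile blocks (Gzip here; another installed codec can be substituted).
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;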

LZO Compression

See LZO Compression for information about using LZO with Hive.
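
As a rough sketch only (the input format class below comes from the separately installed hadoop-lzo libraries, which this page does not cover; see the LZO Compression documentation for the authoritative setup), an LZO-backed text table is typically declared along these lines:

-- Sketch: assumes the hadoop-lzo package providing DeprecatedLzoTextInputFormat is installed on the cluster.
CREATE TABLE raw_lzo (line STRING)
   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
   STORED AS INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
   OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';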
