MapReduce Programming Series — 3: Data Deduplication

1. Project name:

2. Program code:

The idea behind the program: the map stage emits each input line as the key (with a blank placeholder value), the shuffle stage then groups identical keys together, and the reduce stage writes each distinct key out exactly once — which is all deduplication requires.

package com.dedup;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup {
    //The mapper copies the input value (one line of text) into the output key and emits it directly; note the parameter types and counts
    public static class Map extends Mapper<Object, Text, Text, Text>{
        private Text line = new Text();
        //note the parameter types and counts
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            System.out.println("mapper.......");
            System.out.println("key:"+key+"  value:"+value);
            //copy the line rather than aliasing the Text object that Hadoop reuses between calls
            line.set(value);
            context.write(line, new Text(" "));
            System.out.println("line:"+ line +" value"+ value +"  context:" + context);
        }
    }
    //The reducer copies the input key into the output key and emits it directly, so each distinct line is written exactly once; note the parameter types and counts
    public static class Reduce extends Reducer<Text, Text, Text, Text>{
        //note the parameter types and counts
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            System.out.println("reducer.......");
            System.out.println("key:"+key+"  values:"+values);
            context.write(key, new Text(" "));
            System.out.println("key:"+key+"  values"+values+"  context:"+context);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: dedup <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf,"Data Deduplication");
        job.setJarByClass(Dedup.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
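
A side note on the placeholder value: writing new Text(" ") allocates a fresh Text object per record just to carry an empty payload. A common alternative is NullWritable, which serializes to zero bytes and shrinks the shuffled data. The sketch below (the class names are illustrative, not from the original program) shows the same logic in that style; the driver would also need import org.apache.hadoop.io.NullWritable and job.setOutputValueClass(NullWritable.class).

//Sketch: the same dedup logic with NullWritable as the value type (names illustrative)
public static class DedupMapper extends Mapper<Object, Text, Text, NullWritable> {
    private Text line = new Text();
    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        line.set(value);                          //copy the current line into the key
        context.write(line, NullWritable.get());  //zero-byte value, nothing extra shuffled
    }
}

public static class DedupReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
    @Override
    public void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        context.write(key, NullWritable.get());   //emit each distinct line exactly once
    }
}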

3. Test data:

file1:

2006-6-9 a
2006-6-10 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-15 c
2006-6-11 c

file2:

2006-6-9 b
2006-6-10 a
2006-6-11 b
2006-6-12 d
2006-6-13 a
2006-6-14 c
2006-6-15 d
2006-6-11 c
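
Counting by hand before running: the two files hold 16 lines in total; 2006-6-12 d and 2006-6-13 a each appear in both files, and 2006-6-11 c appears three times (twice in file1, once in file2), so 12 distinct lines should survive. A quick plain-Java cross-check (a sketch with hypothetical local file paths, no Hadoop involved) computes the same answer with a sorted set:

//Sketch: verify the expected dedup result locally; the file paths are hypothetical
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.TreeSet;

public class DedupCheck {
    public static void main(String[] args) throws Exception {
        TreeSet<String> unique = new TreeSet<String>();  //sorts lexicographically, like Text keys
        for (String f : new String[]{"file1.txt", "file2.txt"}) {
            unique.addAll(Files.readAllLines(Paths.get(f), StandardCharsets.UTF_8));
        }
        for (String line : unique) {
            System.out.println(line);  //12 lines, in the same order as the job output below
        }
    }
}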

4. Run log:

14/09/21 16:51:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/21 16:51:16 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/09/21 16:51:16 INFO input.FileInputFormat: Total input paths to process : 2
14/09/21 16:51:16 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/21 16:51:16 INFO mapred.JobClient: Running job: job_local_0001
14/09/21 16:51:16 INFO util.ProcessTree: setsid exited with exit code 0
14/09/21 16:51:16 INFO mapred.Task:  Using ResourceCalculatorPlugin : [email protected]
14/09/21 16:51:16 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:16 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:16 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0  value:2006-6-9 a
line:2006-6-9 a value2006-6-9 a  context:[email protected]
mapper.......
key:11  value:2006-6-10 b
line:2006-6-10 b value2006-6-10 b  context:[email protected]
mapper.......
key:23  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:[email protected]
mapper.......
key:35  value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d  context:[email protected]
mapper.......
key:47  value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a  context:[email protected]
mapper.......
key:59  value:2006-6-14 b
line:2006-6-14 b value2006-6-14 b  context:[email protected]
mapper.......
key:71  value:2006-6-15 c
line:2006-6-15 c value2006-6-15 c  context:[email protected]
mapper.......
key:83  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:[email protected]
14/09/21 16:51:16 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:16 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:16 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
14/09/21 16:51:17 INFO mapred.JobClient:  map 0% reduce 0%
14/09/21 16:51:19 INFO mapred.LocalJobRunner:
14/09/21 16:51:19 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
14/09/21 16:51:19 INFO mapred.Task:  Using ResourceCalculatorPlugin : [email protected]
14/09/21 16:51:19 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:19 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:19 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0  value:2006-6-9 b
line:2006-6-9 b value2006-6-9 b  context:[email protected]
mapper.......
key:11  value:2006-6-10 a
line:2006-6-10 a value2006-6-10 a  context:[email protected]
mapper.......
key:23  value:2006-6-11 b
line:2006-6-11 b value2006-6-11 b  context:[email protected]
mapper.......
key:35  value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d  context:[email protected]
mapper.......
key:47  value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a  context:[email protected]
mapper.......
key:59  value:2006-6-14 c
line:2006-6-14 c value2006-6-14 c  context:[email protected]
mapper.......
key:71  value:2006-6-15 d
line:2006-6-15 d value2006-6-15 d  context:[email protected]
mapper.......
key:83  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:[email protected]
14/09/21 16:51:19 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:19 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:19 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
14/09/21 16:51:20 INFO mapred.JobClient:  map 100% reduce 0%
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
14/09/21 16:51:22 INFO mapred.Task:  Using ResourceCalculatorPlugin : [email protected]
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Merger: Merging 2 sorted segments
14/09/21 16:51:22 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 258 bytes
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
reducer.......
key:2006-6-10 a  values:[email protected]
key:2006-6-10 a  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-10 b  values:[email protected]
key:2006-6-10 b  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-11 b  values:[email protected]
key:2006-6-11 b  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-11 c  values:[email protected]
key:2006-6-11 c  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-12 d  values:[email protected]
key:2006-6-12 d  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-13 a  values:[email protected]
key:2006-6-13 a  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-14 b  values:[email protected]
key:2006-6-14 b  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-14 c  values:[email protected]
key:2006-6-14 c  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-15 c  values:[email protected]
key:2006-6-15 c  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-15 d  values:[email protected]
key:2006-6-15 d  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-9 a  values:[email protected]
key:2006-6-9 a  [email protected]8fd78  context:[email protected]
reducer.......
key:2006-6-9 b  values:[email protected]
key:2006-6-9 b  [email protected]8fd78  context:[email protected]
14/09/21 16:51:22 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/09/21 16:51:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/user/hadoop/dedup_output
14/09/21 16:51:25 INFO mapred.LocalJobRunner: reduce > reduce
14/09/21 16:51:25 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/09/21 16:51:26 INFO mapred.JobClient:  map 100% reduce 100%
14/09/21 16:51:26 INFO mapred.JobClient: Job complete: job_local_0001
14/09/21 16:51:26 INFO mapred.JobClient: Counters: 22
14/09/21 16:51:26 INFO mapred.JobClient:   Map-Reduce Framework
14/09/21 16:51:26 INFO mapred.JobClient:     Spilled Records=32
14/09/21 16:51:26 INFO mapred.JobClient:     Map output materialized bytes=266
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce input records=16
14/09/21 16:51:26 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient:     Map input records=16
14/09/21 16:51:26 INFO mapred.JobClient:     SPLIT_RAW_BYTES=232
14/09/21 16:51:26 INFO mapred.JobClient:     Map output bytes=222
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/09/21 16:51:26 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce input groups=12
14/09/21 16:51:26 INFO mapred.JobClient:     Combine output records=0
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce output records=12
14/09/21 16:51:26 INFO mapred.JobClient:     Map output records=16
14/09/21 16:51:26 INFO mapred.JobClient:     Combine input records=0
14/09/21 16:51:26 INFO mapred.JobClient:     CPU time spent (ms)=0
14/09/21 16:51:26 INFO mapred.JobClient:     Total committed heap usage (bytes)=813170688
14/09/21 16:51:26 INFO mapred.JobClient:   File Input Format Counters
14/09/21 16:51:26 INFO mapred.JobClient:     Bytes Read=190
14/09/21 16:51:26 INFO mapred.JobClient:   FileSystemCounters
14/09/21 16:51:26 INFO mapred.JobClient:     HDFS_BYTES_READ=475
14/09/21 16:51:26 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=122061
14/09/21 16:51:26 INFO mapred.JobClient:     FILE_BYTES_READ=1665
14/09/21 16:51:26 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=166
14/09/21 16:51:26 INFO mapred.JobClient:   File Output Format Counters
14/09/21 16:51:26 INFO mapred.JobClient:     Bytes Written=166
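
The counters confirm the hand count above: Map input records=16 collapse into Reduce input groups=12 and Reduce output records=12, so four duplicate lines were dropped. The FileOutputCommitter line also shows where the results land: hdfs://localhost:9000/user/hadoop/dedup_output. To read them back programmatically, a small sketch against the FileSystem API (the URI and directory come from that log line; part-r-00000 is the default output file name for a single-reducer job):

//Sketch: print the job's result file from HDFS
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PrintDedupOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        FSDataInputStream in = fs.open(new Path("/user/hadoop/dedup_output/part-r-00000"));
        try {
            IOUtils.copyBytes(in, System.out, 4096, false);  //stream the file to stdout
        } finally {
            in.close();
        }
    }
}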

5. Output:

2006-6-10 a    
2006-6-10 b    
2006-6-11 b    
2006-6-11 c    
2006-6-12 d    
2006-6-13 a    
2006-6-14 b    
2006-6-14 c    
2006-6-15 c    
2006-6-15 d    
2006-6-9 a    
2006-6-9 b
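
One quirk worth noticing in the result: keys are compared as raw text, so 2006-6-9 sorts after 2006-6-15 (the character '9' compares greater than '1'). If chronological order were wanted, one simple option is to zero-pad the month and day in the mapper before writing the key, so that lexicographic order coincides with date order; the sketch below uses a hypothetical helper, and proper key-based sorting is the subject of the next installment in this series.

//Sketch: normalize "2006-6-9 a" to "2006-06-09 a"; a hypothetical helper the
//mapper would call on value.toString() before context.write
private static String normalizeDate(String lineText) {
    String[] parts = lineText.split(" ", 2);  //"2006-6-9" and "a"
    String[] ymd = parts[0].split("-");       //year, month, day
    return String.format("%s-%02d-%02d %s",
            ymd[0], Integer.parseInt(ymd[1]), Integer.parseInt(ymd[2]), parts[1]);
}

Note that this rewrites the stored lines themselves; keeping the original text while sorting by date would need a custom WritableComparable key instead.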
