Hadoop Reading Notes (8): Packaging a MapReduce Job as a JAR (demo)

Hadoop Reading Notes (1): Introduction to Hadoop: http://blog.csdn.net/caicongyang/article/details/39898629

Hadoop Reading Notes (2): HDFS shell operations: http://blog.csdn.net/caicongyang/article/details/41253927

Hadoop Reading Notes (3): Operating HDFS with the Java API: http://blog.csdn.net/caicongyang/article/details/41290955

Hadoop Reading Notes (4): HDFS architecture: http://blog.csdn.net/caicongyang/article/details/41322649

Hadoop Reading Notes (5): MapReduce word count demo: http://blog.csdn.net/caicongyang/article/details/41453579

Hadoop Reading Notes (6): MapReduce custom data type demo: http://blog.csdn.net/caicongyang/article/details/41490379

Hadoop Reading Notes (7): MapReduce 0.x API demo: http://blog.csdn.net/caicongyang/article/details/41493325

1. Refactoring the code

Compared with the previous post, the driver now extends Configured and implements Tool, and is launched through ToolRunner. This way the input and output paths (and Hadoop's generic options) are supplied on the command line instead of being hard-coded, which is what makes the job runnable from a packaged jar.

KpiApp.java

package cmd;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
/**
 * <p>
 * Title: KpiApp.java
 * Package: cmd
 * </p>
 * <p>
 * Description: traffic-statistics job, packaged as a jar and run from the
 * command line; the driver extends Configured and implements Tool.
 * </p>
 * @author Tom.Cai
 * @created 2014-11-25 10:23:33 PM
 * @version V1.0
 */
public class KpiApp extends Configured implements Tool{

	public static void main(String[] args) throws Exception {
		// Propagate the job's exit code to the shell.
		System.exit(ToolRunner.run(new KpiApp(), args));
	}

	@Override
	public int run(String[] args) throws Exception {
		String INPUT_PATH = args[0];
		String OUT_PATH = args[1];
		// Use the Configuration injected by ToolRunner (getConf()) so that
		// generic options such as -D key=value actually take effect.
		FileSystem fileSystem = FileSystem.get(new URI(INPUT_PATH), getConf());
		Path outPath = new Path(OUT_PATH);
		// Delete the output directory if it already exists; otherwise the job fails.
		if (fileSystem.exists(outPath)) {
			fileSystem.delete(outPath, true);
		}

		Job job = new Job(getConf(), KpiApp.class.getSimpleName());
		// Tell Hadoop which jar to ship to the cluster; required when the
		// job is submitted from a packaged jar.
		job.setJarByClass(KpiApp.class);
		FileInputFormat.setInputPaths(job, INPUT_PATH);
		job.setInputFormatClass(TextInputFormat.class);

		job.setMapperClass(KpiMapper.class);
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(KpiWite.class);

		job.setPartitionerClass(HashPartitioner.class);
		job.setNumReduceTasks(1);

		job.setReducerClass(KpiReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(KpiWite.class);

		FileOutputFormat.setOutputPath(job, new Path(OUT_PATH));
		job.setOutputFormatClass(TextOutputFormat.class);

		return job.waitForCompletion(true) ? 0 : 1;
	}

	static class KpiMapper extends Mapper<LongWritable, Text, Text, KpiWite> {
		@Override
		protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
			// Each input line is a tab-separated wlan log record: field 1 is
			// the phone number, fields 6-9 are the four traffic counters.
			String[] splited = value.toString().split("\t");
			String num = splited[1];
			KpiWite kpi = new KpiWite(splited[6], splited[7], splited[8], splited[9]);
			context.write(new Text(num), kpi);
		}
	}

	static class KpiReducer extends Reducer<Text, KpiWite, Text, KpiWite> {
		@Override
		protected void reduce(Text key, Iterable<KpiWite> value, Context context) throws IOException, InterruptedException {
			// Sum the four traffic counters over all records for one phone number.
			long upPackNum = 0L;
			long downPackNum = 0L;
			long upPayLoad = 0L;
			long downPayLoad = 0L;
			for (KpiWite kpi : value) {
				upPackNum += kpi.upPackNum;
				downPackNum += kpi.downPackNum;
				upPayLoad += kpi.upPayLoad;
				downPayLoad += kpi.downPayLoad;
			}
			context.write(key, new KpiWite(String.valueOf(upPackNum), String.valueOf(downPackNum), String.valueOf(upPayLoad), String.valueOf(downPayLoad)));
		}
	}
}

/**
 * Custom Writable value type holding the four traffic counters; Hadoop calls
 * write()/readFields() to serialize it between the map and reduce phases.
 */
class KpiWite implements Writable {
	long upPackNum;
	long downPackNum;
	long upPayLoad;
	long downPayLoad;

	public KpiWite() {
	}

	public KpiWite(String upPackNum, String downPackNum, String upPayLoad, String downPayLoad) {
		this.upPackNum = Long.parseLong(upPackNum);
		this.downPackNum = Long.parseLong(downPackNum);
		this.upPayLoad = Long.parseLong(upPayLoad);
		this.downPayLoad = Long.parseLong(downPayLoad);
	}

	@Override
	public void readFields(DataInput in) throws IOException {
		// Read the fields back in exactly the order write() emits them.
		this.upPackNum = in.readLong();
		this.downPackNum = in.readLong();
		this.upPayLoad = in.readLong();
		this.downPayLoad = in.readLong();
	}

	@Override
	public void write(DataOutput out) throws IOException {
		out.writeLong(upPackNum);
		out.writeLong(downPackNum);
		out.writeLong(upPayLoad);
		out.writeLong(downPayLoad);
	}

}
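Since KpiWite is a custom Writable, it is worth checking that write() and readFields() are symmetric before submitting the job to a cluster. Below is a minimal stand-alone sketch (not part of the original post; the class name KpiWiteTest and the counter values are made up for illustration) that round-trips one object through Java's Data streams, the same mechanism Hadoop uses during the shuffle:

KpiWiteTest.java

package cmd;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

public class KpiWiteTest {
	public static void main(String[] args) throws Exception {
		KpiWite original = new KpiWite("3", "12", "1938", "2910");

		// Serialize exactly as Hadoop would when shipping the value.
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		original.write(new DataOutputStream(bytes));

		// Deserialize into a fresh instance and compare the fields.
		KpiWite copy = new KpiWite();
		copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

		System.out.println(copy.upPackNum == original.upPackNum
				&& copy.downPackNum == original.downPackNum
				&& copy.upPayLoad == original.upPayLoad
				&& copy.downPayLoad == original.downPayLoad); // expected: true
	}
}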

2. Packaging the jar

Use Eclipse's Export wizard to package the classes above into kpi.jar.

Upload the jar to the Linux machine and execute it with hadoop jar xxx.jar [parameter] [parameter].

For example: hadoop jar kpi.jar hdfs://192.168.80.100:9000/wlan hdfs://192.168.80.100:9000/wlan_out
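Because the job is launched through ToolRunner and run() builds the Job from getConf(), Hadoop's generic options can be placed before the job's own arguments. As an illustration (the property value here is hypothetical, and the main class cmd.KpiApp is named explicitly in case the jar's manifest does not declare one), map-output compression could be switched on for a single run without touching the code:

hadoop jar kpi.jar cmd.KpiApp -D mapred.compress.map.output=true hdfs://192.168.80.100:9000/wlan hdfs://192.168.80.100:9000/wlan_out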

With this refactoring, the code from the previous post can now be run from the command line!
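To check the result, read the output directory with the HDFS shell commands covered earlier in this series; with the new mapreduce API the single reducer writes its output to a file named part-r-00000:

hadoop fs -cat hdfs://192.168.80.100:9000/wlan_out/part-r-00000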

Everyone is welcome to discuss and learn together!

Take whatever you find useful.

Record and share, and we grow together! Feel free to browse my other posts; my blog is at: http://blog.csdn.net/caicongyang

