I used to write Python programs in a Hadoop Streaming environment. Below is a summary of how to configure Eclipse for Java Hadoop development, plus a run of the WordCount example.
I. Download the Eclipse package and the Hadoop plugin
1. Download the Linux version of the Eclipse package from the official site (for convenience, I have also uploaded a copy to CSDN; download URL:
2. Download the plugin: hadoop-eclipse-plugin-2.6.0.jar
II. Install Eclipse and the Hadoop plugin
1. Extract Eclipse to /usr/local/eclipse
2. Copy the plugin into the Eclipse plugins directory: /usr/local/eclipse/plugins/hadoop-eclipse-plugin-2.6.0.jar
3. Start Eclipse:
/usr/local/eclipse/eclipse -clean
III. Configure the Hadoop environment in Eclipse
1. Open the Window menu and choose Preferences, then set the Hadoop installation directory to /usr/local/hadoop.
2. Switch to the Map/Reduce perspective: in the Window menu, choose Open Perspective -> Other -> Map/Reduce.
3. Create a connection to the Hadoop cluster: click the Map/Reduce Locations panel in the lower-right corner of Eclipse, right-click inside the panel, and choose New Hadoop Location.
4. Check the result. One benefit of this setup is that the file system is visualized; otherwise you can only inspect it by typing commands. Personally I still think the command line is better, so use the two together. The visualized file system looks like this:
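The same views that the DFS Locations panel shows can be obtained on the command line. A minimal sketch, assuming the Hadoop binaries are on your PATH and the cluster configured above is running (the file name in the last command is a placeholder):

```shell
# List the HDFS home directory of the hadoop user
hdfs dfs -ls /user/hadoop

# Recursively list everything under the HDFS root
hdfs dfs -ls -R /

# Print the contents of a file stored in HDFS (hypothetical file name)
hdfs dfs -cat /user/hadoop/input/file.txt
```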
IV. Running the WordCount example
1. Create the project: click the File menu, choose New -> Project, select Map/Reduce Project, click Next, set the Project name to WordCount, and click Finish.
2. Create the class: right-click the newly created WordCount project and choose New -> Class. Two fields need to be filled in: Package as org.apache.hadoop.examples, and Name as WordCount.
3. Fill in the code:
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  // Mapper: split each input line into tokens and emit (word, 1)
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as the combiner): sum the counts for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
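To see what the mapper and reducer above accomplish without a cluster, the same counting logic can be sketched with plain JDK collections. This is only an in-memory illustration (the class name WordCountSketch is mine, not part of the example); the real job distributes the map and reduce phases across the cluster:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// In-memory sketch of TokenizerMapper + IntSumReducer using a HashMap.
public class WordCountSketch {

    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        // Same tokenization as the mapper: split on whitespace
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            // The mapper emits (word, 1); merge() plays the reducer's role,
            // summing all the 1s that share the same key
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello hadoop hello eclipse"));
    }
}
```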
4. Before running, enter the following commands in a terminal. The purpose is to use the configuration files to change the default file system from the local one to HDFS, and to suppress a warning:
cp /usr/local/hadoop/etc/hadoop/core-site.xml ~/workspace/WordCount/src
cp /usr/local/hadoop/etc/hadoop/hdfs-site.xml ~/workspace/WordCount/src
cp /usr/local/hadoop/etc/hadoop/log4j.properties ~/workspace/WordCount/src
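The reason copying core-site.xml changes the default file system is its fs.defaultFS property: with it on the classpath, relative paths like input resolve to HDFS instead of the local file system. A typical pseudo-distributed value looks like this (the host and port are assumptions; check your own file):

```xml
<!-- core-site.xml: makes Path("input") resolve to HDFS, not file:// -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```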
5. Set the run arguments: input and output. Note that these are actually paths in the Hadoop file system, specifically /user/hadoop/input and /user/hadoop/output.
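One way to prepare those two paths before the first run (a sketch, assuming a running cluster; using Hadoop's own config files as sample input is an assumption on my part, any text files will do):

```shell
# Create the HDFS input directory and upload some text files to count
hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/*.xml /user/hadoop/input

# The output directory must not already exist, or the job aborts at startup
hdfs dfs -rm -r -f /user/hadoop/output
```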
6. View the results in the output directory of the file system.
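Besides browsing output in the DFS Locations panel, the result can be printed from the command line (assuming the job wrote to /user/hadoop/output as configured above):

```shell
# Reducer output lands in part-r-00000, part-r-00001, ... inside output/
hdfs dfs -cat /user/hadoop/output/part-r-*
```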
Reference: http://www.powerxing.com/hadoop-build-project-using-eclipse/ (the images in this post come from that blog; taking my own screenshots was too much trouble).