1. Setting Up a Spark Development Environment for Java (from http://www.cnblogs.com/eczhou/p/5216918.html)
1.1 JDK Installation
Install the Oracle JDK; I installed JDK 1.7. After installation, create a new system environment variable JAVA_HOME with the value "C:\Program Files\Java\jdk1.7.0_79", adjusting for your own installation path.
Also add C:\Program Files\Java\jdk1.7.0_79\bin and C:\Program Files\Java\jre7\bin to the system Path variable.
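To confirm the variables took effect, a quick sanity check can be run from any Java program (a minimal sketch; JavaEnvCheck is just an illustrative name, and the javac.exe location assumes a standard Windows JDK layout):

import java.io.File;

public class JavaEnvCheck {
    public static void main(String[] args) {
        // JAVA_HOME should point at the JDK installation directory
        String javaHome = System.getenv("JAVA_HOME");
        System.out.println("JAVA_HOME = " + javaHome);

        // A full JDK (not just a JRE) ships the compiler under bin
        if (javaHome != null && new File(javaHome, "bin\\javac.exe").exists()) {
            System.out.println("JDK looks correctly configured.");
        } else {
            System.out.println("JAVA_HOME is unset or does not point at a JDK.");
        }
    }
}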
1.2 Spark Environment Variable Configuration
Go to http://spark.apache.org/downloads.html and download the build matching your Hadoop version. I downloaded spark-1.6.0-bin-hadoop2.6.tgz, i.e. Spark 1.6 built against Hadoop 2.6.
Extract the downloaded archive; suppose the extraction directory is D:\spark-1.6.0-bin-hadoop2.6. Add D:\spark-1.6.0-bin-hadoop2.6\bin to the system Path variable, and create a new SPARK_HOME variable with the value D:\spark-1.6.0-bin-hadoop2.6.
1.3 Hadoop Package Installation
Spark builds on top of Hadoop and calls into Hadoop libraries at runtime. If no Hadoop runtime environment is configured, Spark prints error messages; they do not actually stop the program from running, but it is still best to configure the Hadoop libraries as well.
1.3.1 Download a prebuilt Hadoop 2.6 package from https://www.barik.net/archive/2015/01/19/172716/. I downloaded hadoop-2.6.0.tar.gz.
1.3.2 Extract the downloaded archive and add D:\hadoop-2.6.0\bin to the system Path variable; also create a new HADOOP_HOME variable with the value D:\hadoop-2.6.0.
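The errors mentioned above usually come from Spark looking for winutils.exe under %HADOOP_HOME%\bin on Windows. A small check along these lines can verify the setup (a sketch; HadoopEnvCheck is an illustrative name, and the commented-out hadoop.home.dir line is a well-known in-code alternative to the system variable):

import java.io.File;

public class HadoopEnvCheck {
    public static void main(String[] args) {
        // Alternative to a system-wide variable: set hadoop.home.dir in code
        // before creating any Spark context (path is the one used above).
        // System.setProperty("hadoop.home.dir", "D:\\hadoop-2.6.0");

        String hadoopHome = System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + hadoopHome);

        // Spark on Windows expects winutils.exe under HADOOP_HOME\bin
        if (hadoopHome != null && new File(hadoopHome, "bin\\winutils.exe").exists()) {
            System.out.println("winutils.exe found.");
        } else {
            System.out.println("winutils.exe not found; expect Hadoop-related errors on startup.");
        }
    }
}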
1.4 Eclipse Setup
Simply create a new Java project and add spark-assembly-1.6.0-hadoop2.6.0.jar from D:\spark-1.6.0-bin-hadoop2.6\lib to the project's build path.
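Before writing any real code, a minimal smoke test can confirm the jar is wired up correctly: create a local context and print the Spark version (a sketch; "local" means Spark runs in-process, so no cluster is needed):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkSmokeTest {
    public static void main(String[] args) {
        // Run Spark in-process on the local machine; no cluster required
        SparkConf conf = new SparkConf()
                .setAppName("SparkSmokeTest")
                .setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // With the assembly jar on the classpath this should print 1.6.0
        System.out.println("Spark version: " + sc.version());

        sc.close();
    }
}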
2. Writing a Spark WordCount Program in Java
package cn.spark.study;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        // Create a SparkConf object and apply the required configuration
        SparkConf conf = new SparkConf()
                .setAppName("WordCount").setMaster("local");

        // Create the context object from the conf
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Create the initial RDD from a local text file
        JavaRDD<String> lines = sc.textFile("D://spark.txt");

        // ---- Apply transformation operators to the RDD ----

        // Split each line into words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Iterable<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" "));
            }
        });

        // Map each word to a (word, 1) pair
        JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word, 1);
            }
        });

        // Sum the counts for each word
        JavaPairRDD<String, Integer> wordCounts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        });

        // ---- Trigger the job with an action operator ----
        wordCounts.foreach(new VoidFunction<Tuple2<String, Integer>>() {
            private static final long serialVersionUID = 1L;

            @Override
            public void call(Tuple2<String, Integer> wordCount) throws Exception {
                System.out.println(wordCount._1 + " appeared " + wordCount._2 + " times");
            }
        });

        sc.close();
    }
}
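As a quick usage example, suppose D://spark.txt contains these two lines (made-up sample data):

hello spark
hello hadoop

Running the program then prints, in some order:

hello appeared 2 times
spark appeared 1 times
hadoop appeared 1 times

Note that foreach runs on the executors; in local mode the output appears in the driver's console, but on a real cluster it would end up in the executors' logs.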