1. After installing Spark, go into the Spark bin directory and start the shell: bin/spark-shell
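For example, assuming the install location that matches the path used below:

cd /Users/admin/spark/spark-1.6.1-bin-hadoop2.6
./bin/spark-shell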
scala> val textFile = sc.textFile("/Users/admin/spark/spark-1.6.1-bin-hadoop2.6/README.md")
scala> textFile.flatMap(_.split(" ")).filter(!_.isEmpty).map((_,1)).reduceByKey(_+_).collect().foreach(println)
Result:
(-Psparkr,1)
(Build,1)
(built,1)
(-Phive-thriftserver,1)
(2.4.0,1)
(-Phadoop-2.4,1)
(Spark,1)
(-Pyarn,1)
(1.5.1,1)
(flags:,1)
(for,1)
(-Phive,1)
(-DzincPort=3034,1)
(Hadoop,1)
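As an optional variation in the same session (it assumes the textFile RDD defined above), the (word, count) pairs can be sorted by descending count before printing, so the most frequent words appear first; sortBy and take are standard RDD operations:

scala> textFile.flatMap(_.split(" ")).filter(!_.isEmpty).map((_, 1)).reduceByKey(_ + _).sortBy(_._2, ascending = false).take(10).foreach(println)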