[Repost] Scala reduceLeft examples

Original article: http://alvinalexander.com/scala/scala-reduceleft-examples

The reduceLeft method on the Scala collections is fun. Just start with a collection:

scala> val a = Array(20, 12, 6, 15, 2, 9)
a: Array[Int] = Array(20, 12, 6, 15, 2, 9)

Then give reduceLeft a simple function to work with, and let it do its thing:

scala> a.reduceLeft(_ + _)
res0: Int = 64

scala> a.reduceLeft(_ * _)
res1: Int = 388800

scala> a.reduceLeft(_ min _)
res2: Int = 2

scala> a.reduceLeft(_ max _)
res3: Int = 20

Use a function

When your comparison operation gets long, just create a function first, then pass the function into reduceLeft:

scala> val a = Array(20, 12, 6, 15, 2, 9)
a: Array[Int] = Array(20, 12, 6, 15, 2, 9)

scala> val f = (x:Int, y:Int) => x max y
f: (Int, Int) => Int = <function2>

scala> a.reduceLeft(f)
res0: Int = 20

Admittedly that was a simple function, but we'll look at a longer one next.

How reduceLeft works

The reduceLeft method works by taking the function (or operation) you give it and applying it to successive elements in the collection. The result of the first application is used in the second application, and so on. It works from left to right, beginning with the first element in the collection.

We can demonstrate this by creating a bigger function now. We'll do a max comparison like we did earlier, but now we'll add some debugging code to the function so we can see how reduceLeft works. Here's the function:

// returns the max of the two elements
val findMax = (x: Int, y: Int) => {
  val winner = x max y
  println("compared %d to %d, %d was larger".format(x,y,winner))
  winner
}

Next, let's move the numbers in the array around a little bit, so the output will be more interesting:

val a = Array(12, 6, 15, 2, 20, 9)

Now we call reduceLeft on our new array, giving it our new function, and we see how reduceLeft works:

scala> a.reduceLeft(findMax)
compared 12 to 6, 12 was larger
compared 12 to 15, 15 was larger
compared 15 to 2, 15 was larger
compared 15 to 20, 20 was larger
compared 20 to 9, 20 was larger
res0: Int = 20

Boo-yah! Here's how the process worked:

  • reduceLeft started by calling findMax to test the first two elements in the array, and findMax returned 12 (because 12 is larger than 6).
  • reduceLeft took that result (12) and called findMax(12, 15). 12 was the result of the first comparison, and 15 was the next element in the collection. 15 was larger, so it became the new result.
  • reduceLeft kept taking the result from the function and comparing it to the next element in the collection, until it marched through all the elements and ended up with the number 20 (as the sketch after this list shows).
  • reduceLeft doesn't know it's finding the largest element in the collection. It just marches through the collection, using the function you provide to (a) compare one element to the next, (b) get the result, then (c) compare that result to the next element in the collection, again using your function to perform the comparison.
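
Concretely, here's a small sketch (not from the original article) of how that left-to-right evaluation unfolds for Array(12, 6, 15, 2, 20, 9) when the operation is max:

// equivalent nested expression for a.reduceLeft(_ max _) on
// Array(12, 6, 15, 2, 20, 9); each result feeds the next step
val result = ((((12 max 6) max 15) max 2) max 20) max 9
println(result)  // 20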

One subtle but important point we just saw: your function must return the same data type that's stored in the collection. This is necessary so reduceLeft can compare that result to the next element in the collection.
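
As a minimal sketch of that constraint (the variable names are mine, not from the original article):

// the elements are Ints, so the function must be (Int, Int) => Int;
// each Int result is fed back in and combined with the next element
val nums = Array(20, 12, 6, 15, 2, 9)
val total: Int = nums.reduceLeft((runningTotal, n) => runningTotal + n)
println(total)  // 64

A function that returned some other type here, say a String, wouldn't fit, because its result couldn't be compared with the next Int in the array.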

Working with other collection types

The collection can be any sequence, including List, Array, Vector, Seq, and more. The elements can be any type you need. For instance, determining the longest or shortest string in a collection of strings is also pretty easy:

scala> val peeps = Vector("al", "hannah", "emily", "christina", "aleka")
peeps: scala.collection.immutable.Vector[java.lang.String] = Vector(al, hannah, emily, christina, aleka)

// longest
scala> peeps.reduceLeft((x,y) => if (x.length > y.length) x else y)
res0: java.lang.String = christina

// shortest
scala> peeps.reduceLeft((x,y) => if (x.length < y.length) x else y)
res1: java.lang.String = al

Just take a similar approach with your own data types, and you can use Scala's reduceLeft collection method to handle all sorts of problems like this.
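
For example, here's a hypothetical sketch with a small case class (the Person type and sample data are mine, not from the original article):

case class Person(name: String, age: Int)

val people = Vector(
  Person("al", 42),
  Person("hannah", 9),
  Person("emily", 7)
)

// keep whichever of the two people compared at each step is older
val oldest = people.reduceLeft((a, b) => if (a.age > b.age) a else b)
println(oldest)  // Person(al,42)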

Summary

If you were wondering how Scala's reduceLeft collection method works, I hope these examples have been helpful.
