Higher-order Array methods such as filter, map, and reduce are great for functional programming, but they can come with a performance cost.
```js
var ary = [1, 2, 3, 4, 5, 6];
var res = ary
  .filter(function (x, i, arr) {
    console.log("filter: " + x);
    console.log("create new array: " + (arr === ary));
    return x % 2 == 0;
  })
  .map(function (x, i, arr) {
    console.log("map: " + x);
    return x + "!";
  })
  .reduce(function (r, x, i, arr) {
    console.log("reduce: " + x);
    return r + x;
  });
console.log(res);
/*
"filter: 1"
"create new array: true"
"filter: 2"
"create new array: true"
"filter: 3"
"create new array: true"
"filter: 4"
"create new array: true"
"filter: 5"
"create new array: true"
"filter: 6"
"create new array: true"
"map: 2"
"map: 4"
"map: 6"
"reduce: 4!"
"reduce: 6!"
"2!4!6!"
*/
```
In this example, filter and map each return a new array. That's good because it pushes forward the idea of immutability. However, it's bad because it means I'm allocating a new array, iterating over it exactly once, and then garbage-collecting it later. That can get really expensive if you're dealing with very large source arrays or running this kind of chain often.
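For comparison, here is a rough sketch of the same computation as a single hand-written pass (this loop is my own illustration, not part of the original example); nothing is allocated beyond the result string:

```js
// Single-pass equivalent of filter + map + reduce: no intermediate arrays.
var ary = [1, 2, 3, 4, 5, 6];
var res = "";
for (var i = 0; i < ary.length; i++) {
  var x = ary[i];
  if (x % 2 !== 0) continue; // filter step
  res = res + (x + "!");     // map and reduce steps fused together
}
console.log(res); // "2!4!6!"
```

That is fast, but you lose the declarative chain. RxJS can give you similar single-pass behavior while keeping the chained style.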
Using RxJS:
```js
var source = Rx.Observable.fromArray([1, 2, 3, 4, 5, 6]);
source
  .filter(function (x) {
    console.log("filter: " + x);
    return x % 2 == 0;
  })
  .map(function (x) {
    console.log("map: " + x);
    return x + "!";
  })
  .reduce(function (r, x) {
    console.log("reduce: " + x);
    return r + x;
  })
  .subscribe(function (res) {
    console.log(res);
  });
/*
"filter: 1"
"filter: 2"
"map: 2"
"filter: 3"
"filter: 4"
"map: 4"
"reduce: 4!"
"filter: 5"
"filter: 6"
"map: 6"
"reduce: 6!"
"2!4!6!"
*/
```
The biggest difference is that each value now flows through the whole pipeline (filter, then map, then reduce) before the next value is processed. (Note that no "reduce: 2!" line appears: with no seed value, reduce simply takes the first value, "2!", as the initial accumulator.)
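The same one-value-at-a-time behavior can also be sketched without RxJS if your environment supports ES6 generators; lazyFilter and lazyMap below are hypothetical helpers defined here for illustration, not a built-in API:

```js
// Lazy pipeline sketch: each value travels through filter and map
// before the next value is requested, and no intermediate arrays exist.
function* lazyFilter(iterable, pred) {
  for (const x of iterable) {
    if (pred(x)) yield x;
  }
}

function* lazyMap(iterable, fn) {
  for (const x of iterable) {
    yield fn(x);
  }
}

let acc = "";
const pipeline = lazyMap(
  lazyFilter([1, 2, 3, 4, 5, 6], x => x % 2 === 0),
  x => x + "!"
);
for (const x of pipeline) {
  acc += x; // reduce step
}
console.log(acc); // "2!4!6!"
```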
Differences:
The first example creates two intermediate arrays (one from filter, one from map). Each of those arrays has to be allocated, iterated over once, and then garbage-collected.
The RxJS example pushes every item all the way through to the end without creating any intermediate arrays.
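If you want to gauge the cost yourself, a crude (and decidedly unscientific) harness is to time both shapes over a large array; the actual numbers will vary by engine, input size, and warm-up:

```js
// Crude comparison harness; console.time output is only a ballpark figure.
var big = [];
for (var i = 0; i < 1000000; i++) big.push(i);

console.time("filter/map/reduce chain");
big.filter(function (x) { return x % 2 === 0; })
   .map(function (x) { return x * 2; })
   .reduce(function (r, x) { return r + x; }, 0);
console.timeEnd("filter/map/reduce chain");

console.time("single loop");
var sum = 0;
for (var j = 0; j < big.length; j++) {
  if (big[j] % 2 === 0) sum += big[j] * 2;
}
console.timeEnd("single loop");
```

Whether the difference matters in practice depends on how large the arrays are and how hot the code path is.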