The reason for testing this scenario: when merging results from multiple data sources, sometimes what remains is effectively a single sub-query result, and pushing the processing back into a SQL database is not necessarily reasonable (the network latency is too high).
Test data: 100,000 rows; result set: 1,000 rows.
Latency for limit 20 offset 0:
package com.hundsun.ta.base.service;

import com.hundsun.ta.utils.JsonUtils;
import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;

import java.math.BigDecimal;
import java.util.*;
import java.util.stream.Collectors;

import static java.util.stream.Collectors.*;

/**
 * @author zjhua
 * @description
 * @date 2019/10/3 15:35
 */
public class JavaStreamCommonSQLTest {
    public static void main(String[] args) {
        List<Person> persons = new ArrayList<>();
        for (int i = 100000; i > 0; i--) {
            persons.add(new Person("Person " + (i + 1) % 1000, i % 100, i % 1000, new BigDecimal(i), i));
        }
        System.out.println(System.currentTimeMillis());
        // group by (name, age): multi-field grouping requires nested groupingBy collectors
        Map<String, Map<Integer, Data>> result = persons.stream().collect(
                groupingBy(Person::getName, Collectors.groupingBy(Person::getAge,
                        collectingAndThen(summarizingDouble(Person::getQuantity),
                                dss -> new Data((long) dss.getAverage(), (long) dss.getSum())))));
        // flatten the nested map into a flat result list
        List<ResultGroup> list = new ArrayList<>();
        result.forEach((k, v) ->
                v.forEach((ik, iv) -> list.add(new ResultGroup(k, ik, iv.average, iv.sum))));
        list.sort(Comparator.comparing(ResultGroup::getSum).thenComparing(ResultGroup::getAverage));
        // limit 20 offset 0; subList returns a view, so its result must be kept
        List<ResultGroup> page = list.subList(0, 20);
        System.out.println(System.currentTimeMillis());
        System.out.println(JsonUtils.toJson(page));
    }
}

@lombok.Data
@NoArgsConstructor
@AllArgsConstructor
class Person {
    String name;
    int group;
    int age;
    BigDecimal balance;
    double quantity;
}

@lombok.Data
@NoArgsConstructor
@AllArgsConstructor
@Deprecated
class ResultGroup {
    String name;
    int group;
    long average;
    long sum;
}

class Data {
    long average;
    long sum;

    public Data(long average, long sum) {
        this.average = average;
        this.sum = sum;
    }
}
Start: 1570093479002
End:   1570093479235 -- roughly 230 ms
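As an aside on the limit/offset step: `subList` only returns a view of the sorted list, so its result has to be kept. The same `limit N offset M` semantics can also be expressed with `skip`/`limit` on a stream. A minimal sketch (the `page` helper and class name are illustrative, not from the original code):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PageDemo {
    // Apply "limit N offset M" to an already-sorted list and copy out the page,
    // so the result does not pin the full backing list the way subList does.
    static <T> List<T> page(List<T> sorted, int offset, int limit) {
        return sorted.stream().skip(offset).limit(limit).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // 100..1 in descending order stands in for the sorted aggregation result
        List<Integer> data = IntStream.rangeClosed(1, 100).boxed()
                .sorted(Comparator.reverseOrder())
                .collect(Collectors.toList());
        System.out.println(page(data, 0, 5));   // limit 5 offset 0  -> [100, 99, 98, 97, 96]
        System.out.println(page(data, 10, 5));  // limit 5 offset 10 -> [90, 89, 88, 87, 86]
    }
}
```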
Test data: 100,000 rows; result set: 90,000 rows.
Latency for limit 20 offset 10000:
package com.hundsun.ta.base.service;

import com.hundsun.ta.utils.JsonUtils;
import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;

import java.math.BigDecimal;
import java.util.*;
import java.util.stream.Collectors;

import static java.util.stream.Collectors.*;

/**
 * @author zjhua
 * @description
 * @date 2019/10/3 15:35
 */
public class JavaStreamCommonSQLTest {
    public static void main(String[] args) {
        List<Person> persons = new ArrayList<>();
        for (int i = 100000; i > 0; i--) {
            persons.add(new Person("Person " + (i + 1) % 1000, i > 90000 ? i % 10000 : i, i % 1000, new BigDecimal(i), i));
        }
        System.out.println(System.currentTimeMillis());
        // group by (name, group): multi-field grouping requires nested groupingBy collectors
        Map<String, Map<Integer, Data>> result = persons.stream().collect(
                groupingBy(Person::getName, Collectors.groupingBy(Person::getGroup,
                        collectingAndThen(summarizingDouble(Person::getQuantity),
                                dss -> new Data((long) dss.getAverage(), (long) dss.getSum())))));
        // flatten the nested map into a flat result list
        List<ResultGroup> list = new ArrayList<>();
        result.forEach((k, v) ->
                v.forEach((ik, iv) -> list.add(new ResultGroup(k, ik, iv.average, iv.sum))));
        list.sort(Comparator.comparing(ResultGroup::getSum).thenComparing(ResultGroup::getAverage));
        System.out.println(list.size());
        // limit 20 offset 10000; subList returns a view, so its result must be kept
        List<ResultGroup> page = list.subList(10000, 10020);
        System.out.println(System.currentTimeMillis());
        System.out.println(JsonUtils.toJson(page));
    }
}

@lombok.Data
@NoArgsConstructor
@AllArgsConstructor
class Person {
    String name;
    int group;
    int age;
    BigDecimal balance;
    double quantity;
}

@lombok.Data
@NoArgsConstructor
@AllArgsConstructor
@Deprecated
class ResultGroup {
    String name;
    int group;
    long average;
    long sum;
}

class Data {
    long average;
    long sum;

    public Data(long average, long sum) {
        this.average = average;
        this.sum = sum;
    }
}
Start: 1570093823404
End:   1570093823758 -- roughly 350 ms
Overall, as things stand, Java Stream cannot replace SQL at low cost. For example, the typical group by on multiple fields is not supported directly: it requires nested maps (which are not only awkward to write but also slower), and the aggregation result of a group by must still be flattened into a separate class. The development cost is simply too high.
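One common workaround for the nested-map problem (not used in the tests above, but worth noting) is to group by a composite key instead of nesting `groupingBy` calls. `Arrays.asList` gives value-based `equals`/`hashCode`, so a small list can serve as a multi-field key without a helper class. A minimal sketch, with its own simplified `Person`:

```java
import java.util.Arrays;
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MultiKeyGroupBy {
    static class Person {
        final String name;
        final int age;
        final double quantity;
        Person(String name, int age, double quantity) {
            this.name = name;
            this.age = age;
            this.quantity = quantity;
        }
    }

    // Group by (name, age) in a single pass, producing sum/avg per composite key.
    static Map<List<Object>, DoubleSummaryStatistics> groupStats(List<Person> persons) {
        return persons.stream().collect(
                Collectors.groupingBy(
                        p -> Arrays.<Object>asList(p.name, p.age),  // composite key
                        Collectors.summarizingDouble(p -> p.quantity)));
    }

    public static void main(String[] args) {
        List<Person> persons = Arrays.asList(
                new Person("a", 1, 10.0),
                new Person("a", 1, 20.0),
                new Person("b", 2, 5.0));
        groupStats(persons).forEach((key, s) ->
                System.out.println(key + " sum=" + s.getSum() + " avg=" + s.getAverage()));
    }
}
```

This avoids one level of map nesting and the manual flattening loop, though the result rows still need to be copied into a dedicated class if typed fields are wanted, so it only partially reduces the cost described above.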
Reference: https://stackoverflow.com/questions/32071726/java-8-stream-groupingby-with-multiple-collectors
Original post: https://www.cnblogs.com/zhjh256/p/11619840.html