Spark: accumulating history + overall totals

Accumulating history in Spark relies mainly on window functions, while computing overall (grand-total) statistics requires the rollup function.

1 Application scenarios:

  1. We need to compute each user's cumulative total usage time (accumulating history).

  2. The front-end pages need to query along multiple dimensions, such as product, region, and so on.

2 Source data:

product_code |event_date |duration |
-------------|-----------|---------|
1438         |2016-05-13 |165      |
1438         |2016-05-14 |595      |
1438         |2016-05-15 |105      |
1629         |2016-05-13 |12340    |
1629         |2016-05-14 |13850    |
1629         |2016-05-15 |227      |
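
The examples below assume a DataFrame named df_userlogs_date and a registered temp table named userlogs_date, with product_code exposed as pcode. The original does not show that setup, so here is a minimal sketch of how it might look in a Spark 1.x shell session (the UserLog case class is a hypothetical helper):

// Assumed setup (not shown in the original): build the sample DataFrame
// and register it as a temp table so the SQL examples can query it.
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc) // sc: an existing SparkContext, e.g. in spark-shell
import sqlContext.implicits._

// pcode corresponds to product_code in the raw data above
case class UserLog(pcode: String, event_date: String, duration: Long)

val df_userlogs_date = sc.parallelize(Seq(
    UserLog("1438", "2016-05-13", 165L),
    UserLog("1438", "2016-05-14", 595L),
    UserLog("1438", "2016-05-15", 105L),
    UserLog("1629", "2016-05-13", 12340L),
    UserLog("1629", "2016-05-14", 13850L),
    UserLog("1629", "2016-05-15", 227L)
)).toDF()

df_userlogs_date.registerTempTable("userlogs_date")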

3 Implementing the business scenarios

3.1 Scenario 1: accumulating history

As the source data shows, we already have each user's usage time per day. When aggregating, we want the 14th to include the 13th, the 15th to include the 14th and 13th, and so on.

3.1.1 Spark SQL implementation

// Spark SQL: accumulate historical data with a window function
sqlContext.sql(
"""
  select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc) as sum_duration
  from userlogs_date
""").show
+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         760|
| 1438|2016-05-15|         865|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       26190|
| 1629|2016-05-15|       26417|
+-----+----------+------------+

3.1.2 DataFrame implementation

// Use Column's over function and pass in a window specification
import org.apache.spark.sql.expressions._
val first_2_now_window = Window.partitionBy("pcode").orderBy("event_date")
df_userlogs_date.select(
    $"pcode",
    $"event_date",
    sum($"duration").over(first_2_now_window).as("sum_duration")
).show

+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         760|
| 1438|2016-05-15|         865|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       26190|
| 1629|2016-05-15|       26417|
+-----+----------+------------+

3.1.3 Extension: accumulating over a time range

Real-world accumulation logic is far more complex than the above, for example accumulating the previous N days, or the previous N days through the following M days. Let's implement these:

3.1.3.1 Accumulate all history up to the current row:

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc) as sum_duration from userlogs_date

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between unbounded preceding and current row) as sum_duration from userlogs_date

Window.partitionBy("pcode").orderBy("event_date").rowsBetween(Long.MinValue, 0)

Window.partitionBy("pcode").orderBy("event_date")

The four forms above are completely equivalent.

3.1.3.2 Accumulate the previous N days (through the current day), assuming N = 3

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between 3 preceding and current row) as sum_duration from userlogs_date

Window.partitionBy("pcode").orderBy("event_date").rowsBetween(-3, 0)

3.1.3.3 Accumulate the previous N days and the following M days, assuming N = 3 and M = 5

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between 3 preceding and 5 following) as sum_duration from userlogs_date

Window.partitionBy("pcode").orderBy("event_date").rowsBetween(-3, 5)

3.1.3.4 Accumulate all rows in the partition
select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between unbounded preceding and unbounded following ) as sum_duration from userlogs_date
Window.partitionBy("pcode").orderBy("event_date").rowsBetween(Long.MinValue,Long.MaxValue)

To summarize:

preceding: accumulates the N rows before the current row (within the partition). To start from the first row of the partition, use unbounded. N is the offset backwards from the current row.

following: the opposite of preceding; accumulates the N rows after the current row (within the partition). To go through the end of the partition, use unbounded. N is the offset forwards from the current row.

current row: as the name implies, the current row itself, with an offset of 0.

Note: the "N preceding", "M following", and current row bounds above all include the row at that offset.

3.1.3.5 Measured results
Accumulating history (the current day plus all earlier days in the partition), form 1:

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc) as sum_duration from userlogs_date


+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         760|
| 1438|2016-05-15|         865|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       26190|
| 1629|2016-05-15|       26417|
+-----+----------+------------+
Accumulating history (the current day plus all earlier days in the partition), form 2:

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between unbounded preceding and current row) as sum_duration from userlogs_date
+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         760|
| 1438|2016-05-15|         865|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       26190|
| 1629|2016-05-15|       26417|
+-----+----------+------------+
Accumulating today and yesterday:

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between 1 preceding and current row) as sum_duration from userlogs_date
+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         760|
| 1438|2016-05-15|         700|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       26190|
| 1629|2016-05-15|       14077|
+-----+----------+------------+
Accumulating today, yesterday, and tomorrow:

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between 1 preceding and 1 following) as sum_duration from userlogs_date
+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         760|
| 1438|2016-05-14|         865|
| 1438|2016-05-15|         700|
| 1629|2016-05-13|       26190|
| 1629|2016-05-14|       26417|
| 1629|2016-05-15|       14077|
+-----+----------+------------+
Accumulating everything in the partition (the current day plus all rows before and after):

select pcode,event_date,sum(duration) over (partition by pcode order by event_date asc rows between unbounded preceding and unbounded following) as sum_duration from userlogs_date
+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| 1438|2016-05-13|         865|
| 1438|2016-05-14|         865|
| 1438|2016-05-15|         865|
| 1629|2016-05-13|       26417|
| 1629|2016-05-14|       26417|
| 1629|2016-05-15|       26417|
+-----+----------+------------+
3.2 Scenario 2: overall totals

3.2.1 Spark SQL implementation

// Spark SQL: use rollup to add the ALL (grand-total) rows
sqlContext.sql(
"""
  select pcode,event_date,sum(duration) as sum_duration
  from userlogs_date_1
  group by pcode,event_date with rollup
  order by pcode,event_date
""").show()

+-----+----------+------------+
|pcode|event_date|sum_duration|
+-----+----------+------------+
| null|      null|       27282|
| 1438|      null|         865|
| 1438|2016-05-13|         165|
| 1438|2016-05-14|         595|
| 1438|2016-05-15|         105|
| 1629|      null|       26417|
| 1629|2016-05-13|       12340|
| 1629|2016-05-14|       13850|
| 1629|2016-05-15|         227|
+-----+----------+------------+

3.2.2 DataFrame implementation

// Use the DataFrame rollup function for multi-dimensional ALL totals
df_userlogs_date.rollup($"pcode", $"event_date").agg(sum($"duration")).orderBy($"pcode", $"event_date").show()

+-----+----------+-------------+
|pcode|event_date|sum(duration)|
+-----+----------+-------------+
| null|      null|        27282|
| 1438|      null|          865|
| 1438|2016-05-13|          165|
| 1438|2016-05-14|          595|
| 1438|2016-05-15|          105|
| 1629|      null|        26417|
| 1629|2016-05-13|        12340|
| 1629|2016-05-14|        13850|
| 1629|2016-05-15|          227|
+-----+----------+-------------+
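
A possible follow-up, sketched here as an assumption rather than part of the original flow: the grand-total rows produced by rollup come back with null dimension values, so for the front-end queries mentioned in section 1 it can help to relabel them as an explicit "ALL" value with coalesce:

// Sketch: relabel the null dimension values produced by rollup as "ALL"
// so the total rows can be addressed directly by front-end queries.
import org.apache.spark.sql.functions.{coalesce, lit, sum}

df_userlogs_date
    .rollup($"pcode", $"event_date")
    .agg(sum($"duration").as("sum_duration"))
    .select(
        coalesce($"pcode", lit("ALL")).as("pcode"),
        coalesce($"event_date", lit("ALL")).as("event_date"),
        $"sum_duration"
    )
    .orderBy($"pcode", $"event_date")
    .show()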

Appendix

Below is the official API documentation for the two functions:

org.apache.spark.sql.DataFrame.scala
def rollup(col1: String, cols: String*): GroupedData
Create a multi-dimensional rollup for the current DataFrame using the specified columns, so we can run aggregation on them. See GroupedData for all the available aggregate functions.
This is a variant of rollup that can only group by existing columns using column names (i.e. cannot construct expressions).

// Compute the average for all numeric columns rolluped by department and group.
df.rollup("department", "group").avg()

// Compute the max age and average salary, rolluped by department and gender.
df.rollup($"department", $"gender").agg(Map(
  "salary" -> "avg",
  "age" -> "max"
))
def rollup(cols: Column*): GroupedData
Create a multi-dimensional rollup for the current DataFrame using the specified columns, so we can run aggregation on them. See GroupedData for all the available aggregate functions.
df.rollup($"department", $"group").avg()

// Compute the max age and average salary, rolluped by department and gender.
df.rollup($"department", $"gender").agg(Map(
  "salary" -> "avg",
  "age" -> "max"
))
org.apache.spark.sql.Column.scala
def over(window: WindowSpec): Column
Define a windowing column.

val w = Window.partitionBy("name").orderBy("id")
df.select(
  sum("price").over(w.rangeBetween(Long.MinValue, 2)),
  avg("price").over(w.rowsBetween(0, 4))
)