Spark2 DataFrame Common Operations (9): Analytic Functions -- Ranking Functions row_number, rank, dense_rank, percent_rank

select gender,
       age,
       row_number() over(partition by gender order by age) as rowNumber,
       rank() over(partition by gender order by age) as ranks,
       dense_rank() over(partition by gender order by age) as denseRank,
       percent_rank() over(partition by gender order by age) as percentRank
  from Affairs
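
The four functions differ in how they treat ties within each gender partition: row_number assigns a unique, consecutive number to every row even when ages tie; rank gives tied rows the same value and then skips ahead, leaving gaps; dense_rank gives tied rows the same value without gaps; and percent_rank is computed as (rank - 1) / (rows in partition - 1), so it ranges from 0 to 1. The example below builds a small sample DataFrame, registers it as the Affairs view, and runs this query.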

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs to DataFrames
import spark.implicits._

val dataList: List[(Double, String, Double, Double, String, Double, Double, Double, Double)] = List(
      (0, "male", 37, 10, "no", 3, 18, 7, 4),
      (0, "female", 27, 4, "no", 4, 14, 6, 4),
      (0, "female", 32, 15, "yes", 1, 12, 1, 4),
      (0, "male", 57, 15, "yes", 5, 18, 6, 5),
      (0, "male", 22, 0.75, "no", 2, 17, 6, 3),
      (0, "female", 32, 1.5, "no", 2, 17, 5, 5),
      (0, "female", 22, 0.75, "no", 2, 12, 1, 3),
      (0, "male", 57, 15, "yes", 2, 14, 4, 4),
      (0, "female", 32, 15, "yes", 4, 16, 1, 2),
      (0, "male", 22, 1.5, "no", 4, 14, 4, 5),
      (0, "male", 37, 15, "yes", 2, 20, 7, 2),
      (0, "male", 27, 4, "yes", 4, 18, 6, 4),
      (0, "male", 47, 15, "yes", 5, 17, 6, 4),
      (0, "female", 22, 1.5, "no", 2, 17, 5, 4),
      (0, "female", 27, 4, "no", 4, 14, 5, 4),
      (0, "female", 37, 15, "yes", 1, 17, 5, 5),
      (0, "female", 37, 15, "yes", 2, 18, 4, 3),
      (0, "female", 22, 0.75, "no", 3, 16, 5, 4),
      (0, "female", 22, 1.5, "no", 2, 16, 5, 5),
      (0, "female", 27, 10, "yes", 2, 14, 1, 5),
      (0, "female", 22, 1.5, "no", 2, 16, 5, 5),
      (0, "female", 22, 1.5, "no", 2, 16, 5, 5),
      (0, "female", 27, 10, "yes", 4, 16, 5, 4),
      (0, "female", 32, 10, "yes", 3, 14, 1, 5),
      (0, "male", 37, 4, "yes", 2, 20, 6, 4))

val data = dataList.toDF("affairs", "gender", "age", "yearsmarried", "children", "religiousness", "education", "occupation", "rating")

data.printSchema()

// Create a temporary view
data.createOrReplaceTempView("Affairs")
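
Since percent_rank divides by (partition row count - 1), it helps to know how many rows fall into each gender partition before reading the results. A quick sanity check such as the following should report 16 female rows and 9 male rows, which explains the 0.4 and 0.375 values in the output further down:

// Count rows per gender, i.e. the partition sizes used by the window functions
data.groupBy("gender").count().show()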

val s1="row_number() over(partition by gender order by age) as rowNumber,"
val s2="rank() over(partition by gender order by age) as ranks,"
val s3="dense_rank() over(partition by gender order by age) as denseRank,"
val s4="percent_rank() over(partition by gender order by age) as percentRank"
val df8=spark.sql("select gender,age,"+s1+s2+s3+s4+" from Affairs")

df8.show(50)
+------+----+---------+-----+---------+------------------+
|gender| age|rowNumber|ranks|denseRank|       percentRank|
+------+----+---------+-----+---------+------------------+
|female|22.0|        1|    1|        1|               0.0|
|female|22.0|        2|    1|        1|               0.0|
|female|22.0|        3|    1|        1|               0.0|
|female|22.0|        4|    1|        1|               0.0|
|female|22.0|        5|    1|        1|               0.0|
|female|22.0|        6|    1|        1|               0.0|
|female|27.0|        7|    7|        2|               0.4|
|female|27.0|        8|    7|        2|               0.4|
|female|27.0|        9|    7|        2|               0.4|
|female|27.0|       10|    7|        2|               0.4|
|female|32.0|       11|   11|        3|0.6666666666666666|
|female|32.0|       12|   11|        3|0.6666666666666666|
|female|32.0|       13|   11|        3|0.6666666666666666|
|female|32.0|       14|   11|        3|0.6666666666666666|
|female|37.0|       15|   15|        4|0.9333333333333333|
|female|37.0|       16|   15|        4|0.9333333333333333|
|  male|22.0|        1|    1|        1|               0.0|
|  male|22.0|        2|    1|        1|               0.0|
|  male|27.0|        3|    3|        2|              0.25|
|  male|37.0|        4|    4|        3|             0.375|
|  male|37.0|        5|    4|        3|             0.375|
|  male|37.0|        6|    4|        3|             0.375|
|  male|47.0|        7|    7|        4|              0.75|
|  male|57.0|        8|    8|        5|             0.875|
|  male|57.0|        9|    8|        5|             0.875|
+------+----+---------+-----+---------+------------------+
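
Reading the output against the formula above: the female partition has 16 rows, so for age 27 (rank 7) percent_rank is (7 - 1) / (16 - 1) = 0.4; the male partition has 9 rows, so for age 37 (rank 4) it is (4 - 1) / (9 - 1) = 0.375, matching the table.

The same ranking can also be expressed with the DataFrame API instead of a SQL string. The following is a minimal sketch using a Window specification with the same partitioning and ordering; it should produce the same columns as df8:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number, rank, dense_rank, percent_rank}

// Same partitioning and ordering as the OVER clause in the SQL version
val w = Window.partitionBy("gender").orderBy("age")

val df9 = data.select(
  col("gender"),
  col("age"),
  row_number().over(w).as("rowNumber"),
  rank().over(w).as("ranks"),
  dense_rank().over(w).as("denseRank"),
  percent_rank().over(w).as("percentRank"))

df9.show(50)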