Let's say I have the following DataFrame:
[Row(user='bob', values=[0.5, 0.3, 0.2]),
Row(user='bob', values=[0.1, 0.3, 0.6]),
Row(user='bob', values=[0.8, 0.1, 0.1])]
I would like to groupBy user and do something like avg(values), where the average is taken over each index of the array values, like this:
[Row(user='bob', averages=[0.466667, 0.233333, 0.3])]
How can I do this in PySpark?
You can expand the array and compute the average for each index.
Python
from pyspark.sql.functions import array, avg, col

# Length of the array column (assumes every row has an array of the same length)
n = len(df.select("values").first()[0])

# Average each index separately, then reassemble the results into one array column
df.groupBy("user").agg(
    array(*[avg(col("values")[i]) for i in range(n)]).alias("averages")
)
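As a quick check, the snippet below builds the example DataFrame from the question and runs the aggregation end to end. This is a minimal sketch: the SparkSession setup and the result variable name are mine, and the printed floats are approximate.

from pyspark.sql import Row, SparkSession
from pyspark.sql.functions import array, avg, col

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    Row(user='bob', values=[0.5, 0.3, 0.2]),
    Row(user='bob', values=[0.1, 0.3, 0.6]),
    Row(user='bob', values=[0.8, 0.1, 0.1]),
])

n = len(df.select("values").first()[0])
result = df.groupBy("user").agg(
    array(*[avg(col("values")[i]) for i in range(n)]).alias("averages")
)
result.first()
# -> Row(user='bob', averages=[~0.4667, ~0.2333, ~0.3])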
Scala
import spark.implicits._
import org.apache.spark.sql.functions.{avg, size}

val df = Seq(
  ("bob", Seq(0.5, 0.3, 0.2)),
  ("bob", Seq(0.1, 0.3, 0.6))
).toDF("user", "values")

// Length of the array column (assumes every row has an array of the same length)
val n = df.select(size($"values")).as[Int].first

// Project each index into its own column, then average per user; this yields
// columns avg(values[0]), avg(values[1]), ... rather than a single array column
val values = (0 until n).map(i => $"values"(i))
df.select($"user" +: values: _*).groupBy($"user").avg()
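In PySpark you can also avoid hard-coding positions by unpivoting the array with posexplode, averaging per index, and collecting the averages back in index order. This is a sketch of that alternative, not the answer's original approach; the column names pos, val, avg_val, and averages are my own choices.

from pyspark.sql.functions import avg, col, collect_list, posexplode, sort_array, struct

# Unpivot each array element into its own row, keeping its index
exploded = df.select("user", posexplode("values").alias("pos", "val"))

# Average per (user, index), then rebuild an index-ordered array of averages
result = (exploded
    .groupBy("user", "pos").agg(avg("val").alias("avg_val"))
    .groupBy("user")
    .agg(sort_array(collect_list(struct("pos", "avg_val"))).alias("tmp"))
    .select("user", col("tmp.avg_val").alias("averages")))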