I have a Spark DataFrame with the following data (I use spark-csv to load it in):
key,value
1,10
2,12
3,0
1,20
Is there anything similar to the Spark RDD reduceByKey that can return a Spark DataFrame such as the following? (Basically, summing up the values for the same key:)
key,value
1,30
2,12
3,0
(I can transform the data to an RDD and do a reduceByKey operation, but is there a more idiomatic Spark DataFrame API way to do this?)
If you don't care about column names you can use groupBy followed by sum:
df.groupBy($"key").sum("value")
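This works, but the aggregated column gets an auto-generated name (sum(value) rather than value), which is why column names matter here. A quick way to see this, assuming the sample DataFrame from the question:

```scala
df.groupBy($"key").sum("value").printSchema()
// The schema shows the aggregated column is named "sum(value)",
// not "value", so downstream code referring to "value" would break.
```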
Otherwise it is better to replace sum with agg:
df.groupBy($"key").agg(sum($"value").alias("value"))
Finally you can use raw SQL:
df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
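Note that on Spark 2.0+, registerTempTable is deprecated in favor of createOrReplaceTempView, and sqlContext.sql is typically replaced by spark.sql on a SparkSession. A sketch of the equivalent, assuming a SparkSession named spark:

```scala
// Spark 2.x+ equivalent of the raw-SQL approach above
df.createOrReplaceTempView("df")
spark.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
```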
See also DataFrame / Dataset groupBy behaviour/optimization