I have a dataframe with columns time, a, b, c, d, val. I would like to create a dataframe with an additional column that contains the row number of each row within its group, where a, b, c, d form the group key.
I tried it with Spark SQL by defining a window function; in SQL it looks like this:
select time, a, b, c, d, val, row_number() over(partition by a, b, c, d order by time) as rn from table
I would like to do this on the dataframe itself, using the DataFrame API rather than Spark SQL.
Thanks
I don't know the Python API very well, but I will give it a try. You can try something like:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Partition by the group key and number the rows within each partition, ordered by time
df.withColumn(
    "row_number",
    F.row_number().over(Window.partitionBy("a", "b", "c", "d").orderBy("time"))
).show()
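For completeness, here is a minimal runnable sketch; the SparkSession setup and sample rows are illustrative assumptions, only the column names come from the question:

# Sample data and setup are assumptions for demonstration purposes only
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, "x", "y", "z", "w", 10.0),
     (2, "x", "y", "z", "w", 20.0),
     (1, "p", "q", "r", "s", 30.0)],
    ["time", "a", "b", "c", "d", "val"])

# Within each (a, b, c, d) group, rows are numbered 1, 2, ... ordered by time
w = Window.partitionBy("a", "b", "c", "d").orderBy("time")
df.withColumn("rn", F.row_number().over(w)).show()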