
Multiple criteria for aggregation on a pySpark DataFrame


I have a pySpark dataframe that looks like this:

+-------------+----------+
|          sku|      date|
+-------------+----------+
|MLA-603526656|02/09/2016|
|MLA-603526656|01/09/2016|
|MLA-604172009|02/10/2016|
|MLA-605470584|02/09/2016|
|MLA-605502281|02/10/2016|
|MLA-605502281|02/09/2016|
+-------------+----------+

I want to group by sku, and then calculate the min and max dates. If I do this:

df_testing.groupBy('sku') \
    .agg({'date': 'min', 'date': 'max'}) \
    .limit(10) \
    .show()

the behavior is the same as in Pandas: I only get the sku and max(date) columns. In Pandas I would normally do the following to get the results I want:

df_testing.groupBy('sku') \
    .agg({'day': ['min', 'max']}) \
    .limit(10) \
    .show()

However, on pySpark this does not work: I get a java.util.ArrayList cannot be cast to java.lang.String error. Could anyone please point me to the correct syntax?
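For reference, the nested-list form does work against an equivalent Pandas DataFrame. A quick sketch built from the sample rows above (the pd import and the df_testing_pd name are just for illustration, and I use 'date' rather than 'day' to match the sample data):

import pandas as pd

df_testing_pd = pd.DataFrame({
    'sku': ['MLA-603526656', 'MLA-603526656', 'MLA-604172009'],
    'date': ['02/09/2016', '01/09/2016', '02/10/2016'],
})
# Pandas accepts a list of aggregations per column and returns
# MultiIndex columns ('date', 'min') and ('date', 'max').
df_testing_pd.groupby('sku').agg({'date': ['min', 'max']})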

Thanks.

asked Oct 27 '16 by masta-g3


1 Answer

You cannot use a dict for this. Use:

>>> from pyspark.sql import functions as F
>>>
>>> df_testing.groupBy('sku').agg(F.min('date'), F.max('date'))
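The reason the dict form only returned max(date) is that a Python dict literal cannot hold duplicate keys, so the 'min' entry is silently overwritten before Spark ever sees it. A quick plain-Python illustration:

>>> spec = {'date': 'min', 'date': 'max'}
>>> spec  # the later value for the duplicate key wins
{'date': 'max'}

If you want friendlier names than min(date) and max(date) on the result, you can also alias the aggregates (a small sketch; the min_date and max_date names are just illustrative):

>>> df_testing.groupBy('sku') \
...     .agg(F.min('date').alias('min_date'),
...          F.max('date').alias('max_date')) \
...     .show()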
answered Sep 18 '22 by user6022341