I have a pySpark dataframe that looks like this:
```
+-------------+----------+
|          sku|      date|
+-------------+----------+
|MLA-603526656|02/09/2016|
|MLA-603526656|01/09/2016|
|MLA-604172009|02/10/2016|
|MLA-605470584|02/09/2016|
|MLA-605502281|02/10/2016|
|MLA-605502281|02/09/2016|
+-------------+----------+
```
I want to group by sku and then calculate the min and max dates. If I do this:
```
df_testing.groupBy('sku') \
    .agg({'date': 'min', 'date': 'max'}) \
    .limit(10) \
    .show()
```
the behavior is the same as in Pandas: I only get the `sku` and `max(date)` columns. In Pandas I would normally do the following to get the results I want:
```
df_testing.groupBy('sku') \
    .agg({'day': ['min', 'max']}) \
    .limit(10) \
    .show()
```
However, in pySpark this does not work, and I get a `java.util.ArrayList cannot be cast to java.lang.String` error. Could anyone please point me to the correct syntax?
Thanks.
You cannot express multiple aggregations on the same column with a dict: a Python dict cannot hold duplicate keys, and the dict form of `agg` does not accept a list of function names. Use the `functions` module instead:
```
>>> from pyspark.sql import functions as F
>>>
>>> df_testing.groupBy('sku').agg(F.min('date'), F.max('date'))
```
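To see why the dict version only produced `max(date)`, note that the aggregation spec is an ordinary Python dict, so the duplicate key collapses before Spark ever sees it. A quick plain-Python check (no Spark needed):

```python
# A dict literal silently drops duplicate keys: the second 'date'
# entry overwrites the first, so only one aggregation remains.
spec = {'date': 'min', 'date': 'max'}
print(spec)  # {'date': 'max'}
```

If you also want friendlier result column names than `min(date)` and `max(date)`, each aggregate can be aliased, e.g. `F.min('date').alias('min_date')` (`alias` is a standard Column method).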