I have a Spark DataFrame that looks like this:
| time   | col1 | col2 |
|--------|------|------|
| 123456 | 2    | A    |
| 123457 | 4    | B    |
| 123458 | 7    | C    |
| 123459 | 5    | D    |
| 123460 | 3    | E    |
| 123461 | 1    | F    |
| 123462 | 9    | G    |
| 123463 | 8    | H    |
| 123464 | 6    | I    |
Now I need a new column holding the values of "col1" in sorted order, while all existing columns keep their original row order (using PySpark):
| time   | col1   | col2   | col1_sorted |
|--------|--------|--------|-------------|
| (same) | (same) | (same) | (sorted)    |
| 123456 | 2      | A      | 1           |
| 123457 | 4      | B      | 2           |
| 123458 | 7      | C      | 3           |
| 123459 | 5      | D      | 4           |
| 123460 | 3      | E      | 5           |
| 123461 | 1      | F      | 6           |
| 123462 | 9      | G      | 7           |
| 123463 | 8      | H      | 8           |
| 123464 | 6      | I      | 9           |
Thanks in advance for any help!
For Spark 2.3.1, you can try a grouped-map pandas_udf; see below (this assumes the original dataframe is already sorted by the time column):
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructType

# Copy the original schema and append the new column, so the UDF's return
# type matches the dataframe it produces
schema = StructType.fromJson(df.schema.jsonValue()).add('col1_sorted', 'integer')

@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def get_col1_sorted(pdf):
    # Keep rows in time order and attach the sorted col1 values positionally
    return pdf.sort_values(['time']).assign(col1_sorted=sorted(pdf["col1"]))

# groupby() with no columns puts the entire dataframe into a single group
df.groupby().apply(get_col1_sorted).show()
+------+----+----+-----------+
| time|col1|col2|col1_sorted|
+------+----+----+-----------+
|123456| 2| A| 1|
|123457| 4| B| 2|
|123458| 7| C| 3|
|123459| 5| D| 4|
|123460| 3| E| 5|
|123461| 1| F| 6|
|123462| 9| G| 7|
|123463| 8| H| 8|
|123464| 6| I| 9|
+------+----+----+-----------+
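If pandas_udf is not available to you, or you prefer to stay in pure Spark SQL, a window-based alternative should also work. This is only a sketch, not part of the original answer: it pairs each row's position in time order with the same position in col1 order via a self-join. Note that, like the empty groupby() above, an un-partitioned window pulls all rows into a single task, so neither approach distributes well on very large data.

from pyspark.sql import Window
from pyspark.sql import functions as F

# Position of each row in time order
w_time = Window.orderBy('time')
# Position of each col1 value in ascending order
w_col1 = Window.orderBy('col1')

left = df.withColumn('pos', F.row_number().over(w_time))
right = (df.select(F.col('col1').alias('col1_sorted'))
           .withColumn('pos', F.row_number().over(w_col1)))

# Pair the i-th row by time with the i-th smallest col1 value
left.join(right, 'pos').drop('pos').orderBy('time').show()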