How to get the name of the column with the maximum value in a PySpark DataFrame

How do we get the name of the column holding the maximum value in each row of a PySpark DataFrame?

   Alice  Eleonora  Mike  Helen       MAX
0      2         7     8      6      Mike
1     11         5     9      4     Alice
2      6        15    12      3  Eleonora
3      5         3     7      8     Helen

I need something like this: the names of the columns, not the max values. I am able to get the max values; I need the names.
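
For reference, a DataFrame with those values could be created like this (a sketch, assuming an active SparkSession named spark):

df = spark.createDataFrame(
    [(2, 7, 8, 6), (11, 5, 9, 4), (6, 15, 12, 3), (5, 3, 7, 8)],
    ["Alice", "Eleonora", "Mike", "Helen"],
)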

asked Oct 18 '17 by Vikas Bishnoi

1 Answer

You can chain when conditions to find which column is equal to the maximum value:

import pyspark.sql.functions as psf

# Build a chained when(...) expression as a string, one branch per column,
# then eval it: the branch whose column matches max_value returns its name.
cond = "psf.when" + ".when".join(["(psf.col('" + c + "') == psf.col('max_value'), psf.lit('" + c + "'))" for c in df.columns])
df.withColumn("max_value", psf.greatest(*df.columns))\
    .withColumn("MAX", eval(cond))\
    .show()

    +-----+--------+----+-----+---------+--------+
    |Alice|Eleonora|Mike|Helen|max_value|     MAX|
    +-----+--------+----+-----+---------+--------+
    |    2|       7|   8|    6|        8|    Mike|
    |   11|       5|   9|    4|       11|   Alice|
    |    6|      15|  12|    3|       15|Eleonora|
    |    5|       3|   7|    8|        8|   Helen|
    +-----+--------+----+-----+---------+--------+
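
If you would rather avoid eval, the same chained condition can be built with functools.reduce; a minimal sketch, assuming the same df and psf alias as above:

from functools import reduce

# Seed the chain with the first column, then fold the remaining ones in.
first, rest = df.columns[0], df.columns[1:]
cond = reduce(
    lambda acc, c: acc.when(psf.col(c) == psf.col("max_value"), psf.lit(c)),
    rest,
    psf.when(psf.col(first) == psf.col("max_value"), psf.lit(first)),
)
df.withColumn("max_value", psf.greatest(*df.columns))\
    .withColumn("MAX", cond)\
    .show()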

OR: explode and filter

from itertools import chain

# Turn each row into (column name, value) pairs, keep the pair whose value
# equals the row maximum, and return the matching column name.
df.withColumn("max_value", psf.greatest(*df.columns))\
    .select("*", psf.posexplode(psf.create_map(list(chain(*[(psf.lit(c), psf.col(c)) for c in df.columns])))))\
    .filter("max_value = value")\
    .select(df.columns + [psf.col("key").alias("MAX")])\
    .show()
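
Note that if several columns tie for the row maximum, the filter keeps one exploded row per tie. A sketch that collects the tied names into a single array column instead (my assumption about the desired tie behavior; it also assumes the original rows are distinct, since it groups on the original columns):

df.withColumn("max_value", psf.greatest(*df.columns))\
    .select("*", psf.explode(psf.create_map(list(chain(*[(psf.lit(c), psf.col(c)) for c in df.columns])))))\
    .filter("max_value = value")\
    .groupBy(df.columns)\
    .agg(psf.collect_list("key").alias("MAX"))\
    .show()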

OR: using a UDF on a dictionary:

from pyspark.sql.types import StringType

# Build a {column name: value} map per row and let Python's max pick the
# key with the largest value.
argmax_udf = psf.udf(lambda m: max(m, key=m.get), StringType())
df.withColumn("map", psf.create_map(list(chain(*[(psf.lit(c), psf.col(c)) for c in df.columns]))))\
    .withColumn("MAX", argmax_udf("map"))\
    .drop("map")\
    .show()
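
One caveat: in Python 3, max raises a TypeError if the map contains null values, since None does not compare with numbers. A minimal null-tolerant variant that drops in for argmax_udf above (assuming an all-null row should yield a null MAX):

def safe_argmax(m):
    # Assumption: null values are ignored; a row of all nulls yields null.
    m = {k: v for k, v in m.items() if v is not None}
    return max(m, key=m.get) if m else None

safe_argmax_udf = psf.udf(safe_argmax, StringType())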

OR: using a UDF with a parameter:

from pyspark.sql.types import StringType

def argmax(cols, *args):
    # Return the name of the first column whose value equals the row max.
    return [c for c, v in zip(cols, args) if v == max(args)][0]

argmax_udf = lambda cols: psf.udf(lambda *args: argmax(cols, *args), StringType())
df.withColumn("MAX", argmax_udf(df.columns)(*df.columns))\
    .show()
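
For completeness, the same logic can also be registered for use from Spark SQL; a sketch assuming a SparkSession named spark and a hypothetical view name scores:

spark.udf.register(
    "argmax_cols",
    lambda *args: argmax(["Alice", "Eleonora", "Mike", "Helen"], *args),
    StringType(),
)
df.createOrReplaceTempView("scores")
spark.sql("SELECT *, argmax_cols(Alice, Eleonora, Mike, Helen) AS MAX FROM scores").show()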
answered Sep 27 '22 by MaFF