
PySpark - Calculate number of null values in each dataframe column

I have a dataframe with many columns. My aim is to produce a dataframe that lists each column name along with the number of null values in that column.

Example:

+-------------+-------------+
| Column_Name | NULL_Values |
+-------------+-------------+
|  Column_1   |      15     |
|  Column_2   |      56     |
|  Column_3   |      18     |
|     ...     |     ...     |
+-------------+-------------+

I have managed to get the number of null values for ONE column like so:

import pyspark.sql.functions as F

df.agg(F.count(F.when(F.isnull(c), c)).alias('NULL_Count'))

where c is a column name in the dataframe. However, this does not show the name of the column. The output is:

+------------+
| NULL_Count |
+------------+
|     15     |
+------------+

Any ideas?

Asked Dec 10 '22 by LEJ

1 Answer

You can use a list comprehension to loop over all of your columns inside the agg, using alias to name each output column after the column it counts:

import pyspark.sql.functions as F

df_agg = df.agg(*[F.count(F.when(F.isnull(c), c)).alias(c) for c in df.columns])

However, this will return the results in one row as shown below:

df_agg.show()
#+--------+--------+--------+
#|Column_1|Column_2|Column_3|
#+--------+--------+--------+
#|      15|      56|      18|
#+--------+--------+--------+

If you instead want one row per column, with the counts in a single NULL_Count column, you can union a one-row selection for each column of df_agg using functools.reduce as follows:

from functools import reduce
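# Select each column's count as a (Column_Name, NULL_Count) row, then stack the rows with union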
df_agg_col = reduce(
    lambda a, b: a.union(b),
    (
        df_agg.select(F.lit(c).alias("Column_Name"), F.col(c).alias("NULL_Count")) 
        for c in df_agg.columns
    )
)
df_agg_col.show()
#+-----------+----------+
#|Column_Name|NULL_Count|
#+-----------+----------+
#|   Column_1|        15|
#|   Column_2|        56|
#|   Column_3|        18|
#+-----------+----------+

Or you can skip the intermediate step of creating df_agg and do:

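# Same result, building each (Column_Name, NULL_Count) row directly from df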
df_agg_col = reduce(
    lambda a, b: a.union(b),
    (
        df.agg(
            F.count(F.when(F.isnull(c), c)).alias('NULL_Count')
        ).select(F.lit(c).alias("Column_Name"), "NULL_Count")
        for c in df.columns
    )
)
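
For completeness, here is a minimal end-to-end sketch of the approach, assuming a local SparkSession and a made-up three-column dataframe (the data and column names below are purely illustrative):

from functools import reduce
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Toy data: Column_1 has 1 null, Column_2 has 2, Column_3 has 1
df = spark.createDataFrame(
    [(1, None, "a"), (None, 2, "b"), (3, None, None)],
    ["Column_1", "Column_2", "Column_3"],
)

# One aggregation that counts the nulls in every column
df_agg = df.agg(*[F.count(F.when(F.isnull(c), c)).alias(c) for c in df.columns])

# Stack the one-row wide result into (Column_Name, NULL_Count) pairs
df_agg_col = reduce(
    lambda a, b: a.union(b),
    (
        df_agg.select(F.lit(c).alias("Column_Name"), F.col(c).alias("NULL_Count"))
        for c in df_agg.columns
    ),
)
df_agg_col.show()
#+-----------+----------+
#|Column_Name|NULL_Count|
#+-----------+----------+
#|   Column_1|         1|
#|   Column_2|         2|
#|   Column_3|         1|
#+-----------+----------+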
Answered Dec 12 '22 by pault