I have a DataFrame (df) that contains a column user_id:
df = sc.parallelize([(1, "not_set"),
(2, "user_001"),
(3, "user_002"),
(4, "n/a"),
(5, "N/A"),
(6, "userid_not_set"),
(7, "user_003"),
(8, "user_004")]).toDF(["key", "user_id"])
df:
+---+--------------+
|key|       user_id|
+---+--------------+
|  1|       not_set|
|  2|      user_001|
|  3|      user_002|
|  4|           n/a|
|  5|           N/A|
|  6|userid_not_set|
|  7|      user_003|
|  8|      user_004|
+---+--------------+
I would like to replace the following values: not_set, n/a, N/A and userid_not_set with null.
It would be good if I could add any new values to a list and have them replaced as well.
I am currently using a CASE statement within spark.sql to perform this and would like to change it to PySpark.
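For reference, the spark.sql CASE statement I am using looks roughly like this (the view name "events" is only illustrative):
# register a temporary view so the DataFrame can be queried with SQL
df.createOrReplaceTempView("events")
df = spark.sql("""
    SELECT key,
           CASE WHEN user_id IN ('not_set', 'n/a', 'N/A', 'userid_not_set')
                THEN NULL
                ELSE user_id
           END AS user_id
    FROM events
""")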
None inside the when() function corresponds to null. If you want to fill in anything else instead of null, supply that value in its place.
from pyspark.sql.functions import col, when

# Replace the unwanted markers with null; keep all other values unchanged
df = df.withColumn(
    "user_id",
    when(
        col("user_id").isin('not_set', 'n/a', 'N/A', 'userid_not_set'),
        None
    ).otherwise(col("user_id"))
)
df.show()
+---+--------+
|key| user_id|
+---+--------+
| 1| null|
| 2|user_001|
| 3|user_002|
| 4| null|
| 5| null|
| 6| null|
| 7|user_003|
| 8|user_004|
+---+--------+
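For example, if you wanted a placeholder such as "unknown" instead of null (the placeholder string is just an illustration), the same pattern works:
from pyspark.sql.functions import col, when, lit

# "unknown" is only an example placeholder; use whatever value suits your data
df = df.withColumn(
    "user_id",
    when(
        col("user_id").isin('not_set', 'n/a', 'N/A', 'userid_not_set'),
        lit("unknown")
    ).otherwise(col("user_id"))
)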
You can use the built-in when function, which is the equivalent of a CASE expression.
from pyspark.sql import functions as f
df.select(df.key, f.when(df.user_id.isin(['not_set', 'n/a', 'N/A', 'userid_not_set']), None).otherwise(df.user_id)).show()
The values to be replaced can also be stored in a list and referenced:
val_list = ['not_set', 'n/a', 'N/A', 'userid_not_set']
df.select(df.key, f.when(df.user_id.isin(val_list), None).otherwise(df.user_id)).show()
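If you want the result to keep the original column name rather than the auto-generated one, you can alias the expression (or use withColumn as in the previous answer); a minimal sketch:
from pyspark.sql import functions as f

val_list = ['not_set', 'n/a', 'N/A', 'userid_not_set']

# alias the CASE-style expression so the output keeps the user_id column name
df.select(
    df.key,
    f.when(df.user_id.isin(val_list), None).otherwise(df.user_id).alias("user_id")
).show()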