
replace values of one column in a spark df by dictionary key-values (pyspark)

I am stuck on a data transformation task in PySpark: I want to replace all values of one column in a df with the key-value pairs specified in a dictionary.

dict = {'A':1, 'B':2, 'C':3}

My df looks like this:

+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

Now I want to replace all values of col1 with the key-value pairs defined in dict.

Desired Output:

+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

I tried

df.na.replace(dict, 1).show()

but that also replaces the values in col2, which should stay untouched.

Thank you for your help. Greetings :)

asked Jun 27 '17 by getaway22

People also ask

How do you replace values in a column in PySpark?

You can replace column values of PySpark DataFrame by using SQL string functions regexp_replace(), translate(), and overlay() with Python examples.

How do you use the Replace function in PySpark DataFrame?

The function withColumn is called to add (or replace, if the name exists) a column to the data frame. The function regexp_replace will generate a new column by replacing all substrings that match the pattern.

Can I use dictionary in PySpark?

As I said in the beginning, PySpark doesn't have a dictionary type; instead it uses MapType to store the dictionary object. Below is an example of how to create a DataFrame column of MapType using pyspark.sql.types.StructType.

What is F explode in PySpark?

Introduction to PySpark explode: PYSPARK EXPLODE is a function used in the PySpark data model to expand an array- or map-typed column into rows. It returns a new row for each element in the array or map, repeating the other columns.


1 Answer

Your data:

print(df)
DataFrame[col1: string, col2: string]
df.show()
+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

diz = {"A":1, "B":2, "C":3}

Convert the values of your dictionary from integers to strings, so that you don't get errors from replacing values of different types:

diz = {k: str(v) for k, v in diz.items()}

print(diz)
{'A': '1', 'C': '3', 'B': '2'}

Replace the values of col1 (the third argument restricts the replacement to that column):

df2 = df.na.replace(diz, 1, "col1")  # when to_replace is a dict, the value argument (1) is ignored
print(df2)
DataFrame[col1: string, col2: string]

df2.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

If you need to cast the values from String to Integer:

from pyspark.sql.types import IntegerType

df3 = df2.select(df2["col1"].cast(IntegerType()), df2["col2"])
print(df3)
DataFrame[col1: int, col2: string]

df3.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+
answered Oct 23 '22 by titiro89