Convert PySpark dataframe column from list to string


I have this PySpark DataFrame:

+-----------+--------------------+
|uuid       |test_123            |
+-----------+--------------------+
|      1    |[test, test2, test3]|
|      2    |[test4, test, test6]|
|      3    |[test6, test9, t55o]|
+-----------+--------------------+

and I want to convert the column test_123 to be like this:

+-----------+--------------------+
|uuid       |test_123            |
+-----------+--------------------+
|      1    |"test,test2,test3"  |
|      2    |"test4,test,test6"  |
|      3    |"test6,test9,t55o"  |
+-----------+--------------------+

so that each list becomes a comma-separated string.

How can I do this with PySpark?

asked Jul 14 '17 by Dkova

People also ask

How do you convert a column to a string in PySpark?

To convert an array to a string, PySpark SQL provides the built-in function concat_ws(), which takes a delimiter of your choice as its first argument and the array column (type Column) as its second. To use concat_ws(), import it from pyspark.sql.functions.
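For example (a minimal sketch; df and the array column test_123 are the ones from the question above):

from pyspark.sql.functions import concat_ws, col

# join the elements of the test_123 array into one comma-separated string
df = df.withColumn("test_123", concat_ws(",", col("test_123")))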

How do you transform columns in PySpark?

You can update a PySpark DataFrame column using withColumn(), select(), or sql(). Since DataFrames are distributed, immutable collections, you can't really change column values in place; when you change a value with withColumn() or any other approach, PySpark returns a new DataFrame with the updated values.
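A quick sketch of that immutability (df is the question's DataFrame; the derived column is just illustrative):

from pyspark.sql.functions import col

# withColumn() does not mutate df; it returns a new DataFrame
df2 = df.withColumn("uuid_plus_one", col("uuid") + 1)
# df still has its original columns; only df2 has uuid_plus_one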

How do I turn a List into a string?

To convert a list to a string in plain Python, use the join() method: it concatenates the list's elements into a new string, separated by the delimiter you call it on. A list comprehension is only needed if the elements must be converted to strings first.
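In plain Python that looks like this (join() alone is enough when the elements are already strings):

items = ["test", "test2", "test3"]
joined = ",".join(items)  # "test,test2,test3"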


2 Answers

While you can use a UserDefinedFunction, it is very inefficient. It is better to use the concat_ws function instead:

from pyspark.sql.functions import concat_ws

# replace the array column with a comma-joined string
df.withColumn("test_123", concat_ws(",", "test_123")).show()
+----+----------------+
|uuid|        test_123|
+----+----------------+
|   1|test,test2,test3|
|   2|test4,test,test6|
|   3|test6,test9,t55o|
+----+----------------+
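The same call can also be written as a SQL expression if you prefer (a sketch equivalent to the withColumn line above):

from pyspark.sql.functions import expr

# expr() accepts the SQL form of the same built-in function
df.withColumn("test_123", expr("concat_ws(',', test_123)")).show()

One behavioural note: concat_ws skips null inputs rather than propagating them, so a null element simply disappears from the joined string.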
answered Oct 19 '22 by zero323


You can create a udf that joins the array/list and then apply it to the test_123 column:

from pyspark.sql.functions import udf, col

# wrap Python's str.join in a udf (the default return type is StringType)
join_udf = udf(lambda x: ",".join(x))
df.withColumn("test_123", join_udf(col("test_123"))).show()

+----+----------------+
|uuid|        test_123|
+----+----------------+
|   1|test,test2,test3|
|   2|test4,test,test6|
|   3|test6,test9,t55o|
+----+----------------+
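If you do go the udf route, you can also declare the return type explicitly (StringType is already the default, so this is optional):

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

# same udf, with the return type spelled out
join_udf = udf(lambda x: ",".join(x), StringType())
df.withColumn("test_123", join_udf(col("test_123"))).show()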

The initial DataFrame is created with:

from pyspark.sql.types import StructType, StructField, IntegerType, ArrayType, StringType

schema = StructType([StructField("uuid", IntegerType(), True),
                     StructField("test_123", ArrayType(StringType(), True), True)])
rdd = sc.parallelize([[1, ["test", "test2", "test3"]], [2, ["test4", "test", "test6"]],
                      [3, ["test6", "test9", "t55o"]]])
df = spark.createDataFrame(rdd, schema)

df.show()
+----+--------------------+
|uuid|            test_123|
+----+--------------------+
|   1|[test, test2, test3]|
|   2|[test4, test, test6]|
|   3|[test6, test9, t55o]|
+----+--------------------+
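For reference, the same DataFrame can be built without going through an RDD (a sketch; spark is the usual SparkSession):

df = spark.createDataFrame(
    [(1, ["test", "test2", "test3"]),
     (2, ["test4", "test", "test6"]),
     (3, ["test6", "test9", "t55o"])],
    schema)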
answered Oct 19 '22 by Psidom