
Spark: Explode a dataframe array of structs and append id

I currently have a dataframe with an id and a column which is an array of structs:

 root
 |-- id: integer (nullable = true)
 |-- lists: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- text: string (nullable = true)
 |    |    |-- amount: double (nullable = true)

Here is an example table with data:

 id | lists
 -----------
 1  | [[a, 1.0], [b, 2.0]]
 2  | [[c, 3.0]]

How do I transform the above dataframe to the one below? I need to "explode" the array and append the id at the same time.

 id | col1  | col2
 -----------------
 1  | a     | 1.0
 1  | b     | 2.0
 2  | c     | 3.0
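For reference, the input above can be reproduced with something like the following (a minimal sketch; the case class and dataframe names are illustrative, but the field names `text` and `amount` match the schema shown):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("explode-example")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// One struct element of the array column
case class Item(text: String, amount: Double)

// id plus an array of (text, amount) structs, matching the printed schema
val df = Seq(
  (1, Seq(Item("a", 1.0), Item("b", 2.0))),
  (2, Seq(Item("c", 3.0)))
).toDF("id", "lists")

df.printSchema()
```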

Edit:

Note there is a difference between the two schemas below. The first contains an array of structs, while the latter contains just an array of plain elements.

 root
 |-- id: integer (nullable = true)
 |-- lists: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- text: string (nullable = true)
 |    |    |-- amount: double (nullable = true)


root
 |-- a: long (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: long (containsNull = true)
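The distinction matters for how you use the result of explode: exploding an array of primitives yields a scalar column directly, while exploding an array of structs yields a struct column (named "col" by default) whose fields still need to be projected out. A sketch, assuming dataframes `dfA` and `dfB` (hypothetical names) with the first and second schema respectively:

```scala
import org.apache.spark.sql.functions._

// Second schema: array of longs -- the exploded column holds the values themselves
dfB.select($"a", explode($"b").alias("b_value"))

// First schema: array of structs -- the exploded column "col" is a struct,
// so its fields are projected out in a second select
dfA.select($"id", explode($"lists"))
   .select($"id", $"col.text", $"col.amount")
```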
asked Feb 20 '17 by Steve
1 Answer

explode is exactly the function you need:

 import org.apache.spark.sql.functions._

 // explode creates one output row per array element; the exploded struct
 // lands in a column named "col" by default, whose fields we then project
 df.select($"id", explode($"lists"))
   .select($"id", $"col.text", $"col.amount")
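If you also want the output columns named `col1` and `col2` as in the question, the same approach works with aliases added during the projection (a sketch, assuming the dataframe from the question):

```scala
import org.apache.spark.sql.functions._

df.select($"id", explode($"lists").alias("l"))
  .select($"id", $"l.text".alias("col1"), $"l.amount".alias("col2"))
```

Aliasing the exploded column first (here to `l`, an illustrative name) also avoids relying on the default "col" name.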
answered Oct 17 '22 by user7595317