I currently have a dataframe with an id and a column which is an array of structs:
root
|-- id: integer (nullable = true)
|-- lists: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- text: string (nullable = true)
| | |-- amount: double (nullable = true)
Here is an example table with data:
id | lists
-----------
1 | [[a, 1.0], [b, 2.0]]
2 | [[c, 3.0]]
How do I transform the above dataframe to the one below? I need to "explode" the array and append the id at the same time.
id | col1 | col2
-----------------
1 | a | 1.0
1 | b | 2.0
2 | c | 3.0
Edit:
Note the difference between the two schemas below. The first contains "an array of structs", while the second contains just "an array of plain elements".
root
|-- id: integer (nullable = true)
|-- lists: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- text: string (nullable = true)
| | |-- amount: double (nullable = true)
root
|-- a: long (nullable = true)
|-- b: array (nullable = true)
| |-- element: long (containsNull = true)
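For the second schema (a plain array of longs), explode yields the element values directly rather than a struct, so there are no nested fields to select afterwards. A minimal sketch, assuming a local SparkSession (object and column names are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ExplodePlainArray {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("explode-plain-array")
      .getOrCreate()
    import spark.implicits._

    // An "array of elements": b holds plain longs, not structs
    val df = Seq((1L, Seq(10L, 20L)), (2L, Seq(30L))).toDF("a", "b")

    // explode emits one row per array element; the values land in a
    // column named "col" by default, renamed here with .as("b")
    val flat = df.select($"a", explode($"b").as("b"))
    flat.show()

    spark.stop()
  }
}
```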
explode
is exactly the function you need. It produces one output row per array element, placing each struct in a column named "col" by default; the struct fields can then be selected and renamed to match the desired output:
import org.apache.spark.sql.functions._
df.select($"id", explode($"lists"))
  .select($"id", $"col.text".as("col1"), $"col.amount".as("col2"))
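Putting it together, here is a self-contained sketch that rebuilds the example data from the question (the case class and object names are illustrative) and applies the same explode-then-select pattern:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Matches the struct in the question's schema: text plus amount
case class Item(text: String, amount: Double)

object ExplodeStructArray {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("explode-struct-array")
      .getOrCreate()
    import spark.implicits._

    // Rebuild the example table: an array of (text, amount) structs per id
    val df = Seq(
      (1, Seq(Item("a", 1.0), Item("b", 2.0))),
      (2, Seq(Item("c", 3.0)))
    ).toDF("id", "lists")

    // explode creates one row per struct in a column named "col";
    // the struct fields are then pulled out and renamed
    val result = df
      .select($"id", explode($"lists"))
      .select($"id", $"col.text".as("col1"), $"col.amount".as("col2"))

    result.show()
    spark.stop()
  }
}
```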