I have two tables, table1 and table2. table1 is big, whereas table2 is small:
--table1--
id
1
2
3
--table2--
category
a
b
c
d
e
f
g
I also have a UDF whose interface is defined as:
UDF: foo(id: Int): List[String]
I intend to first call the UDF to get the corresponding categories, foo(table1.id), which returns a WrappedArray, and then join each resulting category with table2 for further manipulation. The expected result should look like this:
--view--
id,category
1,a
1,c
1,d
2,b
2,c
3,e
3,f
3,g
I tried to find an unnest method in Hive, but with no luck. Could anyone help me out? Thanks!
I believe you want to use the explode function or Dataset's flatMap operator.
The explode function creates a new row for each element in the given array or map column. The flatMap operator returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.
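To see the flattening that flatMap (and explode) perform, here is the same idea sketched on plain Scala collections, with a made-up map standing in for your foo UDF:

```scala
// Made-up sample mapping from id to categories, standing in for foo.
val foo = Map(1 -> List("a", "c", "d"), 2 -> List("b", "c"), 3 -> List("e", "f", "g"))

// flatMap applies the function to every id and flattens the per-id lists
// into one (id, category) pair per element -- the same flattening that
// explode performs on an array column of a Dataset.
val view = List(1, 2, 3).flatMap(id => foo(id).map(category => (id, category)))
// view: List((1,a), (1,c), (1,d), (2,b), (2,c), (3,e), (3,f), (3,g))
```

On a Dataset, flatMap does the same thing row by row, so it is an alternative to explode when you prefer typed transformations.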
After you apply your UDF foo(id: Int): List[String], you'll end up with a Dataset with a column of type array.
val fooUDF = udf { id: Int => ('a' to ('a'.toInt + id).toChar).map(_.toString) }
// table1 with fooUDF applied
val table1 = spark.range(3).withColumn("foo", fooUDF('id))
scala> table1.show
+---+---------+
| id| foo|
+---+---------+
| 0| [a]|
| 1| [a, b]|
| 2|[a, b, c]|
+---+---------+
scala> table1.printSchema
root
|-- id: long (nullable = false)
|-- foo: array (nullable = true)
| |-- element: string (containsNull = true)
scala> table1.withColumn("fooExploded", explode($"foo")).show
+---+---------+-----------+
| id| foo|fooExploded|
+---+---------+-----------+
| 0| [a]| a|
| 1| [a, b]| a|
| 1| [a, b]| b|
| 2|[a, b, c]| a|
| 2|[a, b, c]| b|
| 2|[a, b, c]| c|
+---+---------+-----------+
With that, the join with table2 should be quite easy.
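In Spark itself that would be something like table1.withColumn("category", explode($"foo")).join(table2, "category"). The semantics of that final step can be sketched Spark-free on plain collections (made-up data for illustration):

```scala
// Exploded (id, category) pairs, as produced by explode above (made-up data;
// "x" is a category that does not appear in table2).
val exploded = List((1, "a"), (1, "c"), (2, "b"), (2, "x"))

// table2's categories.
val table2 = Set("a", "b", "c", "d", "e", "f", "g")

// An inner join on category keeps only the pairs whose category exists in
// table2, dropping (2, "x").
val joined = exploded.filter { case (_, category) => table2.contains(category) }
// joined: List((1,a), (1,c), (2,b))
```

Since table2 is small, Spark will typically broadcast it in the real join, so the exploded table1 is only scanned once.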