
Spark (Scala) filter array of structs without explode

I have a DataFrame with a key column and a column containing an array of structs. Each row's array column looks something like this:

[
    {"id" : 1, "someProperty" : "xxx", "someOtherProperty" : "1", "propertyToFilterOn" : 1},
    {"id" : 2, "someProperty" : "yyy", "someOtherProperty" : "223", "propertyToFilterOn" : 0},
    {"id" : 3, "someProperty" : "zzz", "someOtherProperty" : "345", "propertyToFilterOn" : 1}
]

Now I would like to do two things:

  1. Filter on "propertyToFilterOn" = 1
  2. Apply some logic to the other properties - for example, concatenate them

So that the result is:

[
{"id" : 1, "newProperty" : "xxx_1"},
{"id" : 3, "newProperty" : "zzz_345"}
]

I know how to do this with explode, but explode also requires a groupBy on the key when putting the array back together. Since this is a streaming DataFrame, I would also have to put a watermark on it, which I am trying to avoid.
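For reference, the explode-based version I am trying to avoid looks roughly like this (column names are placeholders - assume the key is in "key" and the array is in "items"):

import org.apache.spark.sql.functions._

// explode, filter, build the new struct, then groupBy the key to collect the array again -
// on a streaming DataFrame this aggregation is what forces the watermark
val reassembled = df
  .select(col("key"), explode(col("items")).as("item"))
  .filter(col("item.propertyToFilterOn") === 1)
  .withColumn("newStruct", struct(
    col("item.id").as("id"),
    concat(col("item.someProperty"), lit("_"), col("item.someOtherProperty")).as("newProperty")))
  .groupBy("key")
  .agg(collect_list("newStruct").as("result"))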

Is there any other way to achieve this without using explode? I am sure there is some Scala magic that can achieve this!

Thanks!

asked Mar 02 '19 by t138

1 Answer

Spark 2.4+ added many higher-order functions for arrays (see https://docs.databricks.com/spark/2.x/spark-sql/language-manual/functions.html).

import org.apache.spark.sql.functions._
import spark.implicits._ // spark is the SparkSession; needed for toDF and the encoders below

// build example data: one array-of-structs column per grouping key
val dataframe = Seq(
  ("a", 1, "xxx", "1", 1),
  ("a", 2, "yyy", "223", 0),
  ("a", 3, "zzz", "345", 1)
).toDF("grouping_key", "id", "someProperty", "someOtherProperty", "propertyToFilterOn")
  .groupBy("grouping_key")
  .agg(collect_list(struct("id", "someProperty", "someOtherProperty", "propertyToFilterOn")).as("your_array"))

dataframe.select("your_array").show(false)

+----------------------------------------------------+
|your_array                                          |
+----------------------------------------------------+
|[[1, xxx, 1, 1], [2, yyy, 223, 0], [3, zzz, 345, 1]]|
+----------------------------------------------------+

You can filter elements within an array using the filter higher-order function, like this:

val filteredDataframe = dataframe.select(expr("filter(your_array, your_struct -> your_struct.propertyToFilterOn == 1)").as("filtered_arrays"))

filteredDataframe.show(false)

+----------------------------------+
|filtered_arrays                   |
+----------------------------------+
|[[1, xxx, 1, 1], [3, zzz, 345, 1]]|
+----------------------------------+

for the "other logic" your talking about you should be able to use the transform higher order array function like so:

val transformedDataframe = filteredDataframe
  .select(expr("transform(filtered_arrays, your_struct -> struct(concat(your_struct.someProperty, '_', your_struct.someOtherProperty)))"))

However, there are issues with returning structs from the transform function, as described in this post:

http://mail-archives.apache.org/mod_mbox/spark-user/201811.mbox/%3CCALZs8eBgWqntiPGU8N=ENW2Qvu8XJMhnViKy-225ktW+_c0czA@mail.gmail.com%3E

so you are best off using the Dataset API for the transform, like so:

case class YourStruct(id: String, someProperty: String, someOtherProperty: String)
case class YourArray(filtered_arrays: Seq[YourStruct])

case class YourNewStruct(id: String, newProperty: String)

val transformedDataset = filteredDataframe.as[YourArray]
  .map(_.filtered_arrays.map(ys => YourNewStruct(ys.id, ys.someProperty + "_" + ys.someOtherProperty)))

transformedDataset.show(false)

+--------------------------+
|value                     |
+--------------------------+
|[[1, xxx_1], [3, zzz_345]]|
+--------------------------+
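If you also need to keep the grouping key next to the transformed array (the question doesn't show one, so this is just a sketch on top of the example data above), the same Dataset approach carries it through:

case class KeyedArray(grouping_key: String, filtered_arrays: Seq[YourStruct])
case class KeyedResult(grouping_key: String, newArray: Seq[YourNewStruct])

// filter with the higher-order function as before, but keep grouping_key in the select
val keyedResult = dataframe
  .select(col("grouping_key"), expr("filter(your_array, s -> s.propertyToFilterOn == 1)").as("filtered_arrays"))
  .as[KeyedArray]
  .map(r => KeyedResult(r.grouping_key,
    r.filtered_arrays.map(ys => YourNewStruct(ys.id, ys.someProperty + "_" + ys.someOtherProperty))))

keyedResult.show(false)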

answered Nov 06 '22 by randal25