The difficulty is that I'm trying to avoid UDFs as much as possible.
I have a dataset "wordsDS", which contains many null values:
+------+------+------+------+
|word_0|word_1|word_2|word_3|
+------+------+------+------+
|     a|     b|  null|     d|
|  null|     f|     m|  null|
|  null|  null|     d|  null|
+------+------+------+------+
I need to collect all of the columns for each row into an array. I don't know the number of columns in advance, so I'm using the columns() method:
Dataset<Row> groupedQueries = wordsDS.withColumn("collected",
    functions.array(Arrays.stream(wordsDS.columns())
        .map(functions::col)
        .toArray(Column[]::new)));
But this approach produces empty elements:
+--------------------+
|           collected|
+--------------------+
|          [a, b,, d]|
|           [, f, m,]|
|             [,, d,]|
+--------------------+
Instead, I need the following result:
+--------------------+
|           collected|
+--------------------+
|           [a, b, d]|
|              [f, m]|
|                 [d]|
+--------------------+
So basically, I need to collect all of the columns for each row into an array, skipping the null values.
I've also thought of the approach of filtering the dataset's "collected" column for empty values, but I can't come up with anything other than a UDF. I'm trying to avoid UDFs so as not to kill performance. If anyone could suggest a way to filter the "collected" column for empty values with as little overhead as possible, that would be really helpful.
You can use array("*") to get all the elements into one array, then use array_except (needs Spark 2.4+) to filter out the nulls:
df
  .select(array_except(array("*"), array(lit(null))).as("collected"))
  .show()
gives
+---------+
|collected|
+---------+
|[a, b, d]|
| [f, m]|
| [d]|
+---------+
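Since the question uses the Java API, here is a minimal sketch of the same idea in Java, assuming Spark 2.4+ and the wordsDS dataset from the question (the variable name "collected" is just illustrative):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;

// Collect every column of the row into one array, then drop the null entries
// with array_except instead of a UDF.
Dataset<Row> collected = wordsDS.select(
        functions.array_except(
                functions.array(functions.col("*")),     // all columns as a single array
                functions.array(functions.lit(null)))    // one-element array containing null
            .as("collected"));
collected.show();

Everything here stays within built-in SQL functions, so no UDF is involved.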