
How to use UDF to return multiple columns?


Is it possible to create a UDF which would return the set of columns?

I.e., given a data frame as follows:

| Feature1 | Feature2 | Feature 3 |
| 1.3      | 3.4      | 4.5       |

Now I would like to extract a new feature, which can be described as a vector of, say, two elements (e.g. a slope and an offset, as in a linear regression). The desired dataset should look as follows:

| Feature1 | Feature2 | Feature 3 | Slope | Offset |
| 1.3      | 3.4      | 4.5       | 0.5   | 3      |

Is it possible to create multiple columns with a single UDF, or do I need to follow the rule "one column per UDF"?

asked Feb 25 '18 by TechCrap

People also ask

Can UDF return multiple columns?

A UDF can return only a single column at a time. To get multiple columns, return a struct or array from the UDF and expand it into separate columns afterwards (see the answer below).

Why should we not use UDFs in Spark?

It is well known that the use of UDFs (User Defined Functions) in Apache Spark, especially when using the Python API, can compromise application performance. For this reason, at Damavis we try to avoid their use as much as possible in favour of native functions or SQL.
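As an illustration (not from the original page), the lowercase/uppercase transformation used in the answer below can often be expressed with built-in functions instead of a UDF, which lets Catalyst optimise it. A minimal sketch, assuming a hypothetical string column named "text":

import org.apache.spark.sql.functions.{col, lower, upper}

// Built-in functions avoid the (de)serialisation overhead of a UDF
val nativeDF = df
  .withColumn("lower_text", lower(col("text")))
  .withColumn("upper_text", upper(col("text")))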

What is the difference between a UDF and a UDAF in Spark SQL?

Spark SQL supports integration of Hive UDFs, UDAFs and UDTFs. Similar to Spark UDFs and UDAFs, Hive UDFs work on a single row as input and generate a single row as output, while Hive UDAFs operate on multiple rows and return a single aggregated row as a result.
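To make the distinction concrete, here is a minimal UDAF sketch (not part of the original page) using Spark's Aggregator API; the udaf helper assumes Spark 3.0 or later. It averages a Double column, so it consumes many rows and returns one value, unlike the row-at-a-time UDFs shown below:

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.functions.udaf

// Keeps a running (sum, count) buffer and folds many input rows into one result
object MyAverage extends Aggregator[Double, (Double, Long), Double] {
  def zero: (Double, Long) = (0.0, 0L)
  def reduce(b: (Double, Long), a: Double): (Double, Long) = (b._1 + a, b._2 + 1)
  def merge(b1: (Double, Long), b2: (Double, Long)): (Double, Long) =
    (b1._1 + b2._1, b1._2 + b2._2)
  def finish(b: (Double, Long)): Double = b._1 / b._2
  def bufferEncoder: Encoder[(Double, Long)] =
    Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

// Usable as df.agg(avgFeature2(df("Feature2")))
val avgFeature2 = udaf(MyAverage)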


1 Answer

Struct method

You can define the udf function as

def myFunc: (String => (String, String)) = { s => (s.toLowerCase, s.toUpperCase) }

import org.apache.spark.sql.functions.udf
val myUDF = udf(myFunc)

and use .* as

val newDF = df.withColumn("newCol", myUDF(df("Feature2")))
  .select("Feature1", "Feature2", "Feature 3", "newCol.*")

I have returned a Tuple2 from the udf function for testing purposes (higher-arity tuples can be used depending on how many columns are required), and it is treated as a struct column. You can then use .* to select all of its elements as separate columns and finally rename them.

You should have output as

+--------+--------+---------+---+---+
|Feature1|Feature2|Feature 3|_1 |_2 |
+--------+--------+---------+---+---+
|1.3     |3.4     |4.5      |3.4|3.4|
+--------+--------+---------+---+---+

You can then rename _1 and _2 to meaningful column names.
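For example, a sketch of the renaming step, assuming the same df, myUDF and newDF as above (and import spark.implicits._ for the $ syntax):

// Option 1: pull the struct fields out with aliases while selecting
val renamedDF = df.withColumn("newCol", myUDF(df("Feature2")))
  .select(
    $"Feature1", $"Feature2", $"Feature 3",
    $"newCol._1".as("Slope"),
    $"newCol._2".as("Offset")
  )

// Option 2: rename the already expanded columns afterwards
val renamedDF2 = newDF
  .withColumnRenamed("_1", "Slope")
  .withColumnRenamed("_2", "Offset")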

Array method

The udf function should return an array:

def myFunc: (String => Array[String]) = { s => Array(s.toLowerCase, s.toUpperCase) }

import org.apache.spark.sql.functions.udf
val myUDF = udf(myFunc)

And then you can select the elements of the array and use alias to rename them:

val newDF = df.withColumn("newCol", myUDF(df("Feature2")))
  .select(
    $"Feature1", $"Feature2", $"Feature 3",
    $"newCol"(0).as("Slope"),
    $"newCol"(1).as("Offset")
  )

You should have

+--------+--------+---------+-----+------+
|Feature1|Feature2|Feature 3|Slope|Offset|
+--------+--------+---------+-----+------+
|1.3     |3.4     |4.5      |3.4  |3.4   |
+--------+--------+---------+-----+------+
answered Sep 26 '22 by Ramesh Maharjan