 

Adding two columns to existing DataFrame using withColumn

I have a DataFrame with a few columns. Now I want to add two more columns to the existing DataFrame.

Currently I am doing this using the withColumn method of DataFrame.

For example:

df.withColumn("newColumn1", udf(col("somecolumn")))
  .withColumn("newColumn2", udf(col("somecolumn")))

Actually I can return both new column values from a single UDF using an Array[String], but currently this is how I am doing it.

Is there any way I can do this more efficiently? Is using explode a good option here?

Even if I use explode, I still have to call withColumn once to return the column value as an Array[String], and then use explode to create the two new columns.

Which approach is more efficient, or are there any alternatives?

asked Dec 04 '16 by Shankar


People also ask

How do you add multiple columns in withColumn Pyspark?

There isn't a withColumns method, so most PySpark newbies call withColumn multiple times when they need to add multiple columns to a DataFrame. With select, the * selects all of the existing DataFrame columns and the other expressions are appended as new columns.
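For instance, a minimal sketch of that select("*", ...) pattern; the DataFrame and column names here are made up for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Peter", 30), ("John", 25)], ["name", "age"])

# "*" keeps every existing column; the extra expressions are appended as new columns.
df2 = df.select("*",
                lit("static").alias("newColumn1"),
                (col("age") + 1).alias("newColumn2"))
df2.show()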

How do I add two columns in a DataFrame Pyspark?

The lit() function in PySpark is used to add a new column to a PySpark DataFrame by assigning it a constant or literal value.
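For example, a small sketch (hypothetical column name, reusing the illustrative df built in the sketch above):

from pyspark.sql.functions import lit

# Every row gets the same constant value in the new "country" column.
df_const = df.withColumn("country", lit("US"))
df_const.show()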

Does withColumn replace existing column?

withColumn creates a new column with the given name. If a column with that name already exists, it is replaced: the new column takes its place and the old one is dropped.
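A quick hypothetical illustration of that replacement behaviour, again using the illustrative df from the first sketch:

from pyspark.sql.functions import col

# "age" already exists, so withColumn overwrites it instead of adding a second "age" column.
df_replaced = df.withColumn("age", col("age") * 2)
df_replaced.show()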

How do I add a column to an existing spark DataFrame?

In PySpark, to add a new column to a DataFrame you can use the lit() function (from pyspark.sql.functions import lit). lit() takes the constant value you want to add and returns a Column; if you want to add a NULL / None column, use lit(None).
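A minimal sketch of adding a NULL column (the column name and the cast are assumptions; the cast just gives the new column an explicit type):

from pyspark.sql.functions import lit

# lit(None) produces NULL values; casting gives the column a concrete data type.
df_null = df.withColumn("middle_name", lit(None).cast("string"))
df_null.show()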

What happens when we add two columns in a Dataframe?

So when we add two columns where one or both of them contain NaN values, we will see that the result is also NaN. Let's see this with the help of an example; remember that a DataFrame is a group of Series.
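A small hypothetical pandas sketch of that NaN propagation:

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2, np.nan], "b": [10, np.nan, 30]})

# Rows where either operand is NaN produce NaN in the sum.
df["a_plus_b"] = df["a"] + df["b"]
print(df)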

How to add a new column in pandas Dataframe?

We can use a Python dictionary to add a new column to a pandas DataFrame: use the values of an existing column as the dictionary keys, and their mapped values become the values of the new column.
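A sketch of that approach; the Name values come from the original snippet, while the address mapping and new column name are made up for illustration:

import pandas as pd

df = pd.DataFrame({"Name": ["Jai", "Princi", "Gaurav", "Anuj"]})

# The dictionary keys are values of the existing "Name" column;
# the mapped values fill the new "Address" column.
address = {"Jai": "Delhi", "Princi": "Kanpur", "Gaurav": "Allahabad", "Anuj": "Kannauj"}
df["Address"] = df["Name"].map(address)
print(df)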

How to add a new column to a spark dataframe?

Adding a new column or multiple columns to a Spark DataFrame can be done using the withColumn() and select() methods of DataFrame. In this article, I will explain how to add a new column derived from an existing column, how to add a constant or literal value, and finally how to add a list column to a DataFrame.
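As a brief sketch (hypothetical column names, reusing the illustrative df from the first PySpark sketch), the two methods can derive the same new column from an existing one:

from pyspark.sql.functions import col, concat, lit

# Equivalent results: withColumn appends (or replaces) one column,
# while select lists the existing columns plus the new expression.
via_withcolumn = df.withColumn("greeting", concat(lit("Hi "), col("name")))
via_select = df.select("*", concat(lit("Hi "), col("name")).alias("greeting"))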

What happens when you add two columns to a string?

Here we see that when we add two columns, a Series is formed. Note: we can't add a string to an int or a float. We can only add a string to a string, or a number to a number.
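A tiny hypothetical pandas illustration of that rule:

import pandas as pd

df = pd.DataFrame({"first": ["Ada", "Alan"], "last": ["Lovelace", "Turing"]})

# String + string concatenates element-wise; string + number would raise a TypeError.
df["full"] = df["first"] + " " + df["last"]
print(df)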


1 Answer

AFAIK you need to call withColumn twice (once for each new column). But if your UDF is computationally expensive, you can avoid calling it twice by storing the "complex" result in a temporary column and then "unpacking" the result, e.g. using the apply method of Column (which gives access to the array elements). Note that sometimes it's necessary to cache the intermediate result (to prevent the UDF from being called twice per row during unpacking) and sometimes it's not; this seems to depend on how Spark optimizes the plan:

val myUDf = udf((s: String) => Array(s.toUpperCase(), s.toLowerCase()))

val df = sc.parallelize(Seq("Peter", "John")).toDF("name")

val newDf = df
  .withColumn("udfResult", myUDf(col("name"))).cache
  .withColumn("uppercaseColumn", col("udfResult")(0))
  .withColumn("lowercaseColumn", col("udfResult")(1))
  .drop("udfResult")

newDf.show()

gives

+-----+---------------+---------------+
| name|uppercaseColumn|lowercaseColumn|
+-----+---------------+---------------+
|Peter|          PETER|          peter|
| John|           JOHN|           john|
+-----+---------------+---------------+

With a UDF returning a tuple instead of an array (for example, a myTupleUDf built like myUDf above but returning (s.toUpperCase(), s.toLowerCase())), the unpacking accesses the struct fields _1 and _2 by name:

val myTupleUDf = udf((s: String) => (s.toUpperCase(), s.toLowerCase()))

val newDf = df
  .withColumn("udfResult", myTupleUDf(col("name"))).cache
  .withColumn("uppercaseColumn", col("udfResult._1"))
  .withColumn("lowercaseColumn", col("udfResult._2"))
  .drop("udfResult")
answered Oct 20 '22 by Raphael Roth