spark - scala: not a member of org.apache.spark.sql.Row

I am trying to convert a DataFrame to an RDD and then perform the operations below to return tuples:

df.rdd.map { t=>
 (t._2 + "_" + t._3 , t)
}.take(5)

Then I got the error below. Anyone have any ideas? Thanks!

<console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
               (t._2 + "_" + t._3 , t)
                  ^
asked May 19 '16 by Edamame

2 Answers

When you convert a DataFrame to an RDD, you get an RDD[Row], so when you use map, your function receives a Row as its parameter. Therefore, you must use the Row methods to access its fields (note that Row indices start at 0):

import org.apache.spark.sql.Row

df.rdd.map { row: Row =>
  // Row is 0-indexed, so the fields the question calls t._2 and t._3 are at indices 1 and 2
  (row.getString(1) + "_" + row.getString(2), row)
}.take(5)

You can find more examples and the full list of methods available on Row objects in the Spark Scaladoc.
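For example, besides the positional getters, Row also provides getAs[T], which looks a field up by name instead of by index. A minimal sketch, assuming the two columns are named col2 and col3 (the question doesn't give the actual names):

import org.apache.spark.sql.Row

df.rdd.map { row: Row =>
  // getAs[T] resolves the field name via the Row's schema and casts the value to T
  (row.getAs[String]("col2") + "_" + row.getAs[String]("col3"), row)
}.take(5)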

Edit: I don't know why you need this operation, but if the goal is just to concatenate String columns of a DataFrame, you may consider the following option:

import org.apache.spark.sql.functions._

// lit("_") turns the separator into a literal Column so it can be passed to concat
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
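If the separator is always the same, concat_ws is another option: it takes the separator once and skips null columns instead of propagating the null. A sketch with the same assumed column names:

import org.apache.spark.sql.functions._

// concat_ws(sep, cols*) joins the columns with sep and ignores null values
val newDF2 = df.withColumn("concat", concat_ws("_", df("col2"), df("col3")))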
answered by Daniel de Paula

You can access every element of a Row as if it were a List or an Array, that is, by using (index); alternatively, you can use the get method.

For example:

df.rdd.map { t =>
  // apply(i) returns Any, hence the toString; indices 1 and 2 match t._2 and t._3 from the question
  (t(1).toString + "_" + t(2).toString, t)
}.take(5)
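Since apply and get both return Any, another option is to pattern match on the Row and destructure it directly. A sketch assuming the fields at positions 1 and 2 are Strings (any row that doesn't match would throw a MatchError):

import org.apache.spark.sql.Row

df.rdd.map {
  // binds the whole Row as `row` and extracts the second and third fields as Strings
  case row @ Row(_, a: String, b: String, _*) => (a + "_" + b, row)
}.take(5)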
answered by Alberto Bonsanto