I am trying to write a sample Apache Spark program that converts an RDD to a Dataset, but in the process I am getting a compile-time error.
Here is my sample code and error:
code:
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.sql.Dataset

object Hello {

  case class Person(name: String, age: Int)

  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("first example")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val peopleRDD: RDD[Person] = sc.parallelize(Seq(Person("John", 27)))
    val people = peopleRDD.toDS
  }
}
and my error is:
value toDS is not a member of org.apache.spark.rdd.RDD[Person]
I have added the Spark Core and Spark SQL jars, and my versions are:
Spark 1.6.2
Scala 2.10
There are two mistakes I can see in your code.

First, you have to import sqlContext.implicits._, as toDS and toDF are defined in the implicits of sqlContext.

Second, the case class should be defined outside the class or object in which it is used; otherwise a Task not serializable exception will occur.

The complete solution is as follows:
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.sql.{Dataset, SQLContext}

object Hello {

  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("first example")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val peopleRDD: RDD[Person] = sc.parallelize(Seq(Person("John", 27)))
    val people = peopleRDD.toDS
    people.show(false)
  }
}
case class Person(name: String, age: Int)
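For reference, running the program above locally should print a small table along these lines for the single row (an output sketch; exact column padding can vary between Spark versions):

+----+---+
|name|age|
+----+---+
|John|27 |
+----+---+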
toDS is available with sqlContext.implicits._:

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val people = peopleRDD.toDS()
Or, if you are on Spark 2.x, use a SparkSession instead:

val spark: SparkSession = SparkSession.builder
  .config(conf)
  .getOrCreate()
import spark.implicits._
val people = peopleRDD.toDS()
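For reference, a complete, self-contained Spark 2.x version of the same program could look roughly like the sketch below (it assumes a Spark 2.x spark-sql dependency on the classpath instead of the 1.6.2 jars mentioned in the question; names follow the original code):

import org.apache.spark.sql.{Dataset, SparkSession}

object Hello {

  def main(args: Array[String]): Unit = {
    // SparkSession replaces SparkContext + SQLContext in Spark 2.x
    val spark: SparkSession = SparkSession.builder
      .appName("first example")
      .master("local")
      .getOrCreate()

    // Brings toDS/toDF and the Encoder for the Person case class into scope
    import spark.implicits._

    val peopleRDD = spark.sparkContext.parallelize(Seq(Person("John", 27)))
    val people: Dataset[Person] = peopleRDD.toDS()
    people.show(false)

    spark.stop()
  }
}

// Defined at top level, outside Hello, for the same serialization reason as above
case class Person(name: String, age: Int)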
Hope it helps.