The Scala version is 2.11.8; the JDK is 1.8; Spark is 2.0.2.
I tried to run the LDA model example from the official Apache Spark site, and I get an error from the following line:
val dataset = spark.read.format("libsvm")
.load("data/libsvm_data.txt")
The error message is:
Error:(49, 25) value read is not a member of org.apache.spark.SparkContext
    val dataset = spark.read.format("libsvm")
                        ^
I don't know how to solve it.
Looks like you're trying to call read on a SparkContext, instead of on a SQLContext or a SparkSession:
// New 2.0.+ API: create a SparkSession and use it for all purposes:
import org.apache.spark.sql.SparkSession

val session = SparkSession.builder().appName("test").master("local").getOrCreate()
session.read.load("/file") // OK

// Old <= 1.6.x API: create a SparkContext, then a SQLContext for the DataFrame API:
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

val sc = new SparkContext("local", "test") // used for RDD operations only
val sqlContext = new SQLContext(sc)        // used for DataFrame / Dataset APIs
sqlContext.read.load("/file") // OK
sc.read.load("/file")         // NOT OK: read is not a member of SparkContext
Add these lines:
import org.apache.spark.sql.SparkSession
val session = SparkSession.builder().appName("app_name").master("local").getOrCreate()
val training = session.read.format("format_name").load("path_to_file")
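Applied to the LDA example from the question, a minimal end-to-end sketch could look like the following (the data path is taken from the question; the hyperparameters k = 10 and maxIter = 10 are assumed values, not from the original post):

import org.apache.spark.ml.clustering.LDA
import org.apache.spark.sql.SparkSession

object LDAExample {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder()
      .appName("lda_example")
      .master("local")
      .getOrCreate()

    // Read through the SparkSession, not the SparkContext
    val dataset = session.read.format("libsvm")
      .load("data/libsvm_data.txt")

    // Fit an LDA model; k and maxIter are assumed example values
    val lda = new LDA().setK(10).setMaxIter(10)
    val model = lda.fit(dataset)

    session.stop()
  }
}

If you still need the SparkContext (e.g. for RDD operations), it is available as session.sparkContext, so there is no reason to construct one separately.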