I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark. I've just installed a fresh Spark 1.5.0 on Ubuntu 14.04 (no spark-env.sh configured).

My my_script.py is:
    from pyspark.mllib.util import MLUtils
    from pyspark import SparkContext

    sc = SparkContext("local", "Teste Original")
    data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
and I'm running it with:

    ./spark-submit my_script.py
And I get the error:
    Traceback (most recent call last):
      File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
        data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
    AttributeError: 'PipelinedRDD' object has no attribute 'toDF'
What I can't understand is that if I run:

    data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

directly inside the PySpark shell, it works.
The toDF method is a monkey patch executed inside the SparkSession constructor (the SQLContext constructor in Spark 1.x), so to be able to use it you have to create a SQLContext (or SparkSession) first:
    # SQLContext or HiveContext in Spark 1.x
    from pyspark.sql import SparkSession
    from pyspark import SparkContext

    sc = SparkContext()

    rdd = sc.parallelize([("a", 1)])
    hasattr(rdd, "toDF")
    ## False

    spark = SparkSession(sc)
    hasattr(rdd, "toDF")
    ## True

    rdd.toDF().show()
    ## +---+---+
    ## | _1| _2|
    ## +---+---+
    ## |  a|  1|
    ## +---+---+
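Under the hood, the constructor attaches a toDF method to the RDD class that delegates to createDataFrame on the session being constructed. A rough sketch of how that patch works (a paraphrase of what lives in pyspark/sql/session.py, not the literal source; details vary between Spark versions):

    # Hypothetical paraphrase of the monkey patch applied in the
    # SparkSession / SQLContext constructor.
    from pyspark.rdd import RDD

    def _monkey_patch_RDD(session):
        def toDF(self, schema=None, sampleRatio=None):
            # Delegates to the session, which is why toDF only works
            # after a SparkSession/SQLContext has been constructed.
            return session.createDataFrame(self, schema, sampleRatio)
        RDD.toDF = toDF

This also explains the AttributeError: before the session is created, the toDF attribute simply hasn't been attached to RDD yet.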
Not to mention you need a SQLContext or SparkSession to work with DataFrames in the first place.
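Applied to the script from the question, it is enough to construct the SQLContext before calling toDF. A minimal sketch for Spark 1.5, keeping the path and app name from the question:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.mllib.util import MLUtils

    sc = SparkContext("local", "Teste Original")
    sqlContext = SQLContext(sc)  # constructing it patches RDD.toDF

    # toDF is now available on the RDD returned by loadLibSVMFile
    data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()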
Make sure you have a SparkSession too:

    sc = SparkContext("local", "first app")
    spark = SparkSession(sc)
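In Spark 2.x the more idiomatic route is the builder, which creates the underlying SparkContext for you and applies the toDF patch as part of session construction (a sketch, assuming Spark 2.x):

    from pyspark.sql import SparkSession

    # getOrCreate() reuses an existing session if one is already running
    spark = SparkSession.builder \
        .master("local") \
        .appName("first app") \
        .getOrCreate()

    sc = spark.sparkContext  # the underlying SparkContext, if you still need it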