 

Column features must be of type org.apache.spark.ml.linalg.VectorUDT

I want to run this code in PySpark (Spark 2.1.1):

from pyspark.ml.feature import PCA

bankPCA = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
pcaModel = bankPCA.fit(bankDf)
pcaResult = pcaModel.transform(bankDf).select("label", "pcaFeatures")
pcaResult.show(truncate=False)

But I get this error:

requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.

Asked Jun 01 '17 by S.Lotfi

1 Answer

Here is an example (it is the one from the Spark ML documentation):

from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors

# the feature vectors are built with pyspark.ml.linalg.Vectors
data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),
    (Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),),
    (Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
df = spark.createDataFrame(data, ["features"])

pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(df)

... other code ...

As you can see above, df is a DataFrame whose features column holds vectors built with Vectors.sparse() and Vectors.dense() imported from pyspark.ml.linalg.

Your bankDf probably contains vectors created with the Vectors class imported from pyspark.mllib.linalg instead.
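
One quick way to check which vector class your DataFrame actually holds is to look at the module of the column's data type (a minimal sketch, assuming your DataFrame is bankDf with a "features" column, as in your code):

# 'pyspark.ml.linalg' means the new API, 'pyspark.mllib.linalg' the old one
feature_type = bankDf.schema["features"].dataType
print(type(feature_type).__module__)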

So make sure the vectors in your DataFrame are built with the Vectors class imported

from pyspark.ml.linalg import Vectors 

instead of:

from pyspark.mllib.linalg import Vectors

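If bankDf is produced by code you cannot easily change and already holds the old pyspark.mllib.linalg vectors, you can also convert the column in place with MLUtils.convertVectorColumnsToML instead of rebuilding the DataFrame (a minimal sketch, reusing the bankDf and "features" names from your question):

from pyspark.ml.feature import PCA
from pyspark.mllib.util import MLUtils

# convert the old mllib vector column to the new ml vector type
bankDfMl = MLUtils.convertVectorColumnsToML(bankDf, "features")

bankPCA = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
pcaModel = bankPCA.fit(bankDfMl)
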
You may also find this Stack Overflow question interesting.

Answered Sep 30 '22 by titiro89