I want to run this code in PySpark (Spark 2.1.1):
from pyspark.ml.feature import PCA
bankPCA = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
pcaModel = bankPCA.fit(bankDf)
pcaResult = pcaModel.transform(bankDf).select("label", "pcaFeatures")
pcaResult.show(truncate=False)
But I get this error:
requirement failed: Column features must be of type
org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually
org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.
Compare it with this working example (the one from the PySpark PCA documentation):
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors
data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),
(Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),),
(Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
df = spark.createDataFrame(data, ["features"])
pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(df)
... other code ...
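The elided part applies the fitted model; a minimal sketch of how it typically continues (same names as in the snippet above — the exact follow-up code is an assumption, since it is elided here):

result = model.transform(df).select("pcaFeatures")
result.show(truncate=False)  # pcaFeatures now holds ml.linalg vectors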
As you can see above, df is a DataFrame whose features column holds vectors built with Vectors.sparse() and Vectors.dense() imported from pyspark.ml.linalg.
Your bankDf, on the other hand, probably contains vectors created with the Vectors class from pyspark.mllib.linalg.
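For example, a DataFrame built with the old mllib vector class reproduces exactly the error above (a minimal sketch; the schema of bankDf here is an assumption, since the question does not show how it was built):

from pyspark.ml.feature import PCA
from pyspark.mllib.linalg import Vectors  # note: mllib, the old RDD-based API

# "features" holds pyspark.mllib.linalg vectors
bankDf = spark.createDataFrame(
    [(0.0, Vectors.dense([1.0, 2.0, 3.0, 4.0, 5.0]))],
    ["label", "features"])

# raises: requirement failed: Column features must be of type
# org.apache.spark.ml.linalg.VectorUDT ... but was actually
# org.apache.spark.mllib.linalg.VectorUDT ...
PCA(k=3, inputCol="features", outputCol="pcaFeatures").fit(bankDf)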
So make sure the vectors in your DataFrames are created with Vectors imported via
from pyspark.ml.linalg import Vectors
instead of:
from pyspark.mllib.linalg import Vectors
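If you cannot change the code that builds bankDf (for example, the features come from old RDD-based mllib code), Spark 2.x also ships a converter, so you do not have to rebuild the DataFrame by hand; a sketch, assuming your vector column is really named "features":

from pyspark.mllib.util import MLUtils

# convert the named mllib vector columns to their ml.linalg equivalents
bankDf = MLUtils.convertVectorColumnsToML(bankDf, "features")

pcaModel = bankPCA.fit(bankDf)  # the type check now passes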
You may also find this Stack Overflow question interesting.