 

PySpark Array<double> is not Array<double>

I am running a very simple Spark (2.4.0 on Databricks) ML script:

from pyspark.ml.clustering import LDA

lda = LDA(k=10, maxIter=100).setFeaturesCol('features')
model = lda.fit(dataset)

But I received the following error:

IllegalArgumentException: 'requirement failed: Column features must be of type equal to one of the following types: [struct<type:tinyint,size:int,indices:array<int>,values:array<double>>, array<double>, array<float>] but was actually of type array<double>.'

Why is my array<double> not an array<double>?

Here is the schema:

root
 |-- BagOfWords: struct (nullable = true)
 |    |-- indices: array (nullable = true)
 |    |    |-- element: long (containsNull = true)
 |    |-- size: long (nullable = true)
 |    |-- type: long (nullable = true)
 |    |-- values: array (nullable = true)
 |    |    |-- element: double (containsNull = true)
 |-- tokens: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- features: array (nullable = true)
 |    |-- element: double (containsNull = true)
asked by Ryan

1 Answer

You probably need to convert the features column into Spark ML's vector form, e.g. with VectorAssembler (from pyspark.ml.feature import VectorAssembler). LDA's featuresCol expects the ml vector type; the struct<type:tinyint,size:int,indices:array<int>,values:array<double>> in the error message is that type's internal representation. The confusing part of the message is most likely element nullability: the type check compares the full DataType, so your array<double> with containsNull = true does not equal the expected array<double> with containsNull = false, even though both print identically as array<double>.
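A minimal sketch of that conversion, assuming dataset is the DataFrame from the question and features is the array<double> column: because features is already a single array column (VectorAssembler is designed to combine separate numeric columns into one vector), a small UDF wrapping Vectors.dense is the more direct route here.

from pyspark.sql.functions import udf
from pyspark.ml.clustering import LDA
from pyspark.ml.linalg import Vectors, VectorUDT

# Wrap each array<double> row in a DenseVector so the column
# becomes the ml vector type that LDA's featuresCol requires.
to_vector = udf(lambda arr: Vectors.dense(arr), VectorUDT())

dataset = dataset.withColumn('features', to_vector('features'))

lda = LDA(k=10, maxIter=100).setFeaturesCol('features')
model = lda.fit(dataset)

Note that if the arrays can actually contain null elements, drop or impute them first; Vectors.dense will not accept None values.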

answered by Deepti Aggarwal