I am using a Spark 2.0 cluster and I would like to convert a vector from org.apache.spark.mllib.linalg.VectorUDT to org.apache.spark.ml.linalg.VectorUDT.
# Import LinearRegression class
from pyspark.ml.regression import LinearRegression
# Define LinearRegression algorithm
lr = LinearRegression()
modelA = lr.fit(data, {lr.regParam: 0.0})
Error:
u'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.'
Any thoughts on how I would do this conversion between vector types?
Thanks a lot.
In PySpark you'll need a udf or a map over the underlying RDD (a sketch of the latter is at the end of this answer). Let's use the first option. First, a couple of imports:
from pyspark.ml.linalg import VectorUDT
from pyspark.sql.functions import udf
and a function:
# asML() returns the ml counterpart of an mllib vector; None values are passed through
as_ml = udf(lambda v: v.asML() if v is not None else None, VectorUDT())
With example data:
from pyspark.mllib.linalg import Vectors as MLLibVectors
df = sc.parallelize([
(MLLibVectors.sparse(4, [0, 2], [1, -1]), ),
(MLLibVectors.dense([1, 2, 3, 4]), )
]).toDF(["features"])
result = df.withColumn("features", as_ml("features"))
The result is:
+--------------------+
| features|
+--------------------+
|(4,[0,2],[1.0,-1.0])|
| [1.0,2.0,3.0,4.0]|
+--------------------+
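For completeness, the second option is to map over the underlying RDD and rebuild the DataFrame with an explicit schema. A minimal sketch, assuming the df defined above, an active SparkSession, and no null rows in features:
from pyspark.ml.linalg import VectorUDT
from pyspark.sql.types import StructType, StructField
# Declare the target column as an ml VectorUDT
schema = StructType([StructField("features", VectorUDT(), True)])
# asML() converts each mllib vector to its ml counterpart
result = df.rdd.map(lambda row: (row.features.asML(), )).toDF(schema)
Since Spark 2.0 there is also a built-in helper, MLUtils.convertVectorColumnsToML, which converts the named vector columns for you:
from pyspark.mllib.util import MLUtils
result = MLUtils.convertVectorColumnsToML(df, "features")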