How can I convert a single column into an array in Spark 2.0.1?
+---+-----+
| id| dist|
+---+-----+
|1.0|  2.0|
|2.0|  4.0|
|3.0|  6.0|
|4.0|  8.0|
+---+-----+
Selecting the id column should return Array(1.0, 2.0, 3.0, 4.0).
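For reference, a minimal sketch that reproduces this DataFrame, assuming a SparkSession named spark is in scope:
import spark.implicits._

// Hypothetical setup matching the table above
val df = Seq((1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)).toDF("id", "dist")
df.show()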
import scala.collection.JavaConverters._
df.select("id").collectAsList.asScala.toArray
fails with
java.lang.RuntimeException: Unsupported array type: [Lorg.apache.spark.sql.Row;
Why use JavaConverters if you then transform the Java list back into a Scala collection? You just need to collect the Dataset and then map the resulting array of Rows to an array of doubles, like this:
df.select("id").collect.map(_.getDouble(0))