I'm using Spark 2.2 and am running into trouble when attempting to call spark.createDataset on a Seq of Map. Code and output from my Spark shell session follow:
// createDataset on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.
Primitive types (Int, String, etc) and Product types (case classes) are
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
spark.createDataset(Seq(Map(1 -> 2))).collect
^
// createDataset on a custom case class containing a Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder
scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))
I've tried import spark.implicits._, though I'm fairly certain that's already imported by the Spark shell session. Is this a case not covered by the current encoders?
It is not covered in 2.2, but can be easily addressed. You can add the required Encoder using ExpressionEncoder, either explicitly:
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder

// Pass the encoder explicitly in createDataset's second (implicit) parameter list
spark
  .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])
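Passing the encoder explicitly keeps it scoped to that single call; the implicit variant below is more convenient if you build such Datasets in several places.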
or implicitly:
// Or bring an implicit Encoder[Map[Int, Int]] into scope
implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))
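If you need this for several key/value combinations, the one-liner generalizes; here's a sketch (the mapEncoder name is my own), which is essentially the shape of the newMapEncoder that Spark 2.3 later ships in spark.implicits (see below):

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder
import scala.reflect.runtime.universe.TypeTag

// Derive an Encoder for any Map whose key and value types carry TypeTags,
// letting Catalyst reflect over them at runtime
implicit def mapEncoder[K: TypeTag, V: TypeTag]: Encoder[Map[K, V]] =
  ExpressionEncoder()

spark.createDataset(Seq(Map(1 -> 2))).collect      // Array(Map(1 -> 2))
spark.createDataset(Seq(Map("a" -> 2.0))).collect  // Array(Map(a -> 2.0))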
Just FYI, the above expression works out of the box in Spark 2.3 (as of this commit, if I'm not mistaken).
scala> spark.version
res0: String = 2.3.0
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
res1: Array[scala.collection.immutable.Map[Int,Int]] = Array(Map(1 -> 2))
I think it's because newMapEncoder is now part of spark.implicits.
scala> :implicits
...
implicit def newMapEncoder[T <: scala.collection.Map[_, _]](implicit evidence$3: reflect.runtime.universe.TypeTag[T]): org.apache.spark.sql.Encoder[T]
You could "disable" the implicit by using the following trick and give the above expression a try (that will lead to an error).
// A same-named definition in the REPL's closer scope shadows
// spark.implicits.newMapEncoder, so the real encoder can't be resolved
trait ThatWasABadIdea
implicit def newMapEncoder(ack: ThatWasABadIdea) = ack
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:26: error: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
spark.createDataset(Seq(Map(1 -> 2))).collect
^
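For completeness, if you're stuck on 2.2 and don't want to hand-roll an ExpressionEncoder, a Kryo-based binary encoder also unblocks createDataset; a sketch (the kryoMapEncoder name is my own):

import org.apache.spark.sql.{Encoder, Encoders}

// Kryo serializes the whole Map into a single opaque binary column, so you
// lose Catalyst's native map type, but createDataset compiles and runs
implicit val kryoMapEncoder: Encoder[Map[Int, Int]] = Encoders.kryo[Map[Int, Int]]

spark.createDataset(Seq(Map(1 -> 2))).collect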