I like Spark Datasets because they give me analysis errors and syntax errors at compile time and also allow me to work with getters instead of hard-coded names/numbers. Most computations can be accomplished with the Dataset's high-level APIs. For example, it's much simpler to perform agg, select, sum, avg, map, filter, or groupBy operations by accessing a Dataset's typed objects than by using the data fields of RDD rows.
However, the join operation is missing from this list. I have read that I can do a join like this:
ds1.joinWith(ds2, ds1.toDF().col("key") === ds2.toDF().col("key"), "inner")
But that is not what I want, as I would prefer to do it via the case class interface, so something more like this:
ds1.joinWith(ds2, ds1.key === ds2.key, "inner")
The best alternative for now seems to be to create an object next to the case class and give it functions that provide me with the right column name as a String. So I would use the first line of code above, but put a function in place of the hard-coded column name. That doesn't feel elegant enough, though.
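For illustration, that workaround would look roughly like this (just a sketch; Record and RecordColumns are hypothetical names):

case class Record(key: Int, value: String)

// Companion-style object that keeps the column names in one place,
// although they are still plain Strings under the hood.
object RecordColumns {
  val key = "key"
  val value = "value"
}

ds1.joinWith(ds2, ds1.toDF().col(RecordColumns.key) === ds2.toDF().col(RecordColumns.key), "inner")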
Can someone advise me on other options here? The goal is to have an abstraction from the actual column names and work preferably via the getters of the case class.
I'm using Spark 1.6.1 and Scala 2.10.
In order to join data, Spark needs the rows that are to be joined (i.e., the rows sharing each key) to live on the same partition. The default implementation of a join in Spark is a shuffled hash join.
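You can observe this shuffle yourself (a sketch, assuming two Datasets sharing a key column) by inspecting the physical plan, where it typically shows up as Exchange nodes:

// Joining on a key forces both sides to be shuffled so that rows with
// the same key land on the same partition.
val joined = ds1.joinWith(ds2, ds1("key") === ds2("key"), "inner")
joined.explain()  // look for Exchange hashpartitioning nodes on both inputs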
Spark SQL supports several types of joins: inner join, cross join, left outer join, right outer join, full outer join, left semi join, and left anti join.
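For reference, the join type is selected with a string argument; a minimal sketch, assuming two Spark 2.x DataFrames df1 and df2 with a common key column:

// The third argument names the join type; the cross join (Spark 2.1+)
// has its own method, since it takes no join condition.
df1.join(df2, Seq("key"), "inner")
df1.join(df2, Seq("key"), "left_outer")
df1.join(df2, Seq("key"), "full_outer")
df1.join(df2, Seq("key"), "left_semi")
df1.join(df2, Seq("key"), "left_anti")
df1.crossJoin(df2)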
Spark SQL can optimize a join only if the join condition is based on the equality operator, so it makes sense to consider equijoins and non-equijoins separately.
An equijoin can be implemented in a type-safe manner by mapping both Datasets to (key, value) tuples, performing the join based on the keys, and reshaping the result:
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Dataset

def safeEquiJoin[T, U, K](ds1: Dataset[T], ds2: Dataset[U])
    (f: T => K, g: U => K)
    (implicit e1: Encoder[(K, T)], e2: Encoder[(K, U)], e3: Encoder[(T, U)]) = {
  val ds1_ = ds1.map(x => (f(x), x))
  val ds2_ = ds2.map(x => (g(x), x))

  ds1_.joinWith(ds2_, ds1_("_1") === ds2_("_1")).map(x => (x._1._2, x._2._2))
}
A non-equijoin can be expressed using relational algebra operators as R ⋈θ S = σθ(R × S) and converted directly to code.
In Spark 2.0, enable cross joins and use joinWith with a trivially true predicate:
import org.apache.spark.sql.functions.lit

spark.conf.set("spark.sql.crossJoin.enabled", true)

def safeNonEquiJoin[T, U](ds1: Dataset[T], ds2: Dataset[U])
    (p: (T, U) => Boolean) = {
  ds1.joinWith(ds2, lit(true)).filter(p.tupled)
}
In Spark 2.1 and later, use the crossJoin method:
def safeNonEquiJoin[T, U](ds1: Dataset[T], ds2: Dataset[U])
    (p: (T, U) => Boolean)
    (implicit e1: Encoder[Tuple1[T]], e2: Encoder[Tuple1[U]], e3: Encoder[(T, U)]) = {
  ds1.map(Tuple1(_)).crossJoin(ds2.map(Tuple1(_))).as[(T, U)].filter(p.tupled)
}
Examples:

case class LabeledPoint(label: String, x: Double, y: Double)
case class Category(id: Long, name: String)

val points1 = Seq(LabeledPoint("foo", 1.0, 2.0)).toDS
val points2 = Seq(
  LabeledPoint("bar", 3.0, 5.6),
  LabeledPoint("foo", -1.0, 3.0)
).toDS
val categories = Seq(Category(1, "foo"), Category(2, "bar")).toDS

safeEquiJoin(points1, categories)(_.label, _.name)
safeNonEquiJoin(points1, points2)(_.x > _.x)
It should be noted that these methods are qualitatively different from a direct joinWith application: they require expensive DeserializeToObject / SerializeFromObject transformations, whereas a direct joinWith can use logical operations on the serialized data.
This is similar to the behavior described in Spark 2.0 Dataset vs DataFrame.
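You can see the difference yourself (a sketch; plan output varies by Spark version) by comparing the physical plans:

// The type-safe variant deserializes rows into objects and back;
// look for DeserializeToObject / SerializeFromObject in the plan.
safeEquiJoin(points1, categories)(_.label, _.name).explain()

// A direct joinWith operates on the serialized representation instead.
points1.joinWith(categories, points1("label") === categories("name")).explain()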
If you're not limited to the Spark SQL API, frameless provides interesting type-safe extensions for Datasets (as of today it supports only Spark 2.0):
import frameless.TypedDataset

val typedPoints1 = TypedDataset.create(points1)
val typedPoints2 = TypedDataset.create(points2)

typedPoints1.join(typedPoints2, typedPoints1('x), typedPoints2('x))
The Dataset API is not stable in 1.6, so I don't think it makes sense to use it there.
Of course this design and these descriptive names are not necessary. You can easily use a type class to add these methods implicitly to Dataset, and since there is no conflict with the built-in signatures, both can be called joinWith.
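As a minimal sketch of that idea, using an implicit enrichment class (DatasetJoinOps is a hypothetical name; the compiler falls back to it because the argument lists differ from the built-in joinWith overloads):

implicit class DatasetJoinOps[T](ds: Dataset[T]) {
  // Delegates to safeEquiJoin defined above; only resolved when the
  // built-in joinWith overloads do not match the argument lists.
  def joinWith[U, K](other: Dataset[U])(f: T => K, g: U => K)
      (implicit e1: Encoder[(K, T)], e2: Encoder[(K, U)], e3: Encoder[(T, U)]): Dataset[(T, U)] =
    safeEquiJoin(ds, other)(f, g)
}

points1.joinWith(categories)(_.label, _.name)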
Also, another bigger problem with the non-type-safe Spark API is that when you join two Datasets, it gives you a DataFrame, and you then lose the types from your original two Datasets.
val a: Dataset[A] = ???
val b: Dataset[B] = ???

val joined: DataFrame = a.join(b)

// What would be great is:
// val joined: Dataset[C] = a.join(b)(implicit func: (A, B) => C)
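One way to approximate that today (a sketch; typedJoin is a hypothetical name) is to build on joinWith, which does preserve both element types:

import org.apache.spark.sql.{Column, Dataset, Encoder}

// Joins with an explicit condition and immediately maps the typed pair
// into a combined result type, so no DataFrame ever surfaces.
def typedJoin[A, B, C: Encoder](a: Dataset[A], b: Dataset[B], condition: Column)
    (combine: (A, B) => C): Dataset[C] =
  a.joinWith(b, condition).map { case (left, right) => combine(left, right) }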