While poring over the code of Predef.scala, I noticed the following:
/** A type for which there is always an implicit value.
 *  @see [[scala.Array$]], method `fallbackCanBuildFrom`
 */
class DummyImplicit

object DummyImplicit {
  /** An implicit value yielding a `DummyImplicit`.
   *  @see [[scala.Array$]], method `fallbackCanBuildFrom`
   */
  implicit def dummyImplicit: DummyImplicit = new DummyImplicit
}
Does anyone have a clue why this piece of seemingly useless code exists?
Ultimately it comes down to type erasure (which both Java and Scala use).
Imagine this code:
object Foo {
  def foo(p: String) = 1
  def foo(p: Int) = 2
  def foo(p: Any) = 3
}

object Main extends App {
  Foo.foo("1")
}
Everything is good here: even after erasure the three parameter types (String, Int, Object) remain distinct, so the overloads can coexist. But what if we change the parameters from individual values to varargs sequences?
object Foo {
  def foo(ps: String*) = 1
  def foo(ps: Int*) = 2
  def foo(ps: Any*) = 3
}

object Main extends App {
  Foo.foo("1")
}
Now we have an error:
Main.scala:4: error: double definition:
def foo(ps: Int*): Int at line 3 and
def foo(ps: Any*): Int at line 4
have same type after erasure: (ps: Seq)Int
  def foo(ps: Any*) = 3
      ^
Main.scala:3: error: double definition:
def foo(ps: String*): Int at line 2 and
def foo(ps: Int*): Int at line 3
have same type after erasure: (ps: Seq)Int
  def foo(ps: Int*) = 2
      ^
two errors found
Note the message "have same type after erasure" - that's our clue.
So why did the varargs version fail?
Because the JVM does not retain generic type information at runtime: your strongly typed collections (like a sequence of Ints or Strings) really aren't, once compiled. They become containers of Object, because that is all the JVM sees. The type parameters are "erased".
So after compiling these all look something like this (we'll see exactly what they are in a moment):
object Foo {
  def foo(ps: Object*) = 1
  def foo(ps: Object*) = 2
  def foo(ps: Object*) = 3
}
Obviously that is not what was intended.
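You can also observe the erasure directly at runtime; a quick sketch (ErasureDemo is just a name picked for illustration):

object ErasureDemo extends App {
  val ints: Seq[Int] = Seq(1, 2, 3)
  val strings: Seq[String] = Seq("a", "b")
  // The element types exist only at compile time; at runtime both
  // values are instances of the very same List class.
  println(ints.getClass == strings.getClass) // prints: true
}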
So how does Java cope with erasure?
For generic overriding it creates synthetic Bridge Methods behind the scenes. Magic!
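scalac plays the same trick for generic overriding; a minimal sketch, with the bridge shown roughly as scalac -print renders it:

trait Box[T] {
  def get: T
}

class IntBox extends Box[Int] {
  def get: Int = 42
  // scalac -print shows a synthetic bridge here, roughly:
  //   <bridge> def get(): Object = scala.Int.box(IntBox.this.get())
  // It boxes the Int and forwards, satisfying the erased
  // Box.get(): Object signature.
}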
Bridge methods only solve the overriding case, though - javac rejects same-erasure overloads just as scalac does. Scala doesn't attempt that kind of magic for overloads (though it has been discussed) - rather it uses dummy implicits.
Let's change our definition:

object Foo {
  def foo(ps: String*) = 1
  def foo(ps: Int*)(implicit i: DummyImplicit) = 2
  def foo(ps: Any*)(implicit i1: DummyImplicit, i2: DummyImplicit) = 3
}
And now it compiles! But ... why?
Let's look at the code that scalac generated (scalac -print foo.scala):

object Foo extends Object {
  def foo(ps: Seq): Int = 1;
  def foo(ps: Seq, i: Predef$DummyImplicit): Int = 2;
  def foo(ps: Seq, i1: Predef$DummyImplicit, i2: Predef$DummyImplicit): Int = 3;
  def <init>(): Foo.type = {
    Foo.super.<init>();
    ()
  }
};
OK - so we have three distinct foo methods: after erasure the dummy implicits became ordinary trailing parameters, so the three signatures differ in arity and the JVM is happy.
And now let's call them:
object Main extends App {
  Foo.foo("1")
  Foo.foo(1)
  Foo.foo(1.0)
}
And what does the compiled output look like? (I'm removing a LOT of other code here ...)
Foo.foo(scala.this.Predef.wrapRefArray(Array[String]{"1"}.$asInstanceOf[Array[Object]]()));
Foo.foo(scala.this.Predef.wrapIntArray(Array[Int]{1}), scala.Predef$DummyImplicit.dummyImplicit());
Foo.foo(scala.this.Predef.genericWrapArray(Array[Object]{scala.Double.box(1.0)}), scala.Predef$DummyImplicit.dummyImplicit(), scala.Predef$DummyImplicit.dummyImplicit());
So each call was given the implicit parameter needed to properly disambiguate the call.
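And since the dummy is an ordinary implicit parameter, nothing stops you from supplying it yourself; a small sketch against the Foo defined above:

object Main extends App {
  Foo.foo(1)                              // the compiler supplies the dummy
  Foo.foo(1)(DummyImplicit.dummyImplicit) // or pass it explicitly
}

Both calls resolve to the Int* overload.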
So why does DummyImplicit exist? To guarantee there is a type for which an implicit value is always available (otherwise you'd have to define and import one yourself).
Its documentation says it all: "A type for which there is always an implicit value." The implicit value is always there to be used in cases like this.
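If DummyImplicit didn't exist, you could roll your own marker type - which is exactly the boilerplate it saves you. A minimal sketch (Disambiguator and evidence are hypothetical names, invented for illustration):

class Disambiguator
object Disambiguator {
  // Found automatically: companion objects are part of implicit scope.
  implicit val evidence: Disambiguator = new Disambiguator
}

object Bar {
  def bar(ps: String*) = 1
  def bar(ps: Int*)(implicit d: Disambiguator) = 2
}

DummyImplicit simply ships that marker in Predef, so an implicit value for it is in scope everywhere without any extra definitions or imports.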