
How bad are implicit definitions?

I like implicit definitions. They make the code look nice, and they make features feel naturally available on a class when they are really just implicit definitions. Still, I was thinking about JS prototypes, where you can define a method on a class you did not write. If the next version of that class defines a method with the same signature and makes assumptions about its behaviour, you're screwed.

Scala's implicits let you do almost the same thing, with one major difference: implicit definitions are scoped, so there is no risk of the author of a class having code injected into it by an implicit definition in someone else's code. But what about the user's code? Is it protected from a change in the class they're adding implicits to?

Let's consider this code:

class HeyMan {
    def hello = println("Hello")
}

object Main extends App {
    val heyMan = new HeyMan

    implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What's up ?")
    }
    heyMan.hello // prints Hello
}

Pretty bad, isn't it? To me, the correct behaviour would be for the implicit definition to always shadow the real definition, so that user code is protected from the appearance of new methods in the API it's calling into.

What do you think? Is there a way to make this safe, or should we stop using implicits this way?

asked Oct 02 '15 by Dici

1 Answer

The behavior of the language with regard to implicit conversions is defined very clearly:

if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m.

http://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html

In other words, an implicit conversion will never be applied to heyMan in the expression heyMan.hello if the (statically known) class/trait of heyMan already defines the method hello: implicit conversions are tried only for methods that the static type does not already define.
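A minimal sketch of that resolution rule (hypothetical names, not from the question): a method the class already defines always wins, while a method the class lacks is reachable only through the implicit conversion.

```scala
object PrecedenceDemo extends App {
  class HeyMan {
    def hello: String = "Hello"
  }

  implicit class RichHeyMan(val self: HeyMan) {
    // Same name as the real method: never selected by an implicit conversion
    def hello: String = "What's up?"
    // A name HeyMan lacks: reachable only through the conversion
    def shout: String = "HEY!"
  }

  val heyMan = new HeyMan
  println(heyMan.hello) // prints Hello -- the real method wins
  println(heyMan.shout) // prints HEY! -- supplied by the implicit class
}
```

So adding same-named extensions via an implicit class is effectively dead code as soon as the underlying class defines the method itself, which is exactly the situation the question describes.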


To me, the correct behaviour would be for the implicit definition to always shadow the real definition, so that user code is protected from the appearance of new methods in the API it's calling into.

Isn't the opposite case equally dangerous? If the implicit conversion did take precedence, then the user's methods that have worked for years could suddenly be shadowed by a new implicit conversion introduced in a new version of a library dependency.

This case seems much more insidious and difficult to debug than the case that a user's explicit definition of a new method takes precedence.


Is there a way to make it safe or should we stop using implicits this way ?

If it is really critical that you get the implicit behavior, maybe you should force the implicit conversion with an explicit type:

object Main extends App {
    val heyMan = new HeyMan

    implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What's up ?")
    }

    heyMan.hello // prints Hello

    val iHeyMan: ImplicitHeyMan = heyMan // force conversion via implicit
    iHeyMan.hello // prints What's up
}

From our (extended) conversation in the comments, it seems like you want a way to check that the underlying class won't define the method you're using through the implicit conversion.

I think Łukasz's comment below is right on—this is something you should catch in testing. Specifically, you could use ScalaTest's assertTypeError for this. Just try calling the method outside the scope of your implicit, and it should fail to type check (and pass the test):

// Should pass only if your implicit isn't in scope,
// and the underlying class doesn't define the hello method
assertTypeError("(new HeyMan).hello")
answered Nov 09 '22 by DaoWen