I'm currently researching ideas for a new programming language, and I would ideally like it to mix functional and procedural (object-oriented) concepts.
One of the things that really fascinates me about languages like Haskell is that they are statically typed, yet you do not have to annotate types (magic thanks to Hindley-Milner!).
I would really like this for my language. However, after reading up on the subject it seems that most agree type inference is impractical or impossible in the presence of subtyping/object orientation, and I still do not understand why. I do not know F#, but I understand that it uses Hindley-Milner AND is object-oriented.
I would really like an explanation for this, and preferably examples of scenarios where type inference is impossible for object-oriented languages.
Type inference or deduction refers to the automatic detection of the data type of an expression in a programming language. It is a feature present in some strongly statically typed languages.
Languages that support type inference include C++11, C# (starting with version 3.0), Chapel, Clean, Crystal, D, F#, FreeBASIC, Go, Haskell, Java (starting with version 10), Julia, Kotlin, ML, Nim, OCaml, Opa, Q#, RPython, Rust, Scala, Swift, TypeScript, Vala, Dart, and Visual Basic (starting with version 9.0).
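As a quick illustration of what such inference buys you, an ML-family compiler such as OCaml works out fully general types with no annotations at all (a minimal sketch):

    (* No type annotations anywhere; OCaml infers
         add   : int -> int -> int
         twice : ('a -> 'a) -> 'a -> 'a        (fully polymorphic) *)
    let add x y = x + y
    let twice f x = f (f x)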
To add to seppk's response: with structural object types the problem he describes actually goes away (f could be given a polymorphic type like ∀A ≤ {x : Int, y : Int}. A → Int, or even just {x : Int, y : Int} → Int). However, type inference is still problematic.
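OCaml's object types are structural and can express the second of those types directly (a small sketch; the object p and its methods are made up for illustration):

    (* f is constrained to a closed structural object type:
       f : < x : int; y : int > -> int *)
    let f (obj : < x : int; y : int >) = obj#x + obj#y

    (* An object with an extra method is a structural subtype, but the
       coercion must be written explicitly; OCaml does not infer it. *)
    let p = object method x = 1 method y = 2 method z = 3 end
    let _ = f (p :> < x : int; y : int >)

Note that the coercion has to be written by hand: OCaml never infers subtyping, which is exactly the difficulty described below.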
The fundamental reason is this: in a language without subtyping, the typing rules impose equality constraints on types. These are very nice to deal with during type checking, because they can usually be simplified immediately using unification on the types. With subtyping, however, these constraints are generalised to inequality constraints. You cannot use unification any more, which has at least three unpleasant consequences:
1. Constraints can no longer be eliminated on the spot by substitution, so the set of collected constraints keeps growing as you check more of the program.
2. The inferred types are cluttered with these constraint sets, which makes them hard to read and hard to report in error messages.
3. Simplifying and solving sets of inequality constraints is much more expensive than unification, and it is no longer obvious what the most general ("principal") type of an expression even is.
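The first point is easiest to see on a single application (the type variable α here is mine, purely for illustration). Suppose f : α → Int and we check the call f 3:

- Without subtyping, the argument produces the constraint α = Int; unification turns it into the substitution [α := Int] and the constraint vanishes immediately.
- With subtyping, the argument only produces Int ≤ α. Any supertype of Int satisfies it, so there is no single substitution to apply; the constraint has to be recorded and carried along, and every further call adds more inequalities to the set.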
Thus, type inference for subtyping is not impossible (there have been many papers on the subject in the 90s), but it is not very practical.
A much simpler alternative is employed by OCaml, which uses so-called row polymorphism in place of subtyping. That is actually tractable.
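With row polymorphism, the same function needs no annotation or coercion at all; OCaml infers a row-polymorphic type (again just a sketch):

    (* No annotation; OCaml infers
         f : < x : int; y : int; .. > -> int
       where ".." is a row variable standing for "any further methods". *)
    let f obj = obj#x + obj#y

    let p = object method x = 1 method y = 2 method z = 3 end
    let _ = f p     (* accepted directly, no coercion needed *)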
When using nominal typing (that is, a type system in which two classes are not interchangeable even when their members have the same names and types), there would be many possible types for a function like the following:

    let f(obj) =
        obj.x + obj.y

Any class that has both a member x and a member y (of types that support the + operator) would qualify as a possible type for obj, and the type inference algorithm would have no way of knowing which one you want.
In F#, the above code would need a type annotation. So F# has object orientation and type inference, but not at the same time (with the exception of local type inference, as in let myVar = expressionWhoseTypeIKnow, which always works).
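To make the ambiguity concrete: OCaml's records are nominal, so two record types with identical fields already exhibit the problem (a sketch; the type names point and vec are made up here):

    type point = { x : int; y : int }
    type vec   = { x : int; y : int }

    (* Without an annotation the compiler has no way of knowing which
       record type is meant; OCaml falls back to the most recently
       defined one, so f gets the type vec -> int, not point -> int. *)
    let f r = r.x + r.y

    (* An annotation resolves the ambiguity, much as it does in F#: *)
    let g (r : point) = r.x + r.y      (* g : point -> int *)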