As far as I know, it's any programming language that doesn't require you to write type annotations in the source while writing a function or module; if that chunk of code is "type-correct", the compiler will infer the types and compile the code. Is there more to it?
Is there such a language (or are there several)? If yes, are there any limitations on its type system?
Update 1: Just to be really clear, I am asking about a statically typed, fully type-inferred programming language, not a dynamically typed programming language.
What is type inference?
Historically, type inference (or type reconstruction) meant that all types in a program can be derived without requiring essentially any explicit type annotation. However, in recent years, it has become en vogue in the programming language mainstream to label even the most trivial forms of bottom-up type deduction as "type inference" (e.g., the new auto declarations of C++11). So people have started adding "full" to refer to the "real" thing.
What is full type inference?
There is a broad spectrum of how much a language can infer, and in practice almost no language supports "full" type inference in the strictest sense (core ML is about the only example). But the main distinguishing factor is whether types can be derived for bindings that do not have a "definition" attached to them, in particular parameters of functions. If you can write, say,
f(x) = x + 1
and the type system figures out that f e.g. has type Int → Int, then it makes sense to call this type inference. Moreover, we talk about polymorphic type inference when, e.g.,
g(x) = x
is assigned the generic type ∀(t) t → t automatically.
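To make this concrete, here is a minimal sketch of those two definitions in OCaml (whose notation for the generic type ∀(t) t → t is 'a -> 'a); no annotations appear in the source, yet the compiler reconstructs both types:
let f x = x + 1   (* inferred: val f : int -> int *)
let g x = x       (* inferred: val g : 'a -> 'a, i.e. ∀(t) t → t *)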
Type inference was invented in the context of the simply-typed lambda calculus, and polymorphic type inference (aka Hindley/Milner type inference, invented in the 1970s) is the claim to fame of the ML family of languages (Standard ML, OCaml, and arguably Haskell).
What are the limits of full type inference?
Core ML has the luxury of "full" polymorphic type inference. But it hinges on certain limitations of polymorphism in its type system. In particular, only definitions can be generic, not function arguments. That is,
id(x) = x;
id(5);
id(True)
works fine, because id can be given a polymorphic type when the definition is known. But
f(id) = (id(5); id(True))
does not type check in ML, because id cannot be polymorphic as a function argument. In other words, the type system does allow polymorphic types like ∀(t) t → t, but not so-called higher-rank polymorphic types like (∀(t) t → t) → Bool, where polymorphic values are used in a first-class manner (which, just to be clear, even very few explicitly typed languages allow).
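As an aside, OCaml is one language that can express such a higher-rank type, but only when you write it out explicitly. Here is a small sketch of the f above, using a record with a polymorphic field (the name poly_id is made up for illustration):
type poly_id = { id : 'a. 'a -> 'a }   (* wraps a value of type ∀(t) t → t *)
let f r = ignore (r.id 5); r.id true   (* in effect (∀(t) t → t) → Bool *)
let ok = f { id = fun x -> x }         (* ok = true *)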
The polymorphic lambda calculus (also called "System F"), which is explicitly typed, allows the latter. But it is a standard result in type theory that type reconstruction for full System F is undecidable. Hindley/Milner hits a sweet spot of a slightly less expressive type system for which type reconstruction still is decidable.
There are more advanced type system features that also make full type reconstruction undecidable. And there are others that keep it decidable but still make it infeasible, e.g. the presence of ad-hoc overloading or subtyping, because that leads to combinatorial explosion.
The limitation of full type inference is that it doesn't work with many advanced type system features. As examples, consider Haskell and OCaml. Both of these languages are almost fully type inferred, but have some features that can interfere with type inference.
In Haskell it's type classes combined with polymorphic return types:
readAndPrint str = print (read str)
Here read is a function of type Read a => String -> a, which means "for any type a that supports the type class Read, the function read can take a String and return an a". So if f is a function that takes an Int, I can write f (read "123") and it will convert "123" to the Int 123 and call f with it. It knows that it should convert the string to an Int because f takes an Int. If f took a list of Ints, it would try to convert the string to a list of Ints instead. No problem.
But for the readAndPrint function above that approach does not work. The problem here is that print can take an argument of any type that can be printed (that is, any type that supports the Show type class). So there's no way for the compiler to know whether you want to convert the string to an Int, a list of Ints, or anything else that can be printed. So in cases like this you need to add a type annotation.
In OCaml the problematic feature is polymorphic methods in classes: if you define a function that takes an object as its argument and calls a method on that object, the compiler will infer a monomorphic type for that method. For example:
let f obj = obj#meth 23 + obj#meth 42
Here the compiler will infer that obj must be an instance of a class that has a method named meth of type int -> int, i.e. a method that takes an int and returns an int. You can now define a bunch of classes that have such a method and pass instances of those classes as arguments to f. No problem.
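For example, something like this (the class name doubler is made up for illustration):
class doubler = object
  method meth n = n * 2               (* inferred: meth : int -> int *)
end
let _ = f (new doubler)               (* 23*2 + 42*2 = 130 *)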
The problem occurs if you define a class with a method of type 'a. 'a -> int, i.e. a method that can take an argument of any type and return an int. You cannot pass an object of that class as an argument to f, because it doesn't match the inferred type. If you want f to take such an object as its argument, the only way is to add a type annotation to f.
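A sketch of that situation (the class name any_counter and the annotated variant g are made up for illustration):
class any_counter = object
  method meth : 'a. 'a -> int = fun _ -> 1   (* a polymorphic method needs this explicit annotation *)
end
(* f (new any_counter) is rejected under the type inferred above, but an
   annotated variant accepts it, and can even use meth at several types: *)
let g (obj : any_counter) = obj#meth 23 + obj#meth "forty-two"   (* = 2 *)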
So those were examples of languages that are almost fully type inferred, and of cases where they're not. If you removed the problematic features from those languages, they would be fully type inferred.
Consequently, basic dialects of ML without such advanced features are fully type inferred. For instance, I assume that Caml Light is fully type inferred, since it's basically OCaml without classes (however, I don't actually know the language, so that's just an assumption).