In mathematics, logic, and computer science, a type theory is the formal presentation of a specific type system, and in general type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics.
Between 1971 and 1984, Per Martin-Löf developed a type theory that corresponds to predicate logic by introducing dependent types; it became known as intuitionistic type theory or Martin-Löf type theory.
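For a concrete (if anachronistic) flavor of what a dependent type looks like, here is a minimal sketch in Lean 4, a modern proof assistant in the Martin-Löf tradition; the names `Vec` and `Vec.head` are illustrative, not taken from any standard library.

```lean
-- A dependent type: the length n is part of the type of the vector itself.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- Because the type records the length, `head` can only be applied to a
-- non-empty vector; the empty case is ruled out by type checking alone.
def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | .cons x _ => x
```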
The theory of types, in logic, is a theory introduced by the British philosopher Bertrand Russell in his Principia Mathematica (1910–13) to deal with logical paradoxes arising from the unrestricted use of predicate functions as variables.
In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type to every "term" (a word, phrase, or other set of symbols). Usually the terms are various constructs of a computer program, such as variables, expressions, functions, or modules.
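As a minimal sketch of that idea (in Haskell, chosen purely for illustration), the type checker assigns a type to each term and rejects programs whose terms do not fit together:

```haskell
-- The type system assigns a type to every term: `increment` gets Int -> Int,
-- `greeting` gets String, and the application below gets Int.
increment :: Int -> Int
increment n = n + 1

greeting :: String
greeting = "hello, types"

-- bad = increment greeting   -- rejected at compile time: String is not Int

main :: IO ()
main = print (increment 41)   -- prints 42
```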
Type theory is a big area. First of all, the term "types" is a bit of a misnomer in computer science, for a few reasons, even though it is mostly used for the same basic idea. Types have arisen in many contexts (philosophy, computer science, and mathematics), for mostly the same reasons. In mathematics, types originated in attempts to formalize set theory, which ran into nasty paradoxes; similar paradoxes arise in computer science (Girard's paradox is the one I like to point to here).
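The classic instance from naive set theory is Russell's paradox; Girard's paradox plays the analogous role in a type theory where the type of all types is itself a type (Type : Type). In LaTeX notation:

$$R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; \bigl(R \in R \iff R \notin R\bigr)$$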
The way you probably think of types right now is an idea that became popular in computing from the 1970s through the 1990s: types are a lightweight, flow-insensitive analysis that lets us make concise logical guarantees about the programs we write. Types can be used for that, but you can actually take them all the way to encoding higher-order logic, where programs are proofs. Once you go there, you can take a proof, extract its computational component, and turn it into a program that computes a "correct" result (with respect to the theorem you proved).
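Here is the simplest instance of that programs-as-proofs reading, sketched in Lean (the names are just for illustration): the same lambda term is a proof when read at `Prop` and an ordinary, runnable function when read at `Type`.

```lean
-- Read as logic: a proof of the proposition "P implies (Q implies P)".
theorem weaken (P Q : Prop) : P → Q → P :=
  fun hp _ => hp

-- Read as a program: the same term is the constant function, which we can run.
def constFn (α β : Type) : α → β → α :=
  fun a _ => a

#eval constFn Nat String 42 "ignored"  -- 42
```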
There are a few roads you can take from here, and a few related fields you might also want to look into.
I have a few great recommendations for learning beyond this, but I'd definitely start with Ben Pierce's books (I've never really gotten into the follow-up book, "Advanced Topics in Types and Programming Languages," but that may also interest you). I also remember the Handbook of Automated Reasoning having a great article on higher-order type theory.
P.S. I'm convinced that the concrete answer to this question is "get a PhD in programming languages, philosophy, or logic..." ;-)