The Wikipedia article on Effect system is currently just a short stub, and I've been wondering for a while what an effect system actually is.
The effect kind describes what is being done, and the region describes with what (parameters) it is being done. An effect system is typically an extension of a type system; the term "type and effect system" is sometimes used in this case. Often, the type of a value is denoted together with its effect as type ! effect.
An effect is anything that can be observed from outside the program. The role of a function is to return a value, and a side effect is anything, besides the returned value, that is observable from outside the function. It's called a side effect because it comes in addition to the value that's returned.
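To make the distinction concrete, here's a minimal Haskell sketch (the names double and greet are invented for illustration): the first function's only observable behaviour is the value it returns, while the second also writes to standard output, and Haskell records that fact in its type.

```haskell
-- A pure function: its only observable behaviour is the value it returns.
double :: Int -> Int
double x = 2 * x

-- A function with a side effect: besides its return value (here just ()),
-- it writes to standard output, which is observable from outside the
-- function; the IO in the type records that.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)
```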
Algebraic effects are an approach to computational effects based on a premise that impure behaviour arises from a set of operations such as get & set for mutable store, read & print for interactive input & output, or raise for exceptions [16,18].
A "type and effect system" describes not only the kinds of values in a program, but also the changes in those values. "Typestate" checking is a related idea.
An example might be a type system that tracks file handles: instead of having a function close with return type void, the type system would record the effect of close as disposing of the file resource, so any attempt to read from or write to the file after calling close would become a type error.
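One way to approximate this in an ordinary type system is the region trick that Haskell's runST uses. This is only a hedged sketch with stubbed-out I/O: Handle, withFile, and readLine here are invented names, not a real file API. The phantom type s ties each handle to the withFile block that opened it, so the handle cannot be used once the block (which would close the file) has ended.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Toy region-based handle tracking (the same trick 'runST' uses).
-- The phantom type 's' ties each handle to one 'withFile' block.
newtype Handle s = Handle FilePath

-- A real version would open the file, run the body, and close the file.
withFile :: FilePath -> (forall s. Handle s -> a) -> a
withFile path body = body (Handle path)

readLine :: Handle s -> String
readLine (Handle p) = "<contents of " ++ p ++ ">"  -- stub: no real I/O

ok :: String
ok = withFile "notes.txt" readLine

-- Rejected by the type checker: the handle would escape its region,
-- i.e. survive past the point where the file is closed.
-- bad = withFile "notes.txt" (\h -> h)
```

The rank-2 quantifier in withFile is what does the work: the result type a cannot mention s, so a handle can never leak out of the scope that owns it.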
I'm not aware of any type and effect system appearing in a mainstream programming language. They have been used to define static analyses (for example, it's quite natural to define an analysis for proper locking/unlocking in terms of effects). As such, effect systems are usually defined using inference schemes rather than concrete syntax. You could imagine a syntax looking something like
File open(String name) [+File]; // open creates a new file handle
void close(File f) [-f]; // close destroys f
If you want to learn more, the following papers might be interesting (fair warning: the papers are quite theoretical).
(This is not an authoritative answer; just trying to trawl my memory.)
In a sense, any time you code a 'state monad' in a language, you're using the type system as a potential effect system. So "State" or "IO" in Haskell capture this notion (IO captures a whole lot of other effects as well). I vaguely remember reading papers about various languages that use advanced type systems, including things like "dependent types", to control finer-grained management of effects, so that for instance the type/effect system could capture which memory locations would be modified by a given computation. This is useful because it provides a way to let two functions that modify mutually exclusive bits of state "commute": monads don't typically commute, and different monads don't always compose well with one another, which often makes it hard to type (read: assign a static type to) 'reasonable' programs...
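As a concrete illustration of that first point, here is a minimal hand-rolled state monad (the real one lives in the transformers/mtl packages as Control.Monad.State; this toy version is self-contained). The payoff is that a type like State Int Int advertises, in the type itself, that the computation touches one Int of mutable state:

```haskell
-- A minimal hand-rolled State monad: a function from an initial
-- state to a (result, final state) pair.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g = State (\s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2))

instance Monad (State s) where
  State g >>= k = State (\s -> let (a, s') = g s in runState (k a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put new = State (\_ -> ((), new))

-- 'State Int Int' says, in the type itself, that this computation
-- reads and writes one Int of state: a poor man's effect annotation.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n
```

Running runState tick 5 yields (5, 6): the old state as the result, with the incremented state threaded out.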
An analogy at a very hand-wavy level is how Java has checked exceptions. You express extra information in the type system about certain effects (you can think of an exception as an 'effect' for the purpose of the analogy), but these 'effects' typically leak out all over your program and don't compose well in practice (you end up with a million 'throws' clauses or else resort to lots of unchecked runtime exception types).
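A rough Haskell analogue of a checked 'throws' clause is putting the possible failure into the return type with Either: every caller must either pattern-match on the error (catch) or pass the Left along (declare), much like Java's catch-or-declare rule. DivError and safeDiv below are invented for the sketch.

```haskell
-- Invented example: a 'throws'-style annotation encoded in the type.
data DivError = DivByZero deriving (Show, Eq)

-- The Either plays the role of "throws DivError": callers must
-- handle the Left case or propagate it in their own types.
safeDiv :: Int -> Int -> Either DivError Int
safeDiv _ 0 = Left DivByZero
safeDiv x y = Right (x `div` y)
```

The composition pain described above shows up here too: once one call in a chain returns Either, everything downstream of it has to mention the error type or explicitly discharge it.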
I think a lot of research is being done in this area, both for research-y and mainstream-y languages, as the ability to annotate functions with effect information can unlock the compiler's ability to do a number of optimizations, can impact concurrency, and can do great things for various program analyses and tooling. I don't personally have high hopes for it any time soon, though, as I think lots of smart people have been working on it for a long time and there's still very little to show for it.