The typeannotations module provides a set of tools for type checking and type inference of Python code. It also provides a set of types useful for annotating functions and objects. These tools are mainly designed to be used by static analyzers such as linters, code completion libraries and IDEs.
__annotations__ quirks: accessing __annotations__ on a function that has none will lazily create a new empty dict, which the function object stores and returns as its annotations from then on. Deleting the annotations on a function before it has lazily created its annotations dict will throw an AttributeError; using del fn.__annotations__ twice in a row is guaranteed to always throw an AttributeError.
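A quick illustration of the first quirk (a minimal sketch; the exact del behaviour differs between Python versions, as described above):

def fn():
    pass

# The first access lazily creates, stores and returns an empty dict.
print(fn.__annotations__)                        # {}
print(fn.__annotations__ is fn.__annotations__)  # True - the same stored dict

def gn():
    pass

# Deleting annotations that have never been created may raise AttributeError
# (version-dependent, as noted above).
try:
    del gn.__annotations__
except AttributeError as exc:
    print("AttributeError:", exc)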
For functions, you can annotate arguments and the return value. This is done as follows:

def func(arg: arg_type, optarg: arg_type = default) -> return_type:
    ...

For arguments the syntax is argument: annotation, while the return type is annotated using -> annotation.
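For example (a tiny sketch; greet is just an illustrative name):

def greet(name: str, times: int = 1) -> str:
    return ' '.join(['Hello ' + name] * times)

# The annotations are collected into the function's __annotations__ dict:
print(greet.__annotations__)
# {'name': <class 'str'>, 'times': <class 'int'>, 'return': <class 'str'>}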
What are function annotations? Function annotations are arbitrary Python expressions that are associated with various parts of a function. They are evaluated when the def statement executes and are stored in the function's __annotations__ attribute, but Python itself does not attach any meaning to them.
Function annotations are what you make of them.
They can be used for documentation:
def kinetic_energy(mass: 'in kilograms', velocity: 'in meters per second'):
...
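Nothing stops a tool (or a reader) from pulling those strings back out again; something along these lines, where describe is a hypothetical helper shown only as a sketch:

def describe(func):
    # Print a one-line description of each parameter from its annotation.
    for param, note in func.__annotations__.items():
        print('{}: {}'.format(param, note))

describe(kinetic_energy)
# mass: in kilograms
# velocity: in meters per second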
They can be used for pre-condition checking:
def validate(func, locals):
    # Check each annotated argument against its annotation, which here is a
    # predicate function that must return True for a valid value.
    for var, test in func.__annotations__.items():
        value = locals[var]
        msg = 'Var: {0}\tValue: {1}\tTest: {2.__name__}'.format(var, value, test)
        assert test(value), msg

def is_int(x):
    return isinstance(x, int)

def between(lo, hi):
    def _between(x):
        return lo <= x <= hi
    return _between

def f(x: between(3, 10), y: is_int):
    validate(f, locals())
    print(x, y)
>>> f(0, 31.1)
Traceback (most recent call last):
...
AssertionError: Var: y Value: 31.1 Test: is_int
Also see http://www.python.org/dev/peps/pep-0362/ for a way to implement type checking.
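A minimal sketch of that idea using inspect.signature (the Signature objects from PEP 362); check_types here is a hypothetical helper, not part of the standard library, and it only handles annotations that are plain types:

import functools
import inspect

def check_types(func):
    # Hypothetical decorator: verify each argument against its annotation.
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('{} must be {}, got {!r}'.format(
                    name, expected.__name__, value))
        return func(*args, **kwargs)
    return wrapper

@check_types
def scale(x: int, factor: float):
    return x * factor

scale(3, 2.0)      # fine
scale('3', 2.0)    # raises TypeError: x must be int, got '3'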
I think this is actually great.
Coming from an academic background, I can tell you that annotations have proved themselves invaluable for enabling smart static analyzers for languages like Java. For instance, you could define semantics like state restrictions, which threads are allowed access, architecture limitations, etc., and there are quite a few tools that can then read these and process them to provide assurances beyond what you get from the compiler. You could even write things that check preconditions and postconditions.
I feel something like this is especially needed in Python because of its weaker typing, but there were really no constructs that made this straightforward and part of the official syntax.
There are other uses for annotations beyond assurance. I can see how I could apply my Java-based tools to Python. For instance, I have a tool that lets you assign special warnings to methods and gives you an indication, when you call them, that you should read their documentation (e.g., imagine you have a method that must not be invoked with a negative value, but that isn't intuitive from its name). With annotations, I could technically write something like this for Python. Similarly, a tool that organizes the methods of a large class based on tags could be written now that there is an official syntax.
This is a way late answer, but AFAICT, the best current use of function annotations is PEP 484 and mypy.
Mypy is an optional static type checker for Python. You can add type hints to your Python programs using the standard for type annotations introduced in Python 3.5 (PEP 484), and use mypy to type check them statically.
Used like so:
from typing import Iterator

def fib(n: int) -> Iterator[int]:
    a, b = 0, 1
    while a < n:
        yield a
        a, b = b, a + b
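With the hints in place, the checker can flag a bad call without ever running the code. For example, suppose the snippet above plus a deliberately wrong call lives in a file (example.py is just an illustrative name, and the exact error wording may differ between mypy versions):

# example.py
from typing import Iterator

def fib(n: int) -> Iterator[int]:
    a, b = 0, 1
    while a < n:
        yield a
        a, b = b, a + b

list(fib('10'))   # deliberately passes a str where an int is expected

# $ mypy example.py
# example.py: error: Argument 1 to "fib" has incompatible type "str"; expected "int"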
Just to add a specific example of a good use from my answer here: coupled with decorators, annotations make a simple mechanism for multimethods possible.
# This is in the 'mm' module
import inspect

registry = {}

class MultiMethod(object):
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def __call__(self, *args):
        # Dispatch on the classes of the actual arguments.
        types = tuple(arg.__class__ for arg in args)  # a generator expression!
        function = self.typemap.get(types)
        if function is None:
            raise TypeError("no match")
        return function(*args)

    def register(self, types, function):
        if types in self.typemap:
            raise TypeError("duplicate registration")
        self.typemap[types] = function

def multimethod(function):
    # Look up (or create) the MultiMethod for this function name and
    # register the implementation under its tuple of annotated types.
    name = function.__name__
    mm = registry.get(name)
    if mm is None:
        mm = registry[name] = MultiMethod(name)
    spec = inspect.getfullargspec(function)
    types = tuple(spec.annotations[x] for x in spec.args)
    mm.register(types, function)
    return mm
and an example of use:
from mm import multimethod

@multimethod
def foo(a: int):
    return "an int"

@multimethod
def foo(a: int, b: str):
    return "an int and a string"

if __name__ == '__main__':
    print("foo(1, 'a') = {}".format(foo(1, 'a')))
    print("foo(7) = {}".format(foo(7)))
This can also be done by adding the types to the decorator, as Guido's original post shows, but annotating the parameters themselves is better because it avoids the possibility of mismatching parameters and types.
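For contrast, here is a rough sketch of what that decorator-argument style could look like (multimethod_types is a made-up name, reusing MultiMethod and registry from the mm module above); note that nothing ties the listed types to the actual parameters:

from mm import registry, MultiMethod

def multimethod_types(*types):
    # The dispatch types are spelled out in the decorator call, so they must
    # be kept in sync with the parameter list by hand.
    def register(function):
        name = function.__name__
        mm = registry.get(name)
        if mm is None:
            mm = registry[name] = MultiMethod(name)
        mm.register(types, function)
        return mm
    return register

@multimethod_types(int, str)
def bar(a, b):
    return "an int and a string"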
Note: in Python 3 you access the annotations as function.__annotations__ rather than function.func_annotations, as the func_* style was removed in Python 3.
Uri has already given a proper answer, so here's a less serious one: So you can make your docstrings shorter.