cache is a decorator that avoids re-executing a function for inputs it has already seen, using the memoization technique. It behaves the same as lru_cache(maxsize=None): the cache grows indefinitely and never evicts old values.
Memoization allows you to optimize a Python function by caching its output based on the parameters you supply to it. Once you memoize a function, it will only compute its output once for each set of parameters you call it with. Every call after the first will be quickly retrieved from a cache.
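A minimal example (the factorial example from the functools docs), assuming Python 3.9 or later where functools.cache exists:

from functools import cache  # Python 3.9+; same as lru_cache(maxsize=None)

@cache
def factorial(n):
    # Each distinct n is computed only once per process.
    return n * factorial(n - 1) if n else 1

factorial(10)  # makes 11 recursive calls and caches all of them
factorial(5)   # cheap: the result is already in the cache
factorial(12)  # only computes and caches two new values (11 and 12)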
One way to implement an LRU cache in Python is to use a combination of a doubly linked list and a hash map. The head element of the doubly linked list would point to the most recently used entry, and the tail would point to the least recently used entry.
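A rough sketch of that layout follows; the Node and LRUCache names and methods are illustrative, not from any particular library, and sentinel nodes are assumed at both ends of the list:

class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                      # key -> Node
        self.head = Node()                 # sentinel: most recently used side
        self.tail = Node()                 # sentinel: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._remove(node)
        self._add_front(node)              # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.capacity:  # evict the least recently used entry
            lru = self.tail.prev
            self._remove(lru)
            del self.map[lru.key]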
cache can do much more (it works on functions with any arguments, properties, any type of methods, and even classes...).
Starting from Python 3.2 there is a built-in decorator:
@functools.lru_cache(maxsize=100, typed=False)
Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.
Example of an LRU cache for computing Fibonacci numbers:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
>>> print([fib(n) for n in range(16)])
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
>>> print(fib.cache_info())
CacheInfo(hits=28, misses=16, maxsize=None, currsize=16)
If you are stuck with Python 2.x, here's a list of other compatible memoization libraries (each is available on PyPI with its source code online):
functools32
repoze.lru
pylru
backports.functools_lru_cache
It sounds like you're not asking for a general-purpose memoization decorator (i.e., you're not interested in the general case where you want to cache return values for different argument values). That is, you'd like to have this:
x = obj.name # expensive
y = obj.name # cheap
while a general-purpose memoization decorator would give you this:
x = obj.name() # expensive
y = obj.name() # cheap
I submit that the method-call syntax is better style, because it suggests the possibility of expensive computation while the property syntax suggests a quick lookup.
[Update: The class-based memoization decorator I had linked to and quoted here previously doesn't work for methods. I've replaced it with a decorator function.] If you're willing to use a general-purpose memoization decorator, here's a simple one:
def memoize(function):
    memo = {}
    def wrapper(*args):
        if args in memo:
            return memo[args]
        else:
            rv = function(*args)
            memo[args] = rv
            return rv
    return wrapper
Example usage:
@memoize
def fibonacci(n):
    if n < 2: return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Another memoization decorator with a limit on the cache size can be found here.
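That linked decorator is not reproduced here, but as a rough sketch of the idea (the bounded_memoize name, the maxsize parameter, and the OrderedDict-based eviction are illustrative assumptions, not the linked code):

from collections import OrderedDict
from functools import wraps

def bounded_memoize(maxsize=128):           # hypothetical name
    def decorator(function):
        memo = OrderedDict()
        @wraps(function)
        def wrapper(*args):
            if args in memo:
                memo.move_to_end(args)      # mark as most recently used
                return memo[args]
            result = memo[args] = function(*args)
            if len(memo) > maxsize:
                memo.popitem(last=False)    # evict the least recently used entry
            return result
        return wrapper
    return decorator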
Python 3.8 functools.cached_property decorator
https://docs.python.org/dev/library/functools.html#functools.cached_property
cached_property from Werkzeug was mentioned at https://stackoverflow.com/a/5295190/895245, but a derived version has been merged into 3.8, which is awesome.
This decorator can be seen as a caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments.
The docs say:
@functools.cached_property(func)
Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.
Example:
class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)

    @cached_property
    def variance(self):
        return statistics.variance(self._data)
New in version 3.8.
Note: This decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don't provide a __dict__ attribute at all).
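As a hypothetical usage of the DataSet class above (the sample values are made up), showing that the result is stored on the instance after the first access:

import statistics  # needed by the DataSet example above

ds = DataSet([4.0, 8.0, 15.0, 16.0, 23.0, 42.0])
ds.stdev      # first access: computes statistics.stdev and stores it in ds.__dict__
ds.stdev      # cheap: read straight from ds.__dict__['stdev'], no recomputation
ds.variance   # each cached_property caches under its own attribute name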
class memorize(dict):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        return self[args]

    def __missing__(self, key):
        result = self[key] = self.func(*key)
        return result
Sample uses:
>>> @memorize
... def foo(a, b):
... return a * b
>>> foo(2, 4)
8
>>> foo
{(2, 4): 8}
>>> foo('hi', 3)
'hihihi'
>>> foo
{(2, 4): 8, ('hi', 3): 'hihihi'}