I have this simple cache decorator demo here:
import functools
import time

@functools.cache
def cached_fib(n):
    assert n > 0
    if n <= 2:
        return 1
    return cached_fib(n - 1) + cached_fib(n - 2)

t1 = time.perf_counter()
cached_fib(400)
t2 = time.perf_counter()
print(f"cached_fib: {t2 - t1}")  # 0.0004117000003134308
I want to access the actual cache dictionary object inside cached_fib, but when I try cached_fib.cache it raises AttributeError: 'functools._lru_cache_wrapper' object has no attribute 'cache', even though both the pure-Python and C implementations use a cache dictionary internally. Thank you!
The internals of the cache are encapsulated for thread safety and to allow the underlying implementation details to change.
Three public methods are provided instead. The first, cache_info(), gives some statistics:
>>> cached_fib.cache_info()
CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
>>> cached_fib(123)
22698374052006863956975682
>>> cached_fib.cache_info()
CacheInfo(hits=120, misses=123, maxsize=None, currsize=123)
The second, cache_parameters(), gives details about how the cache was configured:
>>> cached_fib.cache_parameters()
{'maxsize': None, 'typed': False}
The third, cache_clear(), purges the cache:
>>> cached_fib.cache_info()
CacheInfo(hits=120, misses=123, maxsize=None, currsize=123)
>>> cached_fib.cache_clear()
>>> cached_fib.cache_info()
CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
Additionally, the uncached version of the function is publicly available via __wrapped__, though this is not specific to functools.cache:
>>> cached_fib
<functools._lru_cache_wrapper object at 0x10b7a3270>
>>> cached_fib.__wrapped__
<function cached_fib at 0x10bb781f0>
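To illustrate that __wrapped__ is a general convention rather than a functools.cache feature: any decorator that uses functools.wraps gets the same attribute attached automatically. The logged decorator below is a hypothetical example, not part of any library:

```python
import functools

def logged(func):
    # Hypothetical pass-through decorator: functools.wraps copies the
    # function's metadata and attaches the original as __wrapped__.
    @functools.wraps(func)
    def inner(*args, **kwargs):
        return func(*args, **kwargs)
    return inner

@logged
def add(a, b):
    return a + b

print(add.__wrapped__)        # the original, undecorated function
print(add.__wrapped__(2, 3))  # 5
```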
If you want a cache implementation where the underlying dict is exposed to the user, I recommend the third-party cachetools library.
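If pulling in a third-party dependency is not an option, the same idea can be hand-rolled in a few lines. The dict_cache name below is hypothetical, and this sketch skips keyword arguments and thread safety, which is precisely the kind of thing the stdlib encapsulation protects you from:

```python
def dict_cache(func):
    # Hypothetical helper (not a real library API): memoize with a
    # plain dict and deliberately expose that dict as a public attribute.
    # Handles positional arguments only; not thread-safe.
    cache = {}
    def inner(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    inner.cache = cache
    return inner

@dict_cache
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(fib.cache[(10,)])  # 55 -- keys are the argument tuples
```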
There is no OfficialWay™ to get the underlying dictionary. It has been intentionally encapsulated so that the implementation is free to evolve. However, there is a sneaky backdoor way to look under the hood and get at the internal dictionary:
>>> import gc
>>> from pprint import pprint
>>> pprint(gc.get_referents(cached_fib))
[<class 'functools._lru_cache_wrapper'>,
 {1: 1,
  2: 1,
  3: 2,
  4: 3,
  5: 5,
  6: 8,
  7: 13,
  8: 21,
  9: 34,
  10: 55,
  11: 89,
  ...}]
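Building on that output, the dictionary can be pulled out of the referents programmatically. This leans on CPython implementation details (the wrapper's only dict referent happens to be the internal cache) and may break in other interpreters or future versions:

```python
import functools
import gc

@functools.cache
def cached_fib(n):
    return 1 if n <= 2 else cached_fib(n - 1) + cached_fib(n - 2)

cached_fib(10)

# CPython implementation detail: among the wrapper's referents, the
# only dict is the internal cache.  Do not rely on this in production.
cache_dict = next(r for r in gc.get_referents(cached_fib)
                  if isinstance(r, dict))
print(cache_dict[10])  # 55
```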
Instead of resorting to tricks, here's a tool that can track function inputs and outputs in a straightforward fashion:
def tracker(mapping):
    "Track inputs and outputs to a function"
    def deco(func):
        def inner(*args):
            result = func(*args)
            mapping[args] = result
            return result
        return inner
    return deco
Use it like this:
mydata = {}

@functools.cache
@tracker(mydata)
def cached_fib(n):
    assert n > 0
    if n <= 2:
        return 1
    return cached_fib(n - 1) + cached_fib(n - 2)
This gives you all the performance benefits of caching but also tracks the actual call inputs and outputs:
>>> cached_fib(20)
6765
>>> pprint(mydata)
{(1,): 1,
(2,): 1,
(3,): 2,
(4,): 3,
(5,): 5,
(6,): 8,
(7,): 13,
(8,): 21,
(9,): 34,
(10,): 55,
(11,): 89,
(12,): 144,
(13,): 233,
(14,): 377,
(15,): 610,
(16,): 987,
(17,): 1597,
(18,): 2584,
(19,): 4181,
(20,): 6765}