I want to have a function that can use functools.lru_cache, but not by default. I am looking for a way to use a function parameter to disable the lru_cache. Currently, I have two versions of the function, one with lru_cache and one without. Then I have another function that wraps these and takes a parameter that controls which one is used:
from functools import lru_cache

def _no_cache(foo):
    print('_no_cache')
    return 1

@lru_cache()
def _with_cache(foo):
    print('_with_cache')
    return 0

def cache(foo, use_cache=False):
    if use_cache:
        return _with_cache(foo)
    return _no_cache(foo)
Is there a simpler way to do this?
Clearing the LRU cache After using the cache, cache_clear() can be called to clear or invalidate it. The limitation is that each cache belongs to a single decorated function, so cache_clear() has to be called separately on every function that uses lru_cache.
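As an illustration (the square function here is just a made-up example), clearing applies only to the function whose cache_clear() you call:

from functools import lru_cache

@lru_cache()
def square(n):
    print(f"computing {n}")
    return n * n

square(4)             # computed and cached
square(4)             # returned from the cache, nothing is printed
square.cache_clear()  # empties the cache for square() only
square(4)             # computed again after the clear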
lru_cache basics To memoize a function in Python, we can use a utility supplied in Python's standard library: the functools.lru_cache decorator. Every time you run the decorated function, lru_cache checks for a cached result for the inputs provided. If the result is in the cache, lru_cache returns it instead of calling the function again.
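For example (fib is just a stand-in for an expensive function), repeated inputs are looked up rather than recomputed, and cache_info() reports the hit and miss counts:

from functools import lru_cache

@lru_cache()
def fib(n):
    print(f"computing fib({n})")
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(5)                   # each value is computed once, then reused
print(fib.cache_info())  # reports hits, misses, maxsize, and currsize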
Python's @lru_cache decorator offers a maxsize argument that defines the maximum number of entries before the cache starts evicting old items. By default, maxsize is set to 128. If you set maxsize to None, the cache will grow indefinitely and no entries will ever be evicted.
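A small sketch of the eviction behaviour, using an arbitrary maxsize of 2 and a made-up double function:

from functools import lru_cache

@lru_cache(maxsize=2)  # keep only the two most recently used results
def double(n):
    print(f"computing {n}")
    return n * 2

double(1); double(2); double(3)  # the entry for 1 is evicted when 3 arrives
double(1)                        # recomputed, prints "computing 1" again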
@functools.lru_cache(user_function)
@functools.lru_cache(maxsize=128, typed=False)
Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.
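The typed flag controls whether arguments of different types are cached separately; a minimal sketch (describe is an invented example):

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def describe(x):
    print(f"computing for {x!r}")
    return str(x)

describe(3)    # cached under the int argument 3
describe(3.0)  # typed=True treats the float 3.0 as a separate entry
describe(3)    # served from the cache, nothing is printed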
You can't disable the cache from inside the decorated function. However, you can simplify your code a bit by accessing the function directly via the __wrapped__ attribute.
From the documentation:
The original underlying function is accessible through the __wrapped__ attribute. This is useful for introspection, for bypassing the cache, or for rewrapping the function with a different cache.
Demo:
from functools import lru_cache

@lru_cache()
def f(arg):
    print(f"called with {arg}")
    return arg

def call(arg, use_cache=False):
    if use_cache:
        return f(arg)
    return f.__wrapped__(arg)
call(1)
call(1, True)
call(2, True)
call(1, True)
Output:
called with 1
called with 1
called with 2