I have a function that returns a list, say list_x.
def result(val):
    ...
    return list_x
I am calling result() every minute and storing the list.
def other_func():
    # called every minute
    new_list = result(val)
I would like to store the value of new_list for an hour (in some sort of in-memory cache, maybe?) and then refresh it, i.e. call result() after an hour instead of every minute. I read about functools.lru_cache, but I don't think it will help here. Any ideas?
One way to implement an LRU cache in Python is to use a combination of a doubly linked list and a hash map. The head element of the doubly linked list would point to the most recently used entry, and the tail would point to the least recently used entry.
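As a minimal sketch of that idea (in CPython, collections.OrderedDict is itself backed by a doubly linked list, so it can stand in for the hand-rolled list-plus-map pair):

from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: OrderedDict keeps entries in recency order."""
    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used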
A related technique is memoization, which speeds things up by storing previously computed results in a cache; the function can then look up a result instead of running the computation again.
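A toy illustration, not from the question: a memoized Fibonacci that caches each result in a dict:

_fib_cache = {}

def fib(n):
    # Look up previously computed results instead of recursing again.
    if n in _fib_cache:
        return _fib_cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _fib_cache[n] = result
    return result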
An in-memory cache is a data storage layer that sits between an application and its database, delivering fast responses by holding data from earlier requests or copied directly from the database.
The ttl_cache decorator in cachetools==3.1.0 works a lot like functools.lru_cache, but with a time to live.
import cachetools.func

@cachetools.func.ttl_cache(maxsize=128, ttl=10 * 60)
def example_function(key):
    return get_expensively_computed_value(key)


class ExampleClass:
    EXP = 2

    @classmethod
    @cachetools.func.ttl_cache()
    def example_classmethod(cls, i):
        return i * cls.EXP

    @staticmethod
    @cachetools.func.ttl_cache()
    def example_staticmethod(i):
        return i * 3
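Applied to the question's result(), that might look like the sketch below (expensive_computation is a hypothetical stand-in for whatever actually builds list_x):

import cachetools.func

@cachetools.func.ttl_cache(maxsize=1, ttl=60 * 60)  # keep one entry for an hour
def result(val):
    list_x = expensive_computation(val)  # hypothetical placeholder
    return list_x

other_func() can then keep calling result(val) every minute; the cached list is returned until the hour is up, at which point the next call recomputes it.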
Building a single-element cache with a time-to-live is pretty trivial:
import datetime

_last_result_time = None
_last_result_value = None

def result(val):
    global _last_result_time
    global _last_result_value
    now = datetime.datetime.now()
    if not _last_result_time or now - _last_result_time > datetime.timedelta(hours=1):
        _last_result_value = <expensive computation here>
        _last_result_time = now
    return _last_result_value
If you want to generalize this as a decorator, it's not much harder:
import datetime
import functools

def cache(ttl=datetime.timedelta(hours=1)):
    def wrap(func):
        time, value = None, None
        @functools.wraps(func)
        def wrapped(*args, **kw):
            nonlocal time
            nonlocal value
            now = datetime.datetime.now()
            if not time or now - time > ttl:
                value = func(*args, **kw)
                time = now
            return value
        return wrapped
    return wrap
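With the decorator in place, the question's function needs only one extra line (expensive_computation is again a hypothetical placeholder). Note that this version keeps a single cached value regardless of arguments, which suits the question's single call site:

@cache()  # defaults to ttl=datetime.timedelta(hours=1)
def result(val):
    return expensive_computation(val)  # hypothetical placeholder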
If you want it to handle different arguments, storing a time-to-live for each one:
import datetime
import functools

def cache(ttl=datetime.timedelta(hours=1)):
    def wrap(func):
        cache = {}  # per-function store: key -> (time stored, value)
        @functools.wraps(func)
        def wrapped(*args, **kw):
            now = datetime.datetime.now()
            # see lru_cache for fancier alternatives
            key = tuple(args), frozenset(kw.items())
            if key not in cache or now - cache[key][0] > ttl:
                value = func(*args, **kw)
                cache[key] = (now, value)
            return cache[key][1]
        return wrapped
    return wrap
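A quick demonstration of the per-argument behavior (slow_double is a made-up stand-in, with a short TTL just for the demo):

import datetime
import time

@cache(ttl=datetime.timedelta(seconds=2))
def slow_double(x):
    print("computing", x)
    return x * 2

slow_double(3)   # computes and stores ("computing 3" is printed)
slow_double(3)   # cache hit: returns the stored value, no print
slow_double(4)   # different key, so it computes again
time.sleep(2.5)  # let the 2-second TTL lapse
slow_double(3)   # entry expired, recomputes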
You can of course keep adding features to it: give it a max size and evict by time of storage or by LRU or whatever else you want, expose cache stats as attributes on the decorated function, etc. The implementation of lru_cache in the stdlib should help show you how to do most of the trickier things (since it does almost all of them).
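For instance, a hypothetical variant of the per-argument decorator above that exposes hit/miss counters and a cache_clear hook (the attribute names here are made up for illustration, not lru_cache's actual API) might look like:

import datetime
import functools

def cache_with_stats(ttl=datetime.timedelta(hours=1)):
    def wrap(func):
        store = {}  # key -> (time stored, value)
        @functools.wraps(func)
        def wrapped(*args, **kw):
            now = datetime.datetime.now()
            key = tuple(args), frozenset(kw.items())
            if key not in store or now - store[key][0] > ttl:
                wrapped.misses += 1
                store[key] = (now, func(*args, **kw))
            else:
                wrapped.hits += 1
            return store[key][1]
        wrapped.hits = 0
        wrapped.misses = 0
        wrapped.cache_clear = store.clear  # evict everything on demand
        return wrapped
    return wrap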