Is it possible to use functools.lru_cache for caching a partial function created by functools.partial?
My problem is a function that takes hashable parameters and constant, non-hashable objects such as NumPy arrays.
Consider this toy example:
import numpy as np
from functools import lru_cache, partial

def foo(key, array):
    print('%s:' % key, array)

a = np.array([1, 2, 3])
Since NumPy arrays are not hashable, this will not work:
@lru_cache(maxsize=None)
def foo(key, array):
    print('%s:' % key, array)

foo(1, a)
As expected, you get the following error:
/Users/ch/miniconda/envs/sci34/lib/python3.4/functools.py in __init__(self, tup, hash)
349 def __init__(self, tup, hash=hash):
350 self[:] = tup
--> 351 self.hashvalue = hash(tup)
352
353 def __hash__(self):
TypeError: unhashable type: 'numpy.ndarray'
So my next idea was to use functools.partial to get rid of the NumPy array (which is constant anyway):

pfoo = partial(foo, array=a)
pfoo(2)
So now I have a function that only takes hashable arguments and should be perfect for lru_cache. But is it possible to use lru_cache in this situation? I cannot use it as a wrapping function instead of the @lru_cache decorator, can I? Is there a clever way to solve this?
Python's functools module provides the @lru_cache decorator, which caches a function's results using the Least Recently Used (LRU) strategy. It is a simple yet powerful way to add memoization to any function that returns the same result each time it is called with the same inputs. Note that the cache lives in a single Python process only: if you run multiple subprocesses, or run the same script repeatedly, the cache is not shared between runs.

Each time the decorated function is called, lru_cache checks whether a result is cached for the given arguments and, if so, returns it without re-executing the function. See the lru_cache() documentation for details; also note that all the decorators in this module are thread-safe by default.
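The behaviour described above can be seen in a minimal, self-contained sketch (the `square` function and its inputs are illustrative, not from the question):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(x):
    # Stand-in for an expensive computation; results are memoized per x.
    return x * x

print(square(3))             # computed: cache miss
print(square(3))             # served from the cache: cache hit
print(square.cache_info())   # CacheInfo(hits=1, misses=1, ...)
```

cache_info() is the standard way to verify that the cache is actually being used.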
Since the array is constant, you can use a wrapper (a closure) around the actual lru_cache-decorated function and simply pass the key value to it:
from functools import lru_cache
import numpy as np

def lru_wrapper(array=None):
    # The array is captured by the closure, so the cached
    # function only ever sees the hashable key argument.
    @lru_cache(maxsize=None)
    def foo(key):
        return '%s:' % key, array
    return foo

arr = np.array([1, 2, 3])
func = lru_wrapper(array=arr)

for x in [0, 0, 1, 2, 2, 1, 2, 0]:
    print(func(x))
print(func.cache_info())
Outputs:
('0:', array([1, 2, 3]))
('0:', array([1, 2, 3]))
('1:', array([1, 2, 3]))
('2:', array([1, 2, 3]))
('2:', array([1, 2, 3]))
('1:', array([1, 2, 3]))
('2:', array([1, 2, 3]))
('0:', array([1, 2, 3]))
CacheInfo(hits=5, misses=3, maxsize=None, currsize=3)
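To address the question's other point directly: lru_cache(maxsize=None) just returns a decorator, i.e. an ordinary function, so it can also be applied as a plain wrapping call around the partial object itself. A minimal sketch of that variant (the array is bound inside the partial, so it never enters the cache key):

```python
from functools import lru_cache, partial
import numpy as np

def foo(key, array):
    return '%s:' % key, array

a = np.array([1, 2, 3])

# lru_cache used as a plain function, not a decorator; only `key`
# is hashed, because `array` is already bound inside the partial.
cached_foo = lru_cache(maxsize=None)(partial(foo, array=a))

for x in [0, 0, 1]:
    print(cached_foo(x))
print(cached_foo.cache_info())  # CacheInfo(hits=1, misses=2, ...)
```

This avoids the extra wrapper function, at the cost of cached_foo not carrying foo's name and docstring (functools.update_wrapper silently skips attributes a partial does not have).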