I assume this is quite a frequent problem with well-known solutions, which I wasn't able to find, so I'm seeking advice here.
Consider the following setting:
class A; // some class
const A f(const A&); // an _expensive_ function
void do_stuff()
{
    A a;
    a.modify(...);
    do_stuff1(f(a)); // compute f(a)
    do_stuff2(f(a)); // use cached value of f(a)
    a.modify(...);
    do_stuff3(f(a)); // recompute f(a)
}
I would like the return value of f(a) to be cached between the first and second calls, but to be discarded after the second call to a.modify().
EDIT: In practice, the calls to f(a)
will be in different scopes.
Here are the pieces of solutions I've explored, for what it's worth.
I can imagine a simple solution involving adding a time stamp to class A that function f can check to decide whether it needs to update its cached result, stored somewhere in a central cache. I guess this also implies changing the signature of f to:
const A& f(const A&);
Problem 1: with a central cache, we need a mechanism to destroy the cached result of f(a) when a is destroyed.
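For concreteness, here is a rough sketch of the time-stamp variant, assuming A can be modified (the names VersionedA, CacheEntry, expensive_compute and g_cache are illustrative only, not from the original code); the destructor addresses Problem 1 by erasing the object's entry from the central cache:
#include <cstdint>
#include <unordered_map>

class VersionedA
{
public:
    void modify(/* args */)       { ++version_; /* ... mutate the actual state ... */ }
    std::uint64_t version() const { return version_; }
    ~VersionedA();                // defined below: erases this object's cache entry (Problem 1)
private:
    std::uint64_t version_ = 0;
    // ... the rest of A's state ...
};

// Stands in for the real, expensive f from the question.
VersionedA expensive_compute(const VersionedA& a) { /* expensive work */ return a; }

struct CacheEntry
{
    std::uint64_t version;   // version of the input when the result was computed
    VersionedA    value;     // cached result of expensive_compute
};

// Central cache keyed by object address; an entry is valid only while its stored
// version matches the object's current version.
static std::unordered_map<const VersionedA*, CacheEntry> g_cache;

const VersionedA& f(const VersionedA& a)
{
    auto it = g_cache.find(&a);
    if (it == g_cache.end() || it->second.version != a.version())
        it = g_cache.insert_or_assign(&a, CacheEntry{a.version(), expensive_compute(a)}).first;
    return it->second.value;
}

// Problem 1: when a is destroyed, its cache entry goes with it.
VersionedA::~VersionedA() { g_cache.erase(this); }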
Aside from Problem 1, this seems simple enough. But it gets complicated when A stands for std::vector<...>. I guess dynamic polymorphism should be excluded here, so we can forget about adding a time stamp to a subclass of std::vector<...> and all the overriding that it would imply. However, we could compute some hash code or UUID based on the contents of a (assuming that is much cheaper than computing f(a)) and base the central cache on these hash codes. But then we're facing Problem 1 again.
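A rough sketch of that hash-based variant (hash_contents, expensive_compute and g_cache are made-up names, and the element type is arbitrary) could look like this:
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

using A = std::vector<double>;   // illustrative element type

// Cheap content hash; this only pays off if it is much cheaper than f itself.
std::size_t hash_contents(const A& a)
{
    std::size_t h = a.size();
    for (double x : a)
        h ^= std::hash<double>{}(x) + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
    return h;
}

// Stands in for the real, expensive f from the question.
A expensive_compute(const A& a) { /* expensive work */ return a; }

// Central cache keyed by content hash. Caveats: a hash collision silently returns a
// wrong result (a real version should also store and compare the key vector), and
// since the destruction of a cannot be observed, entries must be evicted by some
// size or LRU policy instead; this is Problem 1 showing up again.
static std::unordered_map<std::size_t, A> g_cache;

const A& f(const A& a)
{
    const std::size_t key = hash_contents(a);
    auto it = g_cache.find(key);
    if (it == g_cache.end())
        it = g_cache.emplace(key, expensive_compute(a)).first;
    return it->second;
}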
I still haven't found how to implement this, but the idea is to have a notify the cache for f(a) when a is written to or destroyed, but not when it is merely read from. I can't figure out how to do that without dynamic polymorphism, and without slowing down single-element accesses through operator[] or iterators by sending a notification to the cache for each modified element.
Problem 2: find a mechanism for delimiting sets of changes to a, so that the cache is invalidated only once per set of changes.
I've thought of proxies to control write access to a (inspired by the concept of a mutex), but couldn't come up with any working code.
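A rough sketch of what such a proxy might look like (CachedVector and WriteLock are illustrative names, not existing types): reads go through the const interface with no notification cost, and writes go only through a scope-bound proxy whose destructor bumps a version counter exactly once per batch of modifications, which would address Problem 2.
#include <cstdint>
#include <vector>

class CachedVector
{
public:
    class WriteLock
    {
    public:
        explicit WriteLock(CachedVector &owner) : owner_(owner) { }
        ~WriteLock() { ++owner_.version_; }                     // one invalidation per batch of writes
        std::vector<double> &data() { return owner_.data_; }    // full mutable access while the lock exists
    private:
        CachedVector &owner_;
    };

    const std::vector<double> &data() const { return data_; }   // reads: no notification overhead
    std::uint64_t version() const { return version_; }          // what f's cache compares against
    WriteLock write() { return WriteLock(*this); }               // writes only through the proxy

private:
    std::vector<double> data_;
    std::uint64_t version_ = 0;
};

// All element writes inside one scope count as a single set of changes (Problem 2).
void example(CachedVector &a)
{
    {
        auto lock = a.write();
        lock.data().push_back(1.0);
        lock.data().push_back(2.0);
    }   // version_ is bumped exactly once here
}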
Any ideas?
I've done similar stuff with interfaces like this:
class F
{
public:
    virtual ~F() = default;
    virtual int f(int a) = 0;
};

class Cache : public F
{
public:
    Cache(F &impl) : impl(impl) { }
    int f(int a) override { /* caching logic here, calls impl.f(a) if the result is not found in the cache */ }
private:
    F &impl;
};

class Impl : public F
{
public:
    int f(int a) override { /* real implementation here */ }
};
Then it's just deciding where to use the caching logic:
Impl i;
Cache c(i);
c.f(10); // put to cache with key 10
c.f(10); // found from cache
c.f(11); // put to cache with key 11
Can't you just do this:
const A &cacheA = f(a); // compute f(a) once
do_stuff1(cacheA);      // use cached value of f(a)
do_stuff2(cacheA);      // use cached value of f(a)
I'm probably missing some important detail here, but can't you just use an LRU cache for this purpose?
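A minimal sketch of such a cache, for what it's worth (LruCache, its capacity and the compute callback are illustrative, not a specific library); for the question's setting you would still need a key that reflects the current contents of a, e.g. a content hash:
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class LruCache
{
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the cached value for key, calling compute(key) on a miss.
    // The returned reference stays valid only until the entry is evicted.
    template <typename Compute>
    const Value& get(const Key& key, Compute compute)
    {
        auto it = index_.find(key);
        if (it != index_.end())
        {
            entries_.splice(entries_.begin(), entries_, it->second);  // hit: move to MRU position
            return it->second->second;
        }
        entries_.emplace_front(key, compute(key));   // miss: compute and insert at the front
        index_[key] = entries_.begin();
        if (entries_.size() > capacity_)             // evict the least recently used entry
        {
            index_.erase(entries_.back().first);
            entries_.pop_back();
        }
        return entries_.front().second;
    }

private:
    using Entry = std::pair<Key, Value>;
    std::list<Entry> entries_;                                            // most recent at the front
    std::unordered_map<Key, typename std::list<Entry>::iterator> index_;  // key -> position in the list
    std::size_t capacity_;
};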
Make f a member of A. Then you can decide inside the instance of A whether you can reuse the cached result or not.
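A minimal sketch of that suggestion, assuming A can be changed (Result, compute_f and the std::optional member are illustrative, not from the question):
#include <optional>

class A
{
public:
    struct Result { /* whatever f produces */ };

    void modify(/* args */)
    {
        // ... mutate the object's state ...
        cached_.reset();             // invalidate exactly when the object changes
    }

    const Result &f() const
    {
        if (!cached_)
            cached_ = compute_f();   // expensive; done at most once per modification
        return *cached_;
    }

private:
    Result compute_f() const { return Result{}; /* stands in for the real, expensive work */ }

    mutable std::optional<Result> cached_;   // mutable: the cache is not part of the logical state
};
The original snippet then becomes do_stuff1(a.f()), do_stuff2(a.f()) (cache hit) and, after a.modify(...), do_stuff3(a.f()) (recomputed), with no central cache and hence no Problem 1, since the cached result lives and dies with a.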