I'm new to Google App Engine, and I've spent the last few days building an app using GAE's Memcache to store data. Based on my initial findings, it appears as though GAE's Memcache is NOT global?
Let me explain further. I'm aware that different requests to GAE can potentially be served by different instances (in fact this appears to happen quite often). It is for this reason that I'm using Memcache to store some shared data, as opposed to a static Map. I thought (perhaps incorrectly) that this was the point of using a distributed cache: so that data could be accessed by any node.
Another definite possibility is that I'm doing something wrong. I've tried both JCache and the low-level Memcache API (I'm writing Java, not Python). This is what I'm doing to retrieve the cache:
MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
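For context, this is roughly how I then put and get values with that service (the key and value below are just placeholders, not my real data):
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Handle to the app-wide memcache service (default namespace)
MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

// Store a value; any Serializable object is accepted
cache.put("some-key", "some-value");

// Read it back; this may run on a different instance than the one that did the put
String value = (String) cache.get("some-key");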
After deployment, this is what I observe (via my application logs):
Now I also know that there is no guarantee of how long data will stay in Memcache, but from my findings it appears the data is gone the moment a different instance tries to access the cache. This seems to go against the whole concept of a distributed global cache, no?
Hopefully someone can clarify exactly how this SHOULD behave. If Memcache is NOT supposed to be global and every server instance has its own copy, then why even use Memcache? I could simply use a static HashMap (which I initially did, until I realized it wouldn't be global because different instances serve my requests).
Help?
Shared memcache is the free default for App Engine applications. It provides cache capacity on a best-effort basis and is subject to the overall demand of all the App Engine applications using the shared memcache service. Dedicated memcache provides a fixed cache capacity assigned exclusively to your application.
Memcache is a high-performance, distributed memory object caching system that provides fast access to cached data.
Yes, Memcache is shared across all instances of your app.
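A quick way to see this for yourself is a shared counter: every instance that handles a request bumps the same entry via the low-level API. This is just a minimal sketch (the key name is arbitrary), not production code:
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Every instance gets a handle to the same app-wide cache
MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

// increment() is atomic; the third argument seeds the counter when the key is missing
Long hits = cache.increment("request-count", 1L, 0L);

// The count keeps growing no matter which instance serves the request,
// unless the entry is evicted (memcache makes no persistence guarantee)
System.out.println("requests seen so far: " + hits);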
I found the issue and got it working. I was initially using the JCache API and couldn't get it to work, so I switched over to the low-level Memcache API but forgot to remove the old JCache code. So the two implementations were stepping on each other.
I'm not sure why the JCache implementation didn't work, so I'll share the code:
try {
    // Look up the cache by name; create and register it if it doesn't exist yet
    if (CacheManager.getInstance().getCache(CACHE_GEO_CLIENTS) == null) {
        Cache cache = CacheManager.getInstance().getCacheFactory().createCache(Collections.emptyMap());
        // Store a single HashMap entry so all keys can be enumerated later
        cache.put(CACHE_GEO_CLIENTS, new HashMap<String, String>());
        CacheManager.getInstance().registerCache(CACHE_GEO_CLIENTS, cache);
    }
} catch (CacheException e) {
    log.severe("Exception while creating cache: " + e);
}
This block of code is inside a private constructor for a singleton called CacheService. This singleton serves as a cache facade. Note that since requests can be served by different nodes, each node will have its own instance of this Singleton. So when the Singleton is constructed for the first and only time, it'll check to see if my cache is available; if not, it'll create it. This should technically happen only once, since Memcache is global, yeah? The other somewhat odd thing I'm doing here is creating a single cache entry of type HashMap to store my actual values. I'm doing this because I need to enumerate through all keys, and that's something I can't do with Memcache natively.
What am I doing wrong here?
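For comparison, the low-level version that does work for me looks roughly like this (the method names here are made up for the snippet; CACHE_GEO_CLIENTS is the same constant as above, and this lives inside the CacheService singleton):
import java.util.HashMap;
import java.util.Map;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

// The whole map lives under one key so its keys can be enumerated later
@SuppressWarnings("unchecked")
public Map<String, String> getGeoClients() {
    Map<String, String> clients = (Map<String, String>) cache.get(CACHE_GEO_CLIENTS);
    if (clients == null) {
        clients = new HashMap<String, String>();
        cache.put(CACHE_GEO_CLIENTS, clients);
    }
    return clients;
}

// Mutations have to be written back explicitly: memcache returns a copy, not a live reference
public void addGeoClient(String clientId, String value) {
    Map<String, String> clients = getGeoClients();
    clients.put(clientId, value);
    cache.put(CACHE_GEO_CLIENTS, clients);
}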