
How to implement a Persistent Cache in Siesta with a structured model layer

I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.

I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory), storing them in Realm. I am not sure how to do this because the documentation says (about the EntityCache.writeEntity function):

This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.

In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:

class _GFSFAPI: Service {
    private init() {
        // (super.init(baseURL:) call omitted from the original snippet)
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}

However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.

Any suggestions about how to approach this would be greatly appreciated.

asked Nov 07 '15 by annicaburns

1 Answer

Excellent question. Having a separate EntityCache implementation for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.

Models in the Cache

Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)

Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
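
To make both directions concrete, here is a rough sketch of what a Realm-backed cache for a single model type might look like. It is written against a recent Siesta and RealmSwift rather than the 2015-era API in the question; ChallengeModel, its siestaKey field, and SiestaRealmChallengeCache are assumed names, so check your Siesta version's EntityCache protocol for the exact requirements (e.g. whether updateEntityTimestamp also needs an implementation). The sketch simply stores the opaque cache key on the model itself, which is the siestaKey approach discussed in the next section.

import Foundation
import RealmSwift
import Siesta

// Hypothetical Realm model of the kind the response transformers might produce.
final class ChallengeModel: Object {
    @objc dynamic var id = ""
    @objc dynamic var title = ""
    @objc dynamic var siestaKey = ""    // where the opaque Siesta cache key is stored

    override static func primaryKey() -> String? { "id" }
}

// Sketch of a Realm-backed cache for a single model type.
final class SiestaRealmChallengeCache: EntityCache {
    // Siesta calls these methods on workQueue, so each call opens its own Realm.
    let workQueue = DispatchQueue(label: "SiestaRealmChallengeCache")

    // The key is treated as opaque: stored and compared, never parsed.
    func key(for resource: Resource) -> String? {
        resource.url.absoluteString
    }

    func writeEntity(_ entity: Entity<Any>, forKey key: String) {
        // Type-based rule: ignore anything the transformers didn't turn into our model.
        guard let model = entity.content as? ChallengeModel else { return }

        // Persist a copy so the in-memory model Siesta keeps isn't confined to this queue.
        let copy = ChallengeModel(value: model)
        copy.siestaKey = key

        do {
            let realm = try Realm()
            try realm.write { realm.add(copy, update: .modified) }
        } catch {
            print("Realm cache write failed: \(error)")  // a cache failure, not a request failure
        }
    }

    func readEntity(forKey key: String) -> Entity<Any>? {
        guard
            let realm = try? Realm(),
            let stored = realm.objects(ChallengeModel.self)
                .filter("siestaKey == %@", key).first
        else { return nil }

        // Hand back a detached copy so observers on other threads can use it freely.
        let detached = ChallengeModel(value: stored)
        return Entity<Any>(content: detached, contentType: "application/json")
    }

    func removeEntity(forKey key: String) {
        guard let realm = try? Realm() else { return }
        let stale = realm.objects(ChallengeModel.self).filter("siestaKey == %@", key)
        try? realm.write { realm.delete(stale) }
    }
}

Note that the sketch persists and returns detached copies: once an object has been added to a Realm it is confined to the thread that opened that Realm, while Siesta hands cached entities to observers on the main thread.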

Cache Lookup Keys

The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.

The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:

  1. keeping a Realm model dedicated to a polymorphic mapping from Siesta cache keys to Realm objects of various types (sketched just after this list),
  2. adding a siestaKey attribute to each model and doing some kind of union query across models, or
  3. keeping a (cache key) → (model type, model ID) mapping outside of Realm.
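
For option 1, the mapping record can be as small as a Realm object that stores the opaque Siesta key plus enough information to find the real model again. A rough sketch, reusing the hypothetical ChallengeModel from the earlier cache sketch (all names here are made up):

import RealmSwift

// Hypothetical mapping record: one row per cached Siesta entity (option 1).
final class SiestaCacheEntry: Object {
    @objc dynamic var siestaKey = ""       // opaque key handed to the cache by Siesta
    @objc dynamic var modelTypeName = ""   // e.g. "ChallengeModel"
    @objc dynamic var modelID = ""         // primary key of the cached model

    override static func primaryKey() -> String? { "siestaKey" }
}

// Look up whatever model a key points to, one branch per model type the cache knows about.
func cachedModel(forKey key: String, in realm: Realm) -> Object? {
    guard let entry = realm.object(ofType: SiestaCacheEntry.self, forPrimaryKey: key)
    else { return nil }

    switch entry.modelTypeName {
    case "ChallengeModel":
        return realm.object(ofType: ChallengeModel.self, forPrimaryKey: entry.modelID)
    default:
        return nil
    }
}

The write side would then add the model and its SiestaCacheEntry in the same write transaction, and readEntity would wrap whatever cachedModel(forKey:in:) returns in an Entity just as in the earlier sketch.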

I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here, using Realm as the backing for an EntityCache. Once you’ve sussed out the options, I’d encourage you to file a GitHub issue for any suggested API improvements.

answered Oct 19 '22 by Paul Cantrell