I am trying to understand the basic concepts of the distributed cache and its usage.
Firstly, is a distributed cache a cluster of machines that together act as one big cache for all the clients, or do clients keep their local cache while one coordinator simply sends updates to all the clients telling them how to sync their LOCAL copy?
Secondly, if the cache is a distributed set of machines that maintain the cached data, why would we not simply send a query to the DB directly rather than sending the request over the network to the cache? I guess the performance overhead might be similar...
Finally, what is the primary benefit of a distributed cache, i.e. why do people not stick to the traditional, local cache model?
Thanks a lot for all the answers/resources you might provide.
I'll use Couchbase as an example of a distributed cache (http://www.couchbase.com/).
First question: How does a distributed cache coordinate data?
Answer: Usually the distributed cache is indeed many machines acting as one logical unit. So you might have five computers all running Couchbase, and they take care of data integrity and redundancy for you. In other words, if one machine dies, you can still get your data from the cluster. (And yes, each node keeps a copy of the data in case of failures.)
Some clusters put a process in front of the machines to route requests; in other setups you give the client multiple connection strings and it round-robins requests across the cluster. It just depends on the technology.
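To make the routing idea concrete, here is a minimal Python sketch of one common client-side scheme: hashing each key to a node so every client picks the same node without a central coordinator. The node addresses are made up, and a real client (such as the Couchbase SDK) does this for you:

```python
import hashlib

# Hypothetical node addresses -- in practice these come from your cluster config.
NODES = [
    "cache1.example.com:11210",
    "cache2.example.com:11210",
    "cache3.example.com:11210",
]

def node_for_key(key: str) -> str:
    """Hash the key to pick a node, so every client maps the same key
    to the same node without asking a central coordinator."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

print(node_for_key("user:42"))   # always routes to the same node
print(node_for_key("user:43"))   # may route to a different node
```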
Second question: Why use a cache since it all goes over the network?
Answer: Quite a few of the distributed cache technologies out there live solely in RAM/memory. They never have to go to disk for a query, so they are faster than a typical database.
Also, databases often have to do work to join data together from multiple tables, whereas a cache usually just stores data as key/value pairs. This means the cache never has to actually process anything; it just does straight lookups, which are cheap.
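As a toy illustration (not a real cache client), a cache get is essentially one hash-table lookup on a precomputed, denormalized value:

```python
# A cache behaves like a big dictionary: one hash lookup, no query planning, no joins.
cache = {}

def get_user_profile(user_id: int):
    # O(1) key/value lookup -- compare with a DB query that may parse SQL,
    # plan the query, hit disk, and join several tables.
    return cache.get(f"user_profile:{user_id}")

# The value is stored already joined/denormalized, so reads do no extra work.
cache["user_profile:42"] = {"name": "Alice", "plan": "pro"}
print(get_user_profile(42))
```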
Third question: Why a distributed cache over local caches?
Answer: When you start to scale you will want a distributed cache.
First of all, the cache can grow quite large, and if it runs only in memory it will compete with your web server (or whatever else) for resources. Better to have a machine dedicated to caching.
Secondly, the cache will scale differently from the other technologies in your stack. You might need only four cache nodes for every ten web server nodes. Better to separate them.
Lastly, you want any client to be able to connect and get the most current data. Otherwise, if a user bounces from one web server to another in a web farm, the cached data could be quite different.
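A toy illustration of that last point, with plain dictionaries standing in for the caches:

```python
# Two web servers with their own local caches can serve stale, inconsistent data;
# pointing both at one shared (distributed) cache avoids that.

local_cache_a = {"price:sku1": 10}   # web server A cached the old price
local_cache_b = {"price:sku1": 12}   # web server B cached the updated price

shared_cache = {"price:sku1": 12}    # one distributed cache: a single source of truth

def read_price(cache: dict) -> int:
    return cache["price:sku1"]

print(read_price(local_cache_a), read_price(local_cache_b))  # 10 12 -> users see different prices
print(read_price(shared_cache), read_price(shared_cache))    # 12 12 -> consistent everywhere
```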
To answer your second question (based on your response to Ryan1234): yes, you have to connect to the cache servers, and if you had a DB you would have to connect to that as well, but it's where the data is retrieved from that makes the difference in performance: a DB is disk based, while a distributed cache is RAM/memory based.

The reason people rely on caching is that a DB has limited resources, particularly connections: the more connections and the more calls you make to the DB, the slower it gets, and the DB becomes a bottleneck. To relieve this stress on the DB, a caching tier sits "on top" of the DB and stores frequently accessed objects in memory (depending on whether your application's data is transactional or reference data), so your application no longer needs to go to the DB to get these objects.

One important feature of a cache is its ability to scale linearly as the load on your application increases or as your application scales. Essentially, you can add more servers to the caching tier, and these servers pool their memory resources and give a performance boost.
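Here is a minimal sketch of that cache-aside idea in Python; the dictionary stands in for the distributed cache client, and query_db is just a placeholder for the real (slow, connection-limited) database call:

```python
import time

cache = {}          # key -> (expiry_timestamp, value); stands in for the distributed cache
TTL_SECONDS = 60    # how long a cached object stays fresh

def query_db(user_id: int) -> dict:
    # Placeholder for a real database round trip (disk, connections, joins).
    return {"id": user_id, "name": "Alice"}

def get_user(user_id: int) -> dict:
    """Cache-aside: check the cache first, fall back to the DB on a miss,
    then populate the cache so later requests skip the DB entirely."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                      # cache hit: no DB connection used
    value = query_db(user_id)                # cache miss: one DB round trip
    cache[key] = (time.time() + TTL_SECONDS, value)
    return value

print(get_user(7))   # first call hits the DB
print(get_user(7))   # second call is served from memory
```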
The second part of your question is more about a local cache versus a distributed cache. Some caching solutions, like NCache, provide a "client cache" that keeps a subset of the data your application requires on the same server as the application, so your application does not have to make over-the-network calls for that data. At the same time, this client cache is kept synchronized with the main cache.
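Conceptually, a client cache works something like the sketch below (this is just the idea, not NCache's actual API):

```python
# A "client cache" (near cache): a small in-process cache in front of the
# distributed cache, so hot keys avoid a network hop.

main_cache = {"config:theme": "dark"}   # stands in for the remote distributed cache
near_cache = {}                          # lives inside the application's own process

def get(key: str):
    if key in near_cache:                # served locally, no network call
        return near_cache[key]
    value = main_cache.get(key)          # otherwise fetch over the network
    if value is not None:
        near_cache[key] = value          # keep a local copy for next time
    return value

def invalidate(key: str) -> None:
    """Called when the main cache notifies clients that a key changed,
    which is how the local copy stays synchronized."""
    near_cache.pop(key, None)
```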
If you want to read more detail on this, then read Scalable WCF Applications Using Distributed Caching.