I am confused about the concept of a distributed cache. I kinda know what it is from searching around: a distributed cache may span multiple servers so that it can grow in size and in transactional capacity. However, I do not really understand how it works or how it distributes the data.
For example, let's say we have data items 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and two cache servers, A and B. If we use a distributed cache, then one possible arrangement is that items 1, 3, 5, 7, 9 are stored on Cache Server A, and 2, 4, 6, 8, 10 are stored on Cache Server B.
So is this correct or did I misunderstand it?
My second question is about the term server node, which I hear a lot. What is it? In the above example, Server A is a server node, right?
Third question: if a server (let's say Server A) goes down, what can we do about that? If my example above is correct, we cannot get data 1, 3, 5, 7, 9 from the cache while Server A is down, so what does the distributed cache do in this case?
A distributed cache is a system that pools the random-access memory (RAM) of multiple networked computers into a single in-memory data store, used as a cache to provide fast access to data.
What is distributed caching? A cache is a component that stores data so future requests for that data can be served faster. By keeping commonly used application data in memory, it provides high-throughput, low-latency access to that data.
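To make this concrete, here is a minimal cache-aside sketch in Java. The `ConcurrentHashMap` stands in for the cache, and `loadFromDatabase` is a hypothetical placeholder for the slower source of truth:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideExample {
    // In-memory cache; a distributed cache plays the same role,
    // but spreads the entries across several servers.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        // Serve from the cache when possible (fast path).
        String value = cache.get(key);
        if (value == null) {
            // Cache miss: fall back to the slow source of truth.
            value = loadFromDatabase(key);
            cache.put(key, value);
        }
        return value;
    }

    // Hypothetical placeholder for the real backing store.
    private String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}
```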
Redis is an open source in-memory data store, which is often used as a distributed cache. For example, you can configure an Azure Cache for Redis for an Azure-hosted ASP.NET Core app and also use a Redis cache during local development.
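As a minimal Java sketch, using the open-source Jedis client and assuming a Redis server is reachable on localhost:6379, caching a value in Redis might look like this:

```java
import redis.clients.jedis.Jedis;

public class RedisCacheDemo {
    public static void main(String[] args) {
        // Connect to a Redis server assumed to run on localhost:6379.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Store a value with a 60-second time-to-live, so stale
            // entries are purged automatically.
            jedis.setex("user:42", 60, "Alice");

            // Later requests read the cached value back.
            String cached = jedis.get("user:42");
            System.out.println("cached value: " + cached);
        }
    }
}
```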
Within the Object Caching Service for Java, each cache manages its own objects locally within its Java VM process. In distributed mode, when using multiple processes or when the system is running on multiple sites, a copy of an object may exist in more than one cache.
Yes, half the data on Server A and half on Server B would be a distributed cache. There are many ways to distribute the data, though some sort of hashing of the keys seems to be the most popular, as sketched below.
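Here is a toy Java sketch of that hashing idea: each key is assigned to one of two servers by taking its hash modulo the server count. The server names are invented for the example, and exactly which items land on which server depends on the hash function; the point is the roughly even split.

```java
import java.util.List;

public class HashDistribution {
    public static void main(String[] args) {
        List<String> servers = List.of("CacheServerA", "CacheServerB");

        for (int key = 1; key <= 10; key++) {
            // Math.floorMod avoids negative indexes for negative hash codes.
            int index = Math.floorMod(Integer.hashCode(key), servers.size());
            System.out.println("Data " + key + " -> " + servers.get(index));
        }
    }
}
```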
The terms server and node are generally interchangeable. A node is generally a single unit of some collection, often called a cluster, while a server is generally a single piece of hardware. In Erlang, you can run multiple instances of the Erlang runtime on a single server, and thus have multiple Erlang nodes, but generally you'd want one node per server for more optimal scheduling. (For non-distributed languages and platforms you have to manage your processes based on your needs.)
If a server goes down, and it is a cache server, then the data would have to come from its original source. A cache is usually a memory-based database designed for quick retrieval; the data in it sticks around only as long as it's being used regularly, and eventually it will be purged. But for distributed systems where you need persistence, a common technique is to keep multiple copies. For example, you have servers A, B, C, D, E, and F. Data 1 would be put on A, with copies on B and C; data 2 could be on B, with copies on C and D. Couchbase and Riak do this. That way, if any one server goes down, you still have two copies. A rough sketch of this placement rule follows below.
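Here is that sketch in Java, assuming the six servers are kept in a fixed order and each item is stored on a primary plus the next two servers in that order (all names and the placement rule are illustrative, not how Couchbase or Riak actually work internally):

```java
import java.util.List;

public class ReplicaPlacement {
    private static final List<String> SERVERS =
            List.of("A", "B", "C", "D", "E", "F");

    // Returns the servers that hold a given item: the primary
    // plus the next two servers in order, wrapping around.
    static List<String> serversFor(int dataId) {
        int primary = Math.floorMod(dataId - 1, SERVERS.size());
        return List.of(
                SERVERS.get(primary),
                SERVERS.get((primary + 1) % SERVERS.size()),
                SERVERS.get((primary + 2) % SERVERS.size()));
    }

    public static void main(String[] args) {
        System.out.println("Data 1 lives on " + serversFor(1)); // [A, B, C]
        System.out.println("Data 2 lives on " + serversFor(2)); // [B, C, D]
    }
}
```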
I have been using distributed caching solutions for quite some time now (NCache, AppFabric, etc.), and I will answer all three questions based on my experience with distributed caching.
1: A distributed caching solution allows you to keep data on all the servers by creating a cache cluster. Let's say you have 2 cache servers (server nodes) and you have added 10 items to your cache. Ideally, 5 items should be present on each of the server nodes, since the data load gets distributed across the servers in your cache cluster. This is usually achieved with the help of hashing and intelligent data distribution algorithms. As a result, your data request load also gets divided between all the cache servers, and you achieve linear growth in transactional capacity as you add more servers to the cache cluster.
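One widely used example of such a data distribution algorithm (not necessarily what NCache or AppFabric use internally) is consistent hashing: servers and keys are placed on a hash ring, so adding or removing a node only moves a small fraction of the keys. A minimal Java sketch, with a single ring position per server, no virtual nodes, and invented server names:

```java
import java.util.Map;
import java.util.TreeMap;

public class ConsistentHashRing {
    // Ring positions -> server names, kept sorted by position.
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addServer(String server) {
        // A real implementation would hash each server to many
        // virtual positions; one position keeps the sketch short.
        ring.put(server.hashCode(), server);
    }

    // A key belongs to the first server at or after its hash,
    // wrapping around to the start of the ring if none follows.
    String serverFor(String key) {
        Map.Entry<Integer, String> owner = ring.ceilingEntry(key.hashCode());
        return owner != null ? owner.getValue() : ring.firstEntry().getValue();
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addServer("ServerA");
        ring.addServer("ServerB");
        System.out.println("data-7 -> " + ring.serverFor("data-7"));

        // Adding a node only remaps the keys that fall between the
        // new node and its predecessor; most keys stay put.
        ring.addServer("ServerC");
        System.out.println("data-7 -> " + ring.serverFor("data-7"));
    }
}
```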
2: A cache cluster can contain many server machines, which are also called server nodes. Yes, in your example Server A is a server node (server machine).
3: Distributed caching systems are typically made very reliable through replication support. If one or more servers go down and you had replication turned on, there will be no data loss or downtime. NCache has different topologies to tackle this, such as the Replicated topology and the Partitioned-Replica topology, where the data of each server is also replicated to another server. If one server goes down, the replicated copy of its data is automatically made available from a surviving server node.
In your example, the data on Server A (1, 3, 5, 7, 9) is replicated to Server B (which also holds its own 2, 4, 6, 8, 10), and vice versa. If Server A goes down, the copy of its data held on Server B is made available and used from there, so no data loss occurs. So if Server A goes down and the application requests data 1, it will be retrieved from Server B, because Server B holds a backup of all of Server A's data. This is seamless to your applications and is managed automatically by the caching system.
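Roughly, the client-side failover behavior could look like the following sketch. `CacheNode`, `serverA`, and `serverB` are hypothetical stand-ins for the example, not the actual API of NCache or any other product:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class FailoverReadSketch {
    // Hypothetical stand-in for one cache server holding a partition.
    static class CacheNode {
        private final Map<Integer, String> data = new ConcurrentHashMap<>();
        private volatile boolean up = true;

        void put(int key, String value) { data.put(key, value); }
        void crash() { up = false; }

        Optional<String> get(int key) {
            if (!up) throw new IllegalStateException("node is down");
            return Optional.ofNullable(data.get(key));
        }
    }

    public static void main(String[] args) {
        CacheNode serverA = new CacheNode();
        CacheNode serverB = new CacheNode();

        // Server A owns data 1; Server B holds the replica,
        // mirroring the partitioned-replica idea described above.
        serverA.put(1, "value-1");
        serverB.put(1, "value-1"); // backup copy

        serverA.crash();

        // The client transparently falls back to the replica.
        String value;
        try {
            value = serverA.get(1).orElse(null);
        } catch (IllegalStateException primaryDown) {
            value = serverB.get(1).orElse(null);
        }
        System.out.println("data 1 = " + value); // served from Server B
    }
}
```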