 

Need an efficient in-memory cache that can process 4k to 7k lookups or writes per second

I have an efficient C# application that receives 80-byte records at a rate of 5k to 10k per second on a multithreaded CPU.

I now need to set up an in-memory cache to detect and filter duplicate records so I can suppress them from travelling further in the pipeline.

Cache specs (maximum thresholds)

  • 80 bytes of data per record
  • 10,000 records / second
  • 60 seconds of cache = key quantity = 600,000
  • (subtotal: 48,000,000 bytes = 48 MB)
  • Ideal cache size = 5 minutes (or 240 MB)
  • Acceptable runtime cache size bloat = 1 GB

Question

What is the best way to set up an in-memory cache (dictionary, hashtable, array, etc.) that allows the most efficient lookups, purges old cache data, and prevents expiration of data that is being hit?

I looked at ASP.NET Cache and System.Runtime.Caching.MemoryCache, but I think I need something more lightweight and customized to achieve the required throughput. I'm also looking at the System.Collections.Concurrent namespace and this related whitepaper.

Does anyone have suggestions on what the best approach would be?

asked May 12 '12 by makerofthings7

1 Answer

Remember, don't prematurely optimise!

There may be a reasonably concise way of doing this without resorting to unmanaged code, pointers and the like.

A quick test on my old, ordinary laptop shows that you can add 1,000,000 entries to a HashSet while removing 100,000 entries in ~100ms. You can then repeat that with the same 1,000,000 values in ~60ms. This is for working with just longs - 80-byte data structures are obviously larger, but a simple benchmark is in order.
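As a rough illustration, here is a minimal sketch of that kind of micro-benchmark, assuming long keys stand in for (a hash of) the 80-byte records; the class and variable names are illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class HashSetBenchmark
{
    static void Main()
    {
        var set = new HashSet<long>();
        var sw = Stopwatch.StartNew();

        // Pass 1: add 1,000,000 entries, removing the oldest 100,000 along the way.
        for (long i = 0; i < 1_000_000; i++)
        {
            set.Add(i);
            if (i >= 900_000)
                set.Remove(i - 900_000);
        }
        sw.Stop();
        Console.WriteLine($"First pass: {sw.ElapsedMilliseconds} ms ({set.Count} entries)");

        // Pass 2: offer the same 1,000,000 values again; most are already
        // present, so this mainly measures lookup (duplicate-hit) speed.
        sw.Restart();
        for (long i = 0; i < 1_000_000; i++)
            set.Add(i);
        sw.Stop();
        Console.WriteLine($"Second pass: {sw.ElapsedMilliseconds} ms ({set.Count} entries)");
    }
}
```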

My recommendations:

  • Implement the 'lookup' and 'duplicate detection' as a HashSet, which is extremely fast for inserting, removing and finding (a combined sketch of this and the buffer follows this list).

  • Implement the actual buffer (that receives new events and expires old ones) as a suitably large circular/ring buffer. This will avoid memory allocations and deallocations, and can add entries to the front and remove them from the back. Here are some helpful links including one (the second one) that describes algorithms for expiring items in the cache:

Circular Buffer for .NET

Fast calculation of min, max, and average of incoming numbers

Generic C# RingBuffer

How would you code an efficient Circular Buffer in Java or C#

  • Note that the circular buffer is even better if you want your cache to be bounded by number of elements (say 100,000) rather than time of events (say the last 5 minutes).

  • When items are removed from the buffer (which searches from the end first), they can be removed from the HashSet as well, keeping the two structures in sync without them having to hold identical data.

  • Avoid multithreading until you need it! You have a naturally 'serial' workload. Unless you know one of your CPU threads can't handle the speed, keep it in a single thread. This avoids contention, locks, CPU cache misses and other multithreading headaches that tend to slow things down for workloads that are not embarrassingly parallel. My main caveat here is that you may want to offload the 'receiving' of the events to a different thread from the processing of them.

  • The above recommendation is the main idea behind Staged event-driven architecture (SEDA) that is used as the basis for high-performance and stable-behaviour event-driven systems (such as messaging queues).
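To make the buffer-plus-HashSet combination concrete, here is a minimal sketch bounded by element count as suggested above. The DuplicateFilter name, the long key type (standing in for a hash of the 80-byte record), and the capacity are illustrative assumptions, not part of the original answer; at 10,000 records/second a capacity of 3,000,000 approximates the 5-minute window:

```csharp
using System.Collections.Generic;

// Illustrative sketch: a fixed-size ring buffer of keys in arrival order,
// paired with a HashSet for O(1) duplicate lookups.
class DuplicateFilter
{
    private readonly long[] _ring;        // circular buffer, oldest to newest
    private readonly HashSet<long> _seen; // membership for fast duplicate checks
    private int _head;                    // index of the oldest entry
    private int _count;

    public DuplicateFilter(int capacity)
    {
        _ring = new long[capacity];
        _seen = new HashSet<long>();
    }

    // Returns true if the key was seen recently and should be suppressed.
    public bool IsDuplicate(long key)
    {
        if (!_seen.Add(key))
            return true;                  // already cached: a duplicate

        if (_count == _ring.Length)
        {
            // Buffer full: expire the oldest key from both structures.
            _seen.Remove(_ring[_head]);
            _head = (_head + 1) % _ring.Length;
        }
        else
        {
            _count++;
        }

        // Write the new key into the tail slot of the ring.
        _ring[(_head + _count - 1) % _ring.Length] = key;
        return false;
    }
}
```

A record would be suppressed whenever IsDuplicate returns true, e.g. `var filter = new DuplicateFilter(3_000_000);`. Refreshing the expiry of entries that are hit again (so hot entries never expire) is left out for brevity.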

The above design can be wrapped cleanly and attempts to achieve the raw performance required with a minimum of complexity. It provides a decent baseline from which efficiency can then be measured and extracted.
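As a rough illustration of that staged split, here is a minimal sketch that offloads receiving onto a producer thread and keeps processing single-threaded, assuming a BlockingCollection as the stage boundary (the class and names here are illustrative, not part of the original answer):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class StagedPipeline
{
    static void Main()
    {
        // Bounded queue as the stage boundary: the receiver enqueues raw
        // records, a single consumer thread does the duplicate filtering.
        var queue = new BlockingCollection<byte[]>(boundedCapacity: 100_000);

        long processed = 0;
        var consumer = Task.Run(() =>
        {
            foreach (var record in queue.GetConsumingEnumerable())
            {
                // Single-threaded processing stage: no locks or contention.
                // Duplicate filtering would go here, e.g.
                // if (filter.IsDuplicate(Hash(record))) continue;
                processed++;
            }
        });

        // Receiver (producer) side: Add blocks briefly if the consumer
        // falls behind, providing natural back-pressure.
        for (int i = 0; i < 1000; i++)
            queue.Add(new byte[80]);   // placeholder for a received record

        queue.CompleteAdding();        // signal end of stream
        consumer.Wait();
        Console.WriteLine($"Processed {processed} records");
    }
}
```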

(Note: If you need persistence for the cache, look at Kyoto Cabinet. If you need the cache to be visible to other users or distributed, look at Redis.)

answered Sep 20 '22 by yamen