
StackExchange redis client very slow compared to benchmark tests

I'm implementing a Redis caching layer using the StackExchange.Redis client, and the performance right now is bordering on unusable.

I have a local environment where the web application and the Redis server are running on the same machine. I ran the redis-benchmark tool against my Redis server and the results were actually really good (I'm only including the SET and GET operations in my write-up):

C:\Program Files\Redis>redis-benchmark -n 100000
====== PING_INLINE ======
  100000 requests completed in 0.88 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

====== SET ======
  100000 requests completed in 0.89 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.70% <= 1 milliseconds
99.90% <= 2 milliseconds
100.00% <= 3 milliseconds
111982.08 requests per second

====== GET ======
  100000 requests completed in 0.81 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.87% <= 1 milliseconds
99.98% <= 2 milliseconds
100.00% <= 2 milliseconds
124069.48 requests per second

So according to the benchmarks I am looking at over 100,000 SETs and 100,000 GETs per second. I wrote a unit test that does 300,000 sets and 300,000 gets:

private string redisCacheConn = "localhost:6379,allowAdmin=true,abortConnect=false,ssl=false";


[Fact]
public void PerfTestWriteShortString()
{
    CacheManager cm = new CacheManager(redisCacheConn);

    string svalue = "t";
    string skey = "testtesttest";
    for (int i = 0; i < 300000; i++)
    {
        cm.SaveCache(skey + i, svalue);
        string valRead = cm.ObtainItemFromCacheString(skey + i);
    }

}

This uses the following class to perform the Redis operations via the Stackexchange client:

using System;
using StackExchange.Redis;

namespace Caching
{
    public class CacheManager:ICacheManager, ICacheManagerReports
    {
        private static string cs;
        private static ConfigurationOptions options;
        private int pageSize = 5000;
        public ICacheSerializer serializer { get; set; }

        public CacheManager(string connectionString)
        {
            serializer = new SerializeJSON();
            cs = connectionString;
            options = ConfigurationOptions.Parse(connectionString);
            options.SyncTimeout = 60000;
        }

        private static readonly Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(options));
        private static ConnectionMultiplexer Connection => lazyConnection.Value;
        private static IDatabase cache => Connection.GetDatabase();

        public string ObtainItemFromCacheString(string cacheId)
        {
            return cache.StringGet(cacheId);
        }

        public void SaveCache<T>(string cacheId, T cacheEntry, TimeSpan? expiry = null)
        {
            if (IsValueType<T>())
            {
                cache.StringSet(cacheId, cacheEntry.ToString(), expiry);
            }
            else
            {
                cache.StringSet(cacheId, serializer.SerializeObject(cacheEntry), expiry);
            }
        }

        public bool IsValueType<T>()
        {
            return typeof(T).IsValueType || typeof(T) == typeof(string);
        }

    }
}

My JSON serializer just uses Newtonsoft.Json:

using System.Collections.Generic;
using Newtonsoft.Json;

namespace Caching
{
    public class SerializeJSON:ICacheSerializer
    {
        public string SerializeObject<T>(T cacheEntry)
        {
            return JsonConvert.SerializeObject(cacheEntry, Formatting.None,
                new JsonSerializerSettings()
                {
                    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
                });
        }

        public T DeserializeObject<T>(string data)
        {
            return JsonConvert.DeserializeObject<T>(data, new JsonSerializerSettings()
            {
                ReferenceLoopHandling = ReferenceLoopHandling.Ignore
            });
        }
    }
}

My test takes around 21 seconds for 300,000 sets and 300,000 gets. That works out to roughly 28,500 operations per second (600,000 operations in 21 seconds), at least 3 times slower than I would expect from the benchmarks. The application I am converting to use Redis is pretty chatty, and certain heavy requests can approach 200,000 total operations against Redis. Obviously I wasn't expecting anything like the times I was getting with the system runtime cache, but the delays after this change are significant. Am I doing something wrong in my implementation, and does anyone know why my benchmarked figures are so much faster than my StackExchange test figures?

Thanks, Paul

asked Feb 29 '16 by Paul Witherspoon

2 Answers

My results from the code below:

Connecting to server...
Connected
PING (sync per op)
    1709ms for 1000000 ops on 50 threads took 1.709594 seconds
    585137 ops/s
SET (sync per op)
    759ms for 500000 ops on 50 threads took 0.7592914 seconds
    658761 ops/s
GET (sync per op)
    780ms for 500000 ops on 50 threads took 0.7806102 seconds
    641025 ops/s
PING (pipelined per thread)
    3751ms for 1000000 ops on 50 threads took 3.7510956 seconds
    266595 ops/s
SET (pipelined per thread)
    1781ms for 500000 ops on 50 threads took 1.7819831 seconds
    280741 ops/s
GET (pipelined per thread)
    1977ms for 500000 ops on 50 threads took 1.9772623 seconds
    252908 ops/s

===

Server configuration: make sure persistence (RDB snapshots / AOF) is disabled, etc.
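
As an aside, this can be checked from SE.Redis itself via the server API; the snippet below is only a rough sketch of that idea (the class name is made up, and it assumes a local server with allowAdmin=true, which the question's connection string already sets):

using System;
using StackExchange.Redis;

static class PersistenceCheck
{
    static void Main()
    {
        // Sketch only: CONFIG commands need allowAdmin=true on the connection string.
        using (var muxer = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true"))
        {
            var server = muxer.GetServer("localhost", 6379);

            // Inspect the RDB snapshot schedule ("" means snapshots are off)
            foreach (var pair in server.ConfigGet("save"))
                Console.WriteLine($"{pair.Key} = {pair.Value}");

            // Disable RDB snapshots and AOF while benchmarking
            server.ConfigSet("save", "");
            server.ConfigSet("appendonly", "no");
        }
    }
}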

The first thing you should do in a benchmark is: benchmark one thing. At the moment you're including a lot of serialization overhead, which won't help get a clear picture. Ideally, for a like-for-like benchmark, you should be using a 3-byte fixed payload, because:

3 bytes payload

Next, you'd need to look at parallelism:

50 parallel clients

It isn't clear whether your test is parallel, but if it isn't we should absolutely expect to see less raw throughput. Conveniently, SE.Redis is designed to be easy to parallelize: you can just spin up multiple threads talking to the same connection (this also has the advantage of avoiding packet fragmentation, as you can end up with multiple messages per packet, whereas a single-threaded sync approach is guaranteed to use at most one message per packet).
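
As a minimal sketch of that pattern (illustrative only, not the benchmark harness shown further down; it assumes a local server on the default port):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class ParallelSketch
{
    static void Main()
    {
        // Sketch only: parallel workers share one multiplexer and one IDatabase;
        // SE.Redis interleaves all of their commands on a single connection.
        using (var muxer = ConnectionMultiplexer.Connect("localhost"))
        {
            var db = muxer.GetDatabase();
            Parallel.For(0, 50, worker =>
            {
                for (int i = 0; i < 10000; i++)
                {
                    db.StringSet("key:" + worker + ":" + i, "t");
                }
            });
            Console.WriteLine("done");
        }
    }
}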

Finally, we need to understand what the listed benchmark is doing. Is it doing:

(send, receive) x n

or is it doing

send x n, receive separately until all n are received

? Both options are possible. Your sync API usage is the first one, but the second test is equally well-defined, and for all I know: that's what it is measuring. There are two ways of simulating this second setup:

  • send the first (n-1) messages with the "fire and forget" flag, so you only actually wait for the last one (see the sketch just after this list)
  • use the *Async API for all messages, and only Wait() or await the last Task
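
A rough sketch of the first option (illustrative only, not code from the measurements above):

using StackExchange.Redis;

static class FireAndForgetSketch
{
    static void Main()
    {
        using (var muxer = ConnectionMultiplexer.Connect("localhost"))
        {
            var db = muxer.GetDatabase();
            const int n = 10000;

            // Send the first n-1 commands without waiting for their replies...
            for (int i = 0; i < n - 1; i++)
            {
                db.StringSet("key" + i, "t", flags: CommandFlags.FireAndForget);
            }

            // ...and only block on the last one; Redis processes commands on a
            // connection in order, so this reply implies the rest have landed.
            db.StringSet("key" + (n - 1), "t");
        }
    }
}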

Here's the benchmark I used for the numbers above; it shows both "sync per op" (via the sync API) and "pipelined per thread" (using the *Async API and just waiting for the last task per thread), both using 50 threads:

using StackExchange.Redis;
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

static class P
{
    static void Main()
    {
        Console.WriteLine("Connecting to server...");
        using (var muxer = ConnectionMultiplexer.Connect("127.0.0.1"))
        {
            Console.WriteLine("Connected");
            var db = muxer.GetDatabase();

            RedisKey key = "some key";
            byte[] payload = new byte[3];
            new Random(12345).NextBytes(payload);
            RedisValue value = payload;
            DoWork("PING (sync per op)", db, 1000000, 50, x => { x.Ping(); return null; });
            DoWork("SET (sync per op)", db, 500000, 50, x => { x.StringSet(key, value); return null; });
            DoWork("GET (sync per op)", db, 500000, 50, x => { x.StringGet(key); return null; });

            DoWork("PING (pipelined per thread)", db, 1000000, 50, x => x.PingAsync());
            DoWork("SET (pipelined per thread)", db, 500000, 50, x => x.StringSetAsync(key, value));
            DoWork("GET (pipelined per thread)", db, 500000, 50, x => x.StringGetAsync(key));
        }
    }
    static void DoWork(string action, IDatabase db, int count, int threads, Func<IDatabase, Task> op)
    {
        object startup = new object(), shutdown = new object();
        int activeThreads = 0, outstandingOps = count;
        Stopwatch sw = default(Stopwatch);
        // Each worker waits at the startup gate below; the stopwatch starts
        // only once all threads are ready, so thread ramp-up isn't measured.
        var threadStart = new ThreadStart(() =>
        {
            lock(startup)
            {
                if(++activeThreads == threads)
                {
                    sw = Stopwatch.StartNew();
                    Monitor.PulseAll(startup);
                }
                else
                {
                    Monitor.Wait(startup);
                }
            }
            // Sync ops return null; async ops return a Task, and only the last
            // Task per thread is waited on (i.e. pipelined per thread).
            Task final = null;
            while (Interlocked.Decrement(ref outstandingOps) >= 0)
            {
                final = op(db);
            }
            if (final != null) final.Wait();
            lock(shutdown)
            {
                if (--activeThreads == 0)
                {
                    sw.Stop();
                    Monitor.PulseAll(shutdown);
                }
            }
        });
        lock (shutdown)
        {
            for (int i = 0; i < threads; i++)
            {
                new Thread(threadStart).Start();
            }
            Monitor.Wait(shutdown);
            Console.WriteLine($@"{action}
    {sw.ElapsedMilliseconds}ms for {count} ops on {threads} threads took {sw.Elapsed.TotalSeconds} seconds
    {(count * 1000) / sw.ElapsedMilliseconds} ops/s");
        }
    }
}
answered Nov 15 '22 by Marc Gravell


You are fetching data in a synchronous way (50 clients in parallel, but each client's requests are made synchronously instead of asynchronously).

One option would be to use the async/await methods (StackExchange.Redis supports them).
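
For example, a minimal sketch of what that might look like (the method name and connection details here are illustrative, not from the question):

using System.Threading.Tasks;
using StackExchange.Redis;

public static class AsyncCacheExample
{
    private static readonly ConnectionMultiplexer muxer =
        ConnectionMultiplexer.Connect("localhost:6379,abortConnect=false");

    // Awaiting the call frees the calling thread while the request is in
    // flight, instead of blocking on the reply the way StringGet does.
    public static async Task<string> ObtainItemFromCacheStringAsync(string cacheId)
    {
        IDatabase db = muxer.GetDatabase();
        RedisValue value = await db.StringGetAsync(cacheId);
        return value.HasValue ? (string)value : null;
    }
}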

If you need to get multiple keys at once (for example, to build a daily graph of visitors to your website, assuming you save a visitor counter per day key), then you should try fetching the data from Redis asynchronously using Redis pipelining; this should give you much better performance.
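
A rough sketch of that idea (the per-day visitor keys and method name are made up for illustration):

using System;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class VisitorGraphExample
{
    // Issue every GET up front so they are pipelined on the one connection,
    // then await the whole batch instead of waiting for each reply in turn.
    public static async Task<long[]> GetDailyVisitorsAsync(IDatabase db, DateTime from, int days)
    {
        Task<RedisValue>[] pending = Enumerable.Range(0, days)
            .Select(i => db.StringGetAsync("visitors:" + from.AddDays(i).ToString("yyyy-MM-dd")))
            .ToArray();

        RedisValue[] replies = await Task.WhenAll(pending);
        return replies.Select(v => v.IsNull ? 0L : (long)v).ToArray();
    }
}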

answered Nov 15 '22 by Kobynet