I'm using the redis-cli tool to observe redis-server latency. Here's an example:
ubuntu:~$ redis-cli --latency -h 127.0.0.1 -p 6379
min: 0, max: 15, avg: 0.12 (2839 samples)
The question is, what do these values actually mean? I'm struggling to find documentation on this beyond what's available through the tool's own help output.
Because Redis is single-threaded, command requests are processed sequentially. The typical latency for a 1 Gbit/s network is about 200 μs. If you are seeing slow response times for commands and latency that is significantly higher than 200 μs, it could be because there is a high number of requests in the command queue.
Check the minimum latency you can expect from your runtime environment using ./redis-cli --intrinsic-latency 100. Note: you need to run this command on the server, not on the client.
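Run on the server host, the output looks something like this (the figures are illustrative; they depend entirely on your hardware, kernel, and load):
ubuntu:~$ ./redis-cli --intrinsic-latency 100
Max latency so far: 1 microseconds.
Max latency so far: 16 microseconds.
Max latency so far: 83 microseconds.
The largest value reported is roughly the noise floor contributed by the operating system and hardware themselves, so latency measured against Redis can't reasonably be expected to go below it.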
Redis comes with a benchmark tool called redis-benchmark. This program can be used to simulate an arbitrary number of clients connecting at the same time and performing actions on the server, measuring how long it takes for the requests to be completed.
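For example, a run like the following gives a quick throughput snapshot of the same server (-c sets the number of parallel clients, -n the total number of requests, and -q asks for quiet, one-line-per-command output):
ubuntu:~$ redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -q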
To start the Redis client, open a terminal and type the command redis-cli. This connects to your local server, and you can then run any command. For example, you can connect to a Redis server running on the local machine and execute the PING command, which checks whether the server is running.
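A session against the local server looks like this (the prompt shows the host and port you are connected to):
ubuntu:~$ redis-cli
127.0.0.1:6379> PING
PONG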
The redis-cli --latency -h <host> -p <port> command is a tool that helps troubleshoot and understand latency problems you may be experiencing with Redis. It does so by measuring the time it takes the Redis server to respond to the Redis PING command, in milliseconds.
In this context latency is the maximum delay between the time a client issues a command and the time the reply to the command is received by the client. Usually Redis processing time is extremely low, in the sub microsecond range, but there are certain conditions leading to higher latency figures.
-- Redis latency problems troubleshooting
So when we run the command redis-cli --latency -h 127.0.0.1 -p 6379, redis-cli enters a special mode in which it continuously samples latency (by issuing PING).
Now let's break down the data it returns: min: 0, max: 15, avg: 0.12 (2839 samples)
What's (2839 samples)? This is the number of times redis-cli recorded issuing the PING command and receiving a response. In other words, this is your sample size. In our example, 2839 requests and responses were recorded.
What's min: 0? The min value represents the minimum delay between the time the CLI issued PING and the time the reply was received. In other words, this was the absolute best response time from our sampled data.
What's max: 15? The max value is the opposite of min. It represents the maximum delay between the time the CLI issued PING and the time the reply to the command was received. This is the longest response time in our sampled data. In our example of 2839 samples, the longest transaction took 15ms.
What's avg: 0.12? The avg value is the average response time in milliseconds for all our sampled data. So on average, across our 2839 samples, the response time was 0.12ms.
Basically, higher numbers for min, max, and avg are a bad thing.
Some good follow-up material on how to use this data:
The --latency switch puts redis-cli into a special mode that is designed to help you measure the latency between the client and your Redis server. While it runs in that mode, redis-cli pings the server (using the Redis PING command) and keeps track of the average/minimum/maximum response times it got (in milliseconds).
This is a useful tool for ruling out network issues when you are using a remote Redis server.
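If you want to watch how latency evolves over time instead of one continuously updated summary, redis-cli also offers a --latency-history mode, which prints a separate min/max/avg line for each sampling window (15 seconds by default, adjustable with -i). The numbers below are illustrative:
ubuntu:~$ redis-cli --latency-history -h 127.0.0.1 -p 6379
min: 0, max: 15, avg: 0.12 (1356 samples) -- 15.00 seconds range
min: 0, max: 8, avg: 0.10 (1359 samples) -- 15.01 seconds range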