I want to use pipelining to reduce the number of interactions between my program and redis-server.
I may put many commands in a pipeline, but I couldn't find any documentation describing the maximum number of commands that can be placed in one.
Is there any advice? Thanks in advance.
I'm not sure there is a maximum, but even if there is one, you don't want to reach it.
In most cases, limiting the pipeline to 100-1000 operations gives the best results, but you can run a small benchmark with the typical requests your application sends. Pipelining is generally beneficial, but keep in mind that Redis buffers the replies in memory until every request in the pipeline has been served, and your client then waits for the combined reply to all of them.
You should try to find the sweet spot between concurrent connections, pipelined requests per batch, and your Redis memory.
Your client will not start reading replies until the last command of the pipeline has been sent to Redis, so an overly long pipeline can block the client for a noticeable time and inflate Redis's reply buffers.
You can use redis-benchmark (its `-P` option sets the number of pipelined requests) or memtier_benchmark (its `--pipeline` option does the same); the memtier_benchmark documentation gives a more detailed description and usage instructions.
Once you have found a good number of pipelined requests and connections, implement those settings with the redis-py client in your app.
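As a sketch of the batching idea above, here is one way to split a large set of commands into pipelines of a bounded size with redis-py. The helper names (`chunked`, `set_many`) and the batch size of 500 are my own choices for illustration, not anything mandated by redis-py; the pattern is simply "one `pipeline()` / `execute()` round trip per batch":

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def set_many(client, items, batch_size=500):
    """Send SET commands for (key, value) pairs in bounded pipelines.

    `client` is assumed to be a redis.Redis instance (hypothetical caller);
    each batch costs one network round trip, and Redis buffers at most
    `batch_size` replies per batch instead of one giant reply.
    """
    for batch in chunked(items, batch_size):
        # transaction=False gives a plain pipeline without MULTI/EXEC overhead.
        pipe = client.pipeline(transaction=False)
        for key, value in batch:
            pipe.set(key, value)
        pipe.execute()  # flush the batch and read its replies

# Typical usage (requires a running Redis server):
#   import redis
#   r = redis.Redis(host="localhost", port=6379)
#   set_many(r, ((f"key:{i}", i) for i in range(100_000)), batch_size=500)
```

Tuning `batch_size` within the 100-1000 range mentioned above, while measuring throughput and Redis memory, is exactly the benchmark exercise the answer suggests.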