Here's a scenario: 1000 requests ask Redis to GET a key named goods_stock and then SET it to goods_stock - 1, all at the same time. How does the Redis server deal with these requests? Does it handle them in a default queue, as if every request were a blocking request?
Your app may be multi-threaded, but on the server side Redis itself is single-threaded. Every operation in Redis is atomic, so commands are executed sequentially on the Redis side. The key points are quoted here:
The fact that Redis operations are atomic is simply a consequence of the single-threaded event loop. The interesting point is atomicity is provided at no extra cost (it does not require synchronization). It can be exploited by the user to implement optimistic locking and other patterns without paying for the synchronization overhead.
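For the goods_stock scenario in the question, here is a minimal sketch using the redis-py client (the client library, key names, and helper function are illustrative assumptions, not part of the quoted text). It contrasts a single atomic command with the optimistic-locking pattern mentioned above:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
r.set("goods_stock", 1000)

# Because the server executes commands sequentially, a single DECR is atomic:
# 1000 concurrent DECR calls leave the key at exactly 0, with no lost updates.
remaining = r.decr("goods_stock")

# A client-side GET followed by SET is NOT atomic as a pair; another client can
# run between the two commands. Optimistic locking (WATCH / MULTI / EXEC)
# detects that interleaving and retries instead of taking a lock.
def buy_one(r: redis.Redis) -> bool:
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("goods_stock")             # watch for concurrent writes
                stock = int(pipe.get("goods_stock"))  # plain GET while watching
                if stock <= 0:
                    pipe.unwatch()
                    return False                      # sold out
                pipe.multi()                          # queue commands transactionally
                pipe.set("goods_stock", stock - 1)
                pipe.execute()                        # EXEC fails if the key changed
                return True
            except redis.WatchError:
                continue                              # lost the race, retry
```

In practice the single-command form (DECR, or a Lua script for more complex updates) is the simplest way to express this, since it needs no retry loop.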
Redis is single threaded. How can I exploit multiple CPU / cores?
It's not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. For instance, using pipelining Redis running on an average Linux system can deliver even 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU. However, to maximize CPU usage you can start multiple instances of Redis in the same box and treat them as different servers. At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier. You can find more information about using multiple Redis instances in the Partitioning page. However with Redis 4.0 we started to make Redis more threaded. For now this is limited to deleting objects in the background, and to blocking commands implemented via Redis modules. For the next releases, the plan is to make Redis more and more threaded.
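As a rough illustration of the pipelining mentioned above (again a hedged sketch with redis-py; the key names are made up), many commands are buffered on the client and sent in a single round trip, which is why Redis tends to be network- or memory-bound rather than CPU-bound:

```python
import redis

r = redis.Redis()
pipe = r.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC wrapper
for i in range(10_000):
    pipe.incr(f"counter:{i}")         # commands are buffered client-side
results = pipe.execute()              # all commands sent in one round trip
```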
Have a look at the following posts for more details: