
How do I implement Redis pipelined requests with Booksleeve?

I'm a bit mixed up about the difference between a Redis transaction and a pipeline, and ultimately about how to use pipelines with Booksleeve. I see that Booksleeve supports the Redis transaction feature (MULTI/EXEC), but its API and tests make no mention of a pipelining feature. However, it's clear from other implementations that there is a distinction between pipelines and transactions, namely in atomicity, as the redis-ruby example below shows, yet in some places the terms seem to be used interchangeably.

redis-ruby implementation:

r.pipelined {
  # these commands will be pipelined
  r.get("insensitive_key")
}

r.multi {
  # these commands will be executed atomically
  r.set("sensitive_key")
}

I'd just use MULTI/EXEC instead, but they seem to block all other users until the transaction has completed (which isn't necessary in my case), so I worry about their performance. Has anyone used pipelines with Booksleeve, or have any ideas about how to implement them?

bosgood asked Dec 05 '11



2 Answers

In BookSleeve, everything is always pipelined. There are no synchronous operations. Not a single one. As such, every operation returns some form of Task (could be a vanilla Task, could be a Task<string>, Task<long>, etc), which at some point in the future (i.e. when redis responds) will have a value. You can use Wait at your calling code to perform a synchronous wait, or ContinueWith / await (C# 5 language feature) to perform an asynchronous callback.
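
For example (a minimal sketch only; the member names below (RedisConnection, Open, Strings.GetString, Wait) reflect the BookSleeve API as I remember it and may differ slightly between versions; the host and key names are placeholders):

using System;
using System.Threading.Tasks;
using BookSleeve;

using (var conn = new RedisConnection("localhost"))
{
    conn.Wait(conn.Open());          // connect once; the connection is multiplexed

    // both requests go onto the wire back-to-back; neither call blocks
    Task<string> first = conn.Strings.GetString(0, "key:1");
    Task<string> second = conn.Strings.GetString(0, "key:2");

    // block only when the values are actually needed
    // (or use ContinueWith / await instead of Wait)
    string a = conn.Wait(first);
    string b = conn.Wait(second);
}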

Transactions are no different; they are pipelined. The only subtle change with transactions is that they are additionally buffered at the call-site until complete (since it is a multiplexer, we can't start pipelining transaction-related messages until we have a complete unit-of-work, as it would adversely impact other callers on the same multiplexer).
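
By way of illustration, a transaction on the same connection might look like this (again only a sketch; CreateTransaction and Execute are from memory of the BookSleeve surface, and the key/value are placeholders):

using (var tran = conn.CreateTransaction())
{
    // nothing is written to the socket until Execute() is called
    Task pendingSet = tran.Strings.Set(0, "sensitive_key", "value");
    Task<bool> exec = tran.Execute();   // MULTI ... EXEC goes out as one unit
    bool applied = conn.Wait(exec);     // true if the transaction committed
}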

So: the reason there is no explicit .pipelined is that everything is pipelined and asynchronous.

Marc Gravell answered Sep 30 '22


Pipelining is a protocol-level communication strategy and has nothing to do with atomicity. It is entirely orthogonal to the notion of 'transactions'. (For example, you can use MULTI .. EXEC over a pipelined connection.)

What is pipelining?

The most basic connector to Redis would be a synchronous client interacting in a request-reply manner: the client sends a request, then waits for the response from Redis before sending the next request.

With pipelining, the client can keep sending requests without pausing to see the Redis response to each one. Redis is, of course, a single-threaded server and a natural serialization point, so request order is preserved and reflected in the response order. This means the client can have one thread sending requests (typically by dequeuing from a request queue) while another thread constantly processes responses from Redis. Note that you can still use pipelining with a single-threaded client, but you lose some of the efficiency. The two-threaded model allows full utilization of your local CPU and of the network bandwidth (i.e. saturation).

If you are following this so far, you must ask yourself: well, how are requests and responses matched up on the client side? Good question! There are various ways to approach this. In JRedis, I wrap requests in a (Java) Future object to deal with the asynchrony of the request/response processing. Every time a request is sent, a corresponding Future is wrapped in a pending-response object and queued. The response listener simply dequeues from this queue one item at a time, parses the response (stream), and updates the Future object.
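
A generic sketch of that matching mechanism, in C# since the question concerns a .NET client (this is not JRedis or BookSleeve code, just an illustration; the RESP parsing is deliberately omitted):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class PipelinedClient
{
    private readonly Stream stream;   // socket stream to the Redis server
    private readonly ConcurrentQueue<TaskCompletionSource<string>> pending
        = new ConcurrentQueue<TaskCompletionSource<string>>();

    public PipelinedClient(Stream stream) { this.stream = stream; }

    // writer side: send the request and enqueue a "future" for its reply
    public Task<string> Send(byte[] request)
    {
        var future = new TaskCompletionSource<string>();
        pending.Enqueue(future);                  // enqueue order == reply order
        stream.Write(request, 0, request.Length); // no waiting for the reply here
        return future.Task;                       // caller can Wait or await this
    }

    // reader side: one loop matches each reply to the oldest pending future
    public void ReadLoop()
    {
        while (true)
        {
            string reply = ParseOneReply(stream); // read exactly one RESP reply
            TaskCompletionSource<string> future;
            if (pending.TryDequeue(out future))
                future.SetResult(reply);
        }
    }

    private static string ParseOneReply(Stream s)
    {
        throw new NotImplementedException("RESP parsing omitted for brevity");
    }
}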

Now the end user of the client can be exposed to either a synchronous or an asynchronous interface. If the interface is synchronous, the implementation naturally must block on the Future's response.

If you have followed so far, then it should be clear that a single-threaded app using synchronous semantics with pipelining defeats the entire purpose of pipelining (since the app is blocking on each response and is not busy feeding the client additional requests). But if the app is multithreaded, a synchronous interface to the pipeline allows you to use a single connection while servicing N client-app threads. (So here, it is an implementation strategy to help build a thread-safe connection.)

If the interface to pipeline is asynchronous, then even a single threaded client app can benefit. Throughput increases at least by an order of magnitude.

(Caveats with pipelining: It is non-trivial to write a fault-tolerant pipelined client.)

Ideally I should use a diagram, but pay attention to what happens at the end of the clip: http://www.youtube.com/watch?v=NeK5ZjtpO-M

alphazero answered Sep 30 '22