 

Rack concurrency - rack.multithread, async.callback, or both?

Tags:

ruby

rack

thin

I'm attempting to fully understand the options for concurrent request handling in Rack. I've used async_sinatra to build a long-polling app, and am now experimenting with bare-metal Rack using throw :async and/or Thin's --threaded flag. I am comfortable with the subject, but there are some things I just can't make sense of. (No, I am not mistaking concurrency for parallelism, and yes, I do understand the limitations imposed by the GIL).

Q1. My tests indicate that thin --threaded (i.e. rack.multithread=true) runs requests concurrently in separate threads (I assume using EM), meaning long-running request A will not block request B (IO aside). This means my application does not require any special coding (e.g. callbacks) to achieve concurrency (again, ignoring blocking DB calls, IO, etc.). This is what I believe I have observed - is it correct?

Q2. There is another, oft-discussed means of achieving concurrency, involving EventMachine.defer and throw :async. Strictly speaking, requests are not handled using threads. They are dealt with serially, but their heavy lifting and a callback are passed off to EventMachine, which uses async.callback to send a response at a later time. After request A has offloaded its work to EM.defer, request B is begun. Is this correct?

Q3. Assuming the above are more-or-less correct, is there any particular advantage to one method over the other? Obviously --threaded looks like a magic bullet. Are there any downsides? If not, why is everyone talking about async_sinatra / throw :async / async.callback? Perhaps the former is suited to "I want to make my Rails app a little snappier under heavy load" while the latter is better suited to apps with many long-running requests? Or perhaps scale is a factor? Just guessing here.

I'm running Thin 1.2.11 on MRI Ruby 1.9.2. (FYI, I have to use the --no-epoll flag, as there's a long-standing, supposedly-resolved-but-not-really problem with EventMachine's use of epoll and Ruby 1.9.2. That's beside the point, but any insight is welcome.)

asked Aug 15 '11 by bioneuralnet

1 Answer

Note: I use Thin as a synonym for all web servers implementing the async Rack extension (i.e. Rainbows!, Ebb, future versions of Puma, ...)

Q1. Correct. It will wrap the response generation (i.e. the app's call) in EventMachine.defer { ... }, which causes EventMachine to push it onto its built-in thread pool.
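Conceptually, that deferral can be sketched with plain threads. This is a stdlib stand-in, not EventMachine's implementation: defer_like is a made-up name, EM dispatches to a fixed pool of 20 threads rather than spawning one per call, and EM runs the callback back on the reactor thread.

```ruby
require 'thread'

# Stdlib sketch of EventMachine.defer's contract: run an operation on a
# background thread, then hand its result to a callback once it is done.
def defer_like(operation, callback)
  Thread.new do
    result = operation.call   # the heavy lifting, off the calling thread
    callback.call(result)     # delivered when the work completes
  end
end

results = Queue.new
defer_like(-> { 21 * 2 }, ->(r) { results << r })
results.pop  # blocks until the deferred work completes; returns 42
```

While the deferred operation runs, the caller is free to accept the next request, which is exactly why request A does not block request B in the threaded setup.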

Q2. Using async.callback in conjunction with EM.defer does not make much sense, as it would basically use the thread pool too, ending up with a construct similar to the one described in Q1. Using async.callback makes sense when only using EventMachine libraries for IO. Thin will send the response to the client once env['async.callback'] is called with a normal Rack response as its argument.
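The mechanics can be sketched without booting a server, since a Rack app is just an object responding to call and Thin is what supplies env['async.callback']. AsyncApp is a hypothetical name; in a real app the callback would be invoked later, from inside EM's event loop, not inline.

```ruby
# Hypothetical async Rack app. Thin sets env['async.callback']; calling it
# with a normal Rack response triple sends that response to the client.
class AsyncApp
  def call(env)
    callback = env['async.callback']
    # In a real app this would happen later, e.g. when an evented IO
    # operation finishes. Invoked inline here purely for illustration.
    callback.call([200, { 'Content-Type' => 'text/plain' }, ['delayed body']])
    throw :async  # tells Thin to ignore the (unreachable) return value
  end
end

sent = []
env = { 'async.callback' => ->(response) { sent << response } }
catch(:async) { AsyncApp.new.call(env) }  # the server catches :async like this
sent.first[0]  # => 200
```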

If the body is an EM::Deferrable, Thin will not close the connection until that deferrable succeeds. A rather well-kept secret: if you want more than just long polling (i.e. to keep the connection open after sending a partial response), you can also return an EM::Deferrable as the body object directly, without having to use throw :async or a status code of -1.

Q3. You're guessing correctly. Threaded serving might improve performance under load for an otherwise unchanged Rack application. I see a 20% improvement for simple Sinatra applications on my machine with Ruby 1.9.3, and even more when running on Rubinius or JRuby, where all cores can be utilized. The second approach is useful if you write your application in an evented manner.

You can throw a lot of magic and hacks on top of Rack to have a non-evented application make use of those mechanisms (see em-synchrony or sinatra-synchrony), but that will leave you in debugging and dependency hell.

The async approach makes real sense with applications that tend to be best solved with an evented approach, like a web chat. However, I would not recommend using the threaded approach for implementing long-polling, because every polling connection will block a thread. This will leave you with either a ton of threads or connections you can't deal with. EM's thread pool has a size of 20 threads by default, limiting you to 20 waiting connections per process.
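That limit is easy to see with a stdlib simulation (the pool size and job count here are arbitrary; EM's actual pool is configurable via EM.threadpool_size=): with a fixed pool of N workers, at most N blocking long polls are ever in flight at once, and every further connection just waits in the queue.

```ruby
require 'thread'

# Simulation: POOL_SIZE workers draining a queue of blocking "long polls".
POOL_SIZE = 2
jobs = Queue.new
6.times { |i| jobs << i }

mutex = Mutex.new
active = 0       # long polls currently holding a worker thread
max_active = 0   # high-water mark of concurrent long polls

workers = POOL_SIZE.times.map do
  Thread.new do
    loop do
      begin
        jobs.pop(true)          # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break
      end
      mutex.synchronize { active += 1; max_active = [max_active, active].max }
      sleep 0.01                # simulated long-poll wait, blocking the thread
      mutex.synchronize { active -= 1 }
    end
  end
end
workers.each(&:join)
max_active  # never exceeds POOL_SIZE, no matter how many jobs are queued
```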

You could use a server that creates a new thread for every incoming connection, but creating threads is expensive (except on MacRuby, but I would not use MacRuby in any production app). Examples are serv and net-http-server. Ideally, what you want is an n:m mapping of requests and threads. But there's no server out there offering that.

If you want to learn more on the topic: I gave a presentation about this at Rocky Mountain Ruby (and a ton of other conferences). A video recording can be found on confreaks.

answered Nov 19 '22 by Konstantin Haase