
What is the concurrency model in OpenResty?

I am having a hard time wrapping my head around OpenResty's (or nginx's) concurrency model. I read the Lua variable scope section of the documentation, which explains the lifetime of variables, but it says nothing about concurrent access to them.

It is difficult to explain in words, so let me try to explain with code. Imagine that I have this Lua module:

-- counter.lua
local counter = {count = 0}

function counter.incr(amount)
  counter.count = counter.count + (amount or 1)
end

return counter

And then I use it in openresty like this:

server {
  location /incr {
    content_by_lua '
      local counter = require "counter"
      counter.incr(1)
    ';
  }
  location /decr {
    content_by_lua '
      local counter = require "counter"
      counter.incr(-1)
    ';
  }
  location /count {
    content_by_lua '
      local counter = require "counter"
      ngx.say(counter.count)
    ';
  }
}

I want to understand the concurrency model so I can answer these questions:

  • If I do 10 concurrent calls to /incr, and later on I call /count, can I be sure that the result will be 10 (I assume not, but why)?
  • If I do 10 concurrent calls to /incr and at the same time I do another 10 to /decr, can I be sure that /count will return 0?
  • How does the number of workers influence the results?
  • How does the phase in which the code happens (e.g. init_by_lua instead of content_by_lua) influence the results?
asked Jan 17 '14 by kikito


1 Answer

nginx uses an event-based architecture, which means it runs a single thread[1] with an event loop that handles sockets as they become ready for reading or writing. Requests are therefore not handled truly at the same time; instead, several requests are handled one by one, quickly interleaved, although an individual request may still be delayed if it has to wait on socket/IO operations.
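To make that interleaving concrete, here is a minimal sketch (not from the original answer; the /slow and /fast locations are made up for illustration). ngx.sleep yields back to the event loop, so a request sleeping in /slow does not stop the same worker from answering /fast in the meantime:

server {
  location /slow {
    content_by_lua '
      ngx.sleep(5)            -- non-blocking: yields to the event loop
      ngx.say("slow done")
    ';
  }
  location /fast {
    content_by_lua '
      ngx.say("fast done")    -- served immediately, even while /slow sleeps
    ';
  }
}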

If I do 10 concurrent calls to /incr, and later on I call /count, can I be sure that the result will be 10 (I assume not, but why)?

Yes, as long as /count is called after all the /incr requests have finished (and everything is handled by a single worker; see the question about workers below), the result will be 10. If, say, 9 requests have finished but the 10th was delayed by the sender, and /count is processed before nginx handles that 10th request, you will get 9 as the result.

If I do 10 concurrent calls to /incr and at the same time I do another 10 to /decr, can I be sure that /count will return 0?

Yes (again, within a single worker), although the order in which the requests are handled is not guaranteed. Note that in this case you don't need to lock your state or use semaphores or anything like that: each handler here runs to completion without yielding, so it cannot be interleaved with another request. You may run into trouble if you have I/O calls (or other yielding operations) between reading the state and writing it back, as a different request can be handled during that window, but this is not what your example does.
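To illustrate that hazard, here is a sketch (not part of the original answer; the /unsafe-incr location and the ngx.sleep call are made up to force a yield). The handler reads the counter, yields to the event loop, and only then writes back, so two overlapping requests can both read the same old value and one increment is lost:

location /unsafe-incr {
  content_by_lua '
    local counter = require "counter"
    local seen = counter.count   -- read the shared state
    ngx.sleep(0.1)               -- yields: another request may run now
    counter.count = seen + 1     -- writes back a possibly stale value
    ngx.say(counter.count)
  ';
}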

How does the number of workers influence the results?

A Lua VM instance is shared between all the requests handled by the same worker process, but each worker has its own instance, so with multiple workers you are not going to get the same results. All your /incr requests may go to one worker, while your /count request goes to a different worker whose Lua instance (still) has count set to 0. If you need to share data between workers, you probably need to use something like lua_shared_dict. See also the data sharing section of the documentation for other options.
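A minimal sketch of the lua_shared_dict approach (not from the original answer; the zone name counters and the 1m size are arbitrary). Operations on ngx.shared.DICT such as incr and get are atomic and visible to all workers:

# in the http block
lua_shared_dict counters 1m;

server {
  location /incr {
    content_by_lua '
      local counters = ngx.shared.counters
      -- incr is atomic across workers; the init argument (0) needs a
      -- reasonably recent lua-nginx-module, otherwise create the key first
      local newval, err = counters:incr("count", 1, 0)
      ngx.say(newval or err)
    ';
  }
  location /count {
    content_by_lua '
      ngx.say(ngx.shared.counters:get("count") or 0)
    ';
  }
}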

How does the phase in which the code happens (i.e. init_by_lua instead of content_by_lua) influence the results?

init_by_lua is only executed when the master process loads the config file, so each worker starts with a copy of whatever state was set up there; changes a worker makes afterwards are not visible to the other workers.
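For example (a sketch, not from the original answer), pre-loading the counter module in init_by_lua runs its body once, at config load time, and every worker then forks with its own copy of that state:

# in the http block
init_by_lua '
  local counter = require "counter"
  counter.incr(100)   -- every worker starts with count == 100 in its own copy
';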

[1] I'm oversimplifying: as far as I remember, nginx can fork multiple worker processes to take advantage of multi-core systems, as well as in some other cases.

answered by Paul Kulchenko