Sharing an object between Gunicorn workers, or persisting an object within a worker

I'm writing a WSGI app using an Nginx / Gunicorn / Bottle stack that accepts a GET request, returns a simple response, and then writes a message to RabbitMQ. If I were running the app directly through Bottle, I'd be reusing the RabbitMQ connection every time the app receives a GET. However, under Gunicorn, it looks like the workers are destroying and recreating the MQ connection on every request. I was wondering if there's a good way to reuse that connection.

More detailed info:

# This is my Bottle app
from bottle import route, run
import bottle
from mqconnector import MQConnector

mqc = MQConnector(ip, exchange)  # ip and exchange are defined elsewhere

@route('/')
def index():
  try:
    mqc
  except NameError:
    # Fall back to creating a connection if one doesn't already exist
    mqc = MQConnector(ip, exchange)

  mqc.publish('whatever message')
  return 'ok'

if __name__ == '__main__':
  run(host='blah', port=808)

app = bottle.default_app()
asked Apr 18 '13 by Chris

People also ask

Do Gunicorn workers share memory?

Gunicorn also allows for each of the workers to have multiple threads. In this case, the Python application is loaded once per worker, and each of the threads spawned by the same worker shares the same memory space.
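For example, a minimal gunicorn.conf.py sketch (the bind address and counts here are purely illustrative) that runs several worker processes, each with a couple of threads:

# gunicorn.conf.py -- illustrative values only
# Each worker is a separate process with its own copy of the app;
# the threads spawned inside one worker share that worker's memory.
bind = "127.0.0.1:8000"
workers = 4
threads = 2

You would start it with something like gunicorn -c gunicorn.conf.py myapp:app, where myapp is a stand-in for whichever module exposes the app object (as the Bottle app above does with bottle.default_app()).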

Are Gunicorn workers threads or processes?

Gunicorn is based on the pre-fork worker model. This means that there is a central master process that manages a set of worker processes. The master never knows anything about individual clients. All requests and responses are handled completely by worker processes.
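A quick way to see the pre-fork model from inside the app is to return the worker's process ID from a route; repeated requests behind Gunicorn come back with different PIDs because different worker processes pick them up. A minimal sketch (the /pid route is purely illustrative):

import os
from bottle import route

@route('/pid')
def pid():
  # Each request is handled entirely by one forked worker process,
  # so the PID reported here varies from request to request.
  return str(os.getpid())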

Does Gunicorn support async?

Async with gevent or eventlet The default sync worker is appropriate for many use cases. If you need asynchronous support, Gunicorn provides workers using either gevent or eventlet.
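Selecting an async worker is just a configuration change; a sketch, assuming the gevent (or eventlet) package is installed alongside Gunicorn:

# gunicorn.conf.py -- sketch; requires the gevent package
worker_class = "gevent"   # use "eventlet" for the eventlet-based worker
workers = 4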

What is Max requests in Gunicorn?

Gunicorn's max_requests setting caps how many requests a worker handles before it is respawned. For example, with max_requests set to 1000, each worker is restarted after serving 1000 requests, which helps limit the effect of memory leaks in long-running workers.
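In configuration terms this is the max_requests setting; max_requests_jitter is usually set alongside it so all workers don't restart at the same moment. A sketch:

# gunicorn.conf.py -- worker recycling sketch
max_requests = 1000        # respawn each worker after it handles 1000 requests
max_requests_jitter = 50   # add a random offset so workers don't all restart together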


1 Answer

Okay, this took me a little while to sort out. What was happening was that every time a new request came through, Gunicorn ran my index() method and, in doing so, created a new instance of MQConnector.

The fix was to refactor MQConnector so that, rather than being a class, it was just a module of functions and module-level variables. That way, each worker refers to the same connection every time instead of creating a new MQConnector instance per request. Finally, I passed MQConnector's publish() function along to the code that needed it.

# Bottle app
from bottle import route
from blah import blahblah
import MQConnector

@route('/')
def index():
  # Pass the module-level publish function along; no new connection is created here
  blahblah(foo, bar, baz, MQConnector.publish)

and

# MQConnector module: connection state lives at module level, so it is
# created once per worker process when the module is first imported.
import pika

mq_ip = "blah"
exchange_name = "blahblah"

# One blocking connection and channel per worker, created at import time
connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_ip))
channel = connection.channel()

def publish(message, r_key):
  # Reuse the worker's existing channel instead of reconnecting per request
  channel.basic_publish(exchange=exchange_name,
                        routing_key=r_key,
                        body=message)

Results: A call that used to take 800ms now takes 4ms. I used to max out at 80 calls/second across 90 Gunicorn workers, and now I max out around 700 calls/second across 5 Gunicorn workers.

answered Sep 24 '22 by Chris