 

How to get a concurrency of 1000 requests with Flask and Gunicorn [closed]

I have 4 machine learning models of 2 GB each, i.e. 8 GB in total. I am getting around 100 requests at a time, and each request takes around 1 second.
My machine has 15 GB of RAM. If I increase the number of Gunicorn workers, total memory consumption goes up, so I can't increase the number of workers beyond 2.
So I have a few questions:

  1. How can workers share the models or memory between them?
  2. Which type of worker is suitable for this situation, sync or async?
  3. How do I use Gunicorn's preload option, if it is a solution? I used it, but it didn't help; maybe I am doing it the wrong way (see the sketch after this list).
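
A minimal sketch of the preload idea from question 3, assuming the models are pickled to disk; the filenames, loader and request shape below are placeholders, not the asker's actual code. Loading the models at import time means that with --preload Gunicorn reads them once in the master process before forking, so the read-only pages can be shared between the workers via copy-on-write (Python's reference counting may still cause some pages to be copied over time).

# app.py -- hypothetical layout; model filenames, loader and request shape are placeholders
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

def _load(path):
    # Placeholder loader; replace with however the real 2 GB models are deserialized.
    with open(path, "rb") as f:
        return pickle.load(f)

# Loaded at import time: with `gunicorn --preload` this runs once in the master
# process before the workers are forked, so the memory is shared copy-on-write.
MODELS = {name: _load(name + ".pkl") for name in ("model1", "model2", "model3", "model4")}

@app.route("/predict/<name>", methods=["POST"])
def predict(name):
    features = request.get_json()["features"]  # assumed request payload shape
    return jsonify(result=MODELS[name].predict([features]).tolist())  # assumes an sklearn-like model

Run with, for example: gunicorn --preload --workers 2 app:app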

Here is the Flask code I am using:
https://github.com/rathee/learnNshare/blob/master/agent_api.py

Asked Mar 10 '16 by neel


1 Answer

Use the gevent worker (or another event loop worker), not the default worker. The default sync worker handles one request at a time per worker process. An async worker handles an unlimited number of requests per worker process, as long as each request is non-blocking.

gunicorn -k gevent myapp:app

Predictably, you need to install gevent for this: pip install gevent.
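
As a rough sketch, not part of the original answer, the event-loop worker can be combined with the preload option so the two workers share the preloaded models while each worker multiplexes many connections. The worker and connection counts below are illustrative, and agent_api:app assumes the Flask object in the linked agent_api.py is named app:

pip install gevent
gunicorn -k gevent --preload --workers 2 --worker-connections 1000 agent_api:app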

Answered Oct 16 '22 by davidism