 

Multiple server processes using nginx and uWSGI

Tags:

nginx

uwsgi

I've noticed that you can start multiple processes within one uWSGI instance behind nginx:

uwsgi --processes 4 --socket /tmp/uwsgi.sock

Or you can start multiple uWSGI instances on different sockets and load balance between them using nginx:

upstream my_servers {
    server unix:///tmp/uwsgi1.sock;
    server unix:///tmp/uwsgi2.sock;
    #...
}
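
For reference, a minimal sketch of how nginx might be pointed at either setup (the location block, the uwsgi_params include and the listen port are assumptions, not part of the original question). On the nginx side the only difference between the two strategies is the uwsgi_pass target:

server {
    listen 80;

    location / {
        include uwsgi_params;
        # first strategy: pass directly to the single instance's socket
        uwsgi_pass unix:/tmp/uwsgi.sock;
        # second strategy: pass to the upstream group instead
        #uwsgi_pass my_servers;
    }
}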

What is the difference between these two strategies, and is one preferred over the other?

How does load balancing done by nginx (in the first case) differ from load balancing done by uWSGI (in the second case)?

nginx can front servers on multiple hosts. Can uWSGI do this within a single instance? Do certain uWSGI features only work within a single uWSGI instance (e.g. shared memory/cache)? If so, it might be difficult to scale from the first approach to the second one...

asked Apr 12 '14 by user202987

People also ask

Is Nginx required for uWSGI?

Can I then ditch NGINX? uWSGI could be used as a standalone web server in production, but that is not its intended use. It may sound odd, but uWSGI was always supposed to be a go-between for a full-featured web server like NGINX and your Python application.
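
As a concrete illustration (the port, module file and worker counts are placeholders, not from the quoted answer), uWSGI can serve HTTP by itself via its built-in --http router, although the usual production setup still puts nginx in front:

# standalone mode: uWSGI speaks HTTP directly on port 8080
uwsgi --http :8080 --wsgi-file app.py --master --processes 4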

How many processes does uWSGI have?

The threads option tells uWSGI to start the application in prethreaded mode, launching it across multiple threads within each process; with four processes and two threads each, that gives effectively eight workers.
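
For example (the flag values and WSGI file are illustrative), --processes and --threads multiply, so this command starts 4 processes with 2 threads each:

# 4 processes x 2 threads = 8 concurrent workers
uwsgi --processes 4 --threads 2 --socket /tmp/uwsgi.sock --wsgi-file app.py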

How does Nginx uWSGI work?

Nginx implements a uwsgi proxying mechanism: uwsgi is a fast binary protocol that uWSGI can use to talk to other servers. The uwsgi protocol is actually uWSGI's default protocol, so simply by omitting a protocol specification, it will fall back to uwsgi.
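
To illustrate the protocol choice (socket paths, port and WSGI file are assumptions): --socket speaks the binary uwsgi protocol, which pairs with nginx's uwsgi_pass, while --http-socket makes uWSGI speak plain HTTP on the socket instead:

# binary uwsgi protocol, proxied by nginx's uwsgi_pass
uwsgi --socket /tmp/uwsgi.sock --wsgi-file app.py

# plain HTTP on the socket, for proxy_pass or direct access
uwsgi --http-socket :8080 --wsgi-file app.py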


1 Answer

The difference is that in the uWSGI case there is no "real" load balancing: the first free process will always respond, so this approach is much better than having nginx load balance between multiple instances (this is obviously true only for local instances). What you need to take into account is the "thundering herd problem"; its implications are explained here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html.
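
A minimal sketch of the mitigation discussed in that article (assuming a uWSGI build where --thunder-lock is available), which serializes accept() across the workers of one instance:

# serialize accept() between workers to avoid the thundering herd
uwsgi --processes 4 --socket /tmp/uwsgi.sock --thunder-lock --wsgi-file app.py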

Finally, all of the uWSGI features are multiprocess/multithread (and greenthread) aware, so caching (for example) is shared by all processes.
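
For example (the cache name, key and item count are placeholders), a cache declared on the command line is visible from every worker process through the uwsgi Python module:

# declare a shared cache available to all workers
uwsgi --processes 4 --socket /tmp/uwsgi.sock --cache2 name=mycache,items=100 --wsgi-file app.py

# inside the application: a value set by one worker can be read by any other
import uwsgi
uwsgi.cache_set('greeting', b'hello', 0, 'mycache')
value = uwsgi.cache_get('greeting', 'mycache')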

answered Nov 02 '22 by roberto