
Setting up a tornado web service in production with nginx reverse proxy

Tags:

nginx

tornado

I have been developing a web service in tornado for the last few months, in my test environment to run the service I use:

python index.py

index.py is my tornado application handler that listens on port 8001. I then request from the web service using http://localhost:8001. I am now deploying my test environment to a staging environment that should mirror production. How do I go about running tornado in production? I'm guessing I need to create some sort of daemon for the application but I have no idea where to start!
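For context, a minimal index.py of the kind described might look roughly like this (a sketch only; the question does not show the actual handlers):

    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello from the service")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/", MainHandler)])
        app.listen(8001)  # the dev/test port mentioned in the question
        tornado.ioloop.IOLoop.current().start()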

asked Feb 07 '13 by S-K'


People also ask

Can we use Nginx as reverse proxy?

The benefits of using Nginx as a reverse proxy include: Clients access all backend resources through a single web address. The reverse proxy can serve static content, which reduces the load on application servers such as Express, Tomcat or WebSphere.
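For example, a location block like the following (paths are illustrative, not from the original answer) lets NGINX serve static assets directly instead of proxying those requests to the application:

    location /static/ {
        root /var/www/myapp;   # /static/foo.css is served from /var/www/myapp/static/foo.css
        expires 30d;           # let clients cache static assets
    }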

Is Nginx a web server or reverse proxy?

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption.

When you configure Nginx as a reverse proxy for Apache, do both listen on the same port?

You do that by configuring NGINX as a reverse proxy for Apache. With this setup, NGINX will listen for all incoming requests on port 80 and pass them on to Apache, which is listening on port 8080.
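A minimal sketch of that arrangement (addresses and ports as described above, server_name is a placeholder):

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # Apache listening on 8080
            proxy_set_header Host $host;
        }
    }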


1 Answer

There are a few tools you can use.

First, Supervisord

Supervisord is a "process control system": you configure your processes and let Supervisor manage them. It will restart them if they fail, keep them running in the background, and make managing them much easier.
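A typical workflow, assuming Supervisor is installed and set up to include files from /etc/supervisor/conf.d/ (that path is an assumption and varies by distro), is to drop a config file like the one shown below into that directory and reload:

    sudo supervisorctl reread               # pick up the new config file
    sudo supervisorctl update               # start the newly added program group
    sudo supervisorctl status               # check that the processes are running
    sudo supervisorctl restart myprogram:*  # restart every process in the group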

Here's a sample supervisor config file

[program:myprogram]
process_name=MYPROGRAM%(process_num)s
directory=/var/www/apps/myapp
command=/var/www/apps/myapp/virtualenv/bin/python index.py --PORT=%(process_num)s
startsecs=2
user=youruser
stdout_logfile=/var/log/myapp/out-%(process_num)s.log
stderr_logfile=/var/log/myapp/err-%(process_num)s.log
numprocs=4
numprocs_start=14001

With that config, Supervisor will start four instances (numprocs) of index.py on ports 14001-14004 (starting from numprocs_start). We pass --PORT=%(process_num)s so that each process starts on a different port. You should change numprocs and numprocs_start to suit your environment and hardware. As a rule of thumb we run two processes per CPU core (so a quad-core machine would run 8 processes), but that can vary hugely depending on what your processes do and how much blocking there is in your code.
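Note that this assumes index.py actually reads a --PORT command-line flag; if it doesn't yet, tornado.options is one way to wire it up (a sketch; make_app is a hypothetical factory for your Application):

    import tornado.ioloop
    from tornado.options import define, options, parse_command_line

    define("PORT", default=8001, type=int, help="port to listen on")

    if __name__ == "__main__":
        parse_command_line()    # picks up --PORT=14001 etc. from Supervisor
        app = make_app()        # hypothetical: build your tornado.web.Application here
        app.listen(options.PORT)
        tornado.ioloop.IOLoop.current().start()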

Next, configure NGINX to forward requests to your site

    upstream myappbackend {
        server 127.0.0.1:14001  max_fails=3  fail_timeout=1s;
        server 127.0.0.1:14002  max_fails=3  fail_timeout=1s;
        server 127.0.0.1:14003  max_fails=3  fail_timeout=1s;
        server 127.0.0.1:14004  max_fails=3  fail_timeout=1s;
    }

    server {
        listen          4.5.6.7:80;
        server_name     example.com;

        access_log      /var/log/nginx/myapp.log  main;

        location / {
            proxy_set_header    Host        $host;
            proxy_set_header    X-Real-Ip   $remote_addr;
            proxy_pass          http://myappbackend/;
        }
    }

That config should be modified depending on your application and the way it works. It is a very minimal configuration and will almost certainly need expanding on, but it should be enough to get you started.
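One expansion worth mentioning: the config above sets the X-Real-Ip header, but Tornado will still report 127.0.0.1 as the client address unless the HTTPServer is started with xheaders=True, along these lines (app and options.PORT as in the earlier sketch):

    import tornado.httpserver
    import tornado.ioloop

    # xheaders=True makes Tornado use X-Real-Ip / X-Forwarded-For for request.remote_ip
    server = tornado.httpserver.HTTPServer(app, xheaders=True)
    server.listen(options.PORT)
    tornado.ioloop.IOLoop.current().start()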

answered Oct 14 '22 by Smudge