I'm deploying my first attempt at using django+gunicorn+nginx.
curl -XGET http://127.0.0.1:8000
works fine if I run the development server, and static files seem to be served correctly (I can view http://example.com/static/my_pic.png in my browser). Gunicorn is running under supervisor; in the shell:
supervisorctl status my_app
my_app RUNNING pid 1002, uptime 0:29:51
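A quick check, since the app is managed by supervisor: look at what gunicorn is actually writing to stdout/stderr (this assumes supervisor is capturing the program's output, which it does with its default AUTO log settings):
supervisorctl tail my_app stderr
supervisorctl tail -f my_app stdout
If gunicorn failed to bind the socket or Django raised an import error on startup, it will usually show up there.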
Here's the boilerplate script I used to start it:
#!/bin/bash
#script variables
NAME="gunicorn_myapp" # Name of process
DJANGODIR=/webapps/www/my_project # Django project directory
SOCKFILE=/webapps/www/run/gunicorn.sock # communicate using this socket
USER=app_user # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=my_project.settings # settings file
DJANGO_WSGI_MODULE=my_project.wsgi # WSGI module name
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE
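One direct test of the gunicorn/WSGI side that bypasses nginx entirely: recent curl versions (7.40+) can speak HTTP over a unix socket. A minimal sketch using the socket path from the script above (the Host header is only needed if ALLOWED_HOSTS would otherwise reject "localhost"):
curl -v --unix-socket /webapps/www/run/gunicorn.sock -H "Host: example.com" http://localhost/
If this returns the app's HTML, gunicorn and the WSGI layer are fine and the problem is on the nginx side; if it hangs or errors, the problem is in gunicorn/Django rather than nginx.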
Here's the (condensed) nginx config file:
upstream my_server {
    server unix:/webapps/www/run/gunicorn.sock fail_timeout=10s;
}

server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    access_log /webapps/www/logs/nginx-access.log;
    error_log /webapps/www/logs/nginx-error.log;

    location /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        autoindex on;
        alias /webapps/www/my_project/my_app/static/;
    }

    location /media/ {
        autoindex on;
        alias /webapps/www/my_project/my_app/media/;
    }

    location / {
        proxy_pass http://my_server;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        if (!-f $request_filename) {
            proxy_pass http://example.com;
            break;
        }
    }

    location /robots.txt {
        alias /webapps/www/my_project/my_app/static/robots.txt;
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/www/my_project/my_app/static/;
    }
}
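On the nginx side, the usual checks are to validate the config, watch the error log this config already defines, and then make a request through nginx with the right Host header (the 127.0.0.1 address assumes curl is run on the server itself):
sudo nginx -t
sudo tail -f /webapps/www/logs/nginx-error.log
curl -v -H "Host: example.com" http://127.0.0.1/
A 502 Bad Gateway from that curl, together with an error-log line about failing to connect to unix:/webapps/www/run/gunicorn.sock, means nginx found the right server block but could not reach the socket (typically permissions, a wrong path, or gunicorn not actually listening there).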
So: gunicorn is running, nginx is running... what tests should I perform (and how?) to determine whether gunicorn is handling the WSGI side properly, and whether nginx is proxying those responses through correctly?
Edit: I've narrowed the problem down to the communication between gunicorn and nginx over the unix socket. If I change gunicorn's --bind from the $SOCKFILE socket to 0.0.0.0:80
and stop nginx, then the app's pages are served from my website. The bad news is that the socket path is exactly the same in both config files, so I don't know why they aren't communicating. I suppose this means nginx isn't correctly fetching the response from gunicorn and passing it through?
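Given that the socket path matches in both files, the usual culprit is permissions: the nginx worker user must be able to read/write the socket and traverse every directory above it. A quick sketch (www-data is an assumption for the nginx worker user; substitute whatever ps shows, and --unix-socket again needs curl 7.40+):
ps aux | grep 'nginx: worker'
ls -l /webapps/www/run/gunicorn.sock
namei -l /webapps/www/run/gunicorn.sock
sudo -u www-data curl -v --unix-socket /webapps/www/run/gunicorn.sock -H "Host: example.com" http://localhost/
If that last command works as the nginx user but browsing the site still fails, the issue is in the nginx config itself; if it fails with "Permission denied", adjust ownership/permissions on the socket and its parent directories (for example by adding the nginx user to the webapps group used in the start script).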
You can verify that the Gunicorn service is running by checking its status: sudo systemctl status gunicorn (that applies when gunicorn runs as a systemd service; with the supervisor setup above, the equivalent is supervisorctl status my_app).
Nginx and Gunicorn work together: Gunicorn translates the requests it gets from Nginx into a format your web application can handle and makes sure your code is executed when needed. They make a great team; each can do something the other can't.
Although there are many HTTP proxies available, we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks.
Gunicorn implements the Web Server Gateway Interface (WSGI), which is a standard interface between web server software and web applications. Nginx is a web server. It's the public handler, more formally called the reverse proxy, for incoming requests and scales to thousands of simultaneous connections.
Go to the project directory and run gunicorn in the foreground, logging to stdout:
cd projectname
gunicorn --log-file=- projectname.wsgi:application
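Applied to the layout in the question (paths and module names taken from the start script; --log-level debug is optional but makes startup problems obvious), a sketch:
# run in the foreground so errors print straight to the terminal
cd /webapps/www/my_project
source ../bin/activate
gunicorn --log-file=- --log-level debug \
    --bind unix:/webapps/www/run/gunicorn.sock \
    my_project.wsgi:application
Stop the supervisor-managed instance first (supervisorctl stop my_app) so the two copies don't fight over the same socket file.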
and then check the service status:
sudo systemctl status gunicorn