We have an app that uses the channels package and works just fine... on localhost. As soon as we hit staging and put an nginx box in front of Django (with SSL), we can connect to the socket, but no messages are received by the client.
Nginx conf:
worker_processes auto;
error_log /dev/stdout info;
user nobody nogroup;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
    accept_mutex off;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /dev/stdout;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    gzip_disable "MSIE [1-6].(?!.*SV1)";
    gzip_vary on;

    upstream ws_server {
        server unix:/tmp/daphne.sock fail_timeout=0;
    }

    server {
        # redirect all http requests to https
        listen 80;
        listen [::]:80 ipv6only=on;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        client_max_body_size 4G;
        server_name changemyip.com;
        keepalive_timeout 5;
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets on;
        ssl_dhparam /etc/nginx/ssl/dhparam.pem;

        location /ws/ {
            try_files $uri @proxy_to_ws;
        }

        location @proxy_to_ws {
            proxy_pass http://ws_server;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Websocket specific
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $http_host;
            proxy_connect_timeout 86400;
            proxy_read_timeout 86400;
            proxy_send_timeout 86400;
        }

        ...
        ssl_protocols TLSv1.1 TLSv1.2;
        ...
        ssl_prefer_server_ciphers on;
        ssl_stapling on;
        ssl_stapling_verify on;
    }
}
Django runs under Gunicorn, and for websockets I brought up a Daphne server. I can see in the Daphne logs that my client is connecting, but still no messages from Daphne reach the client.
Daphne creates a unix socket, which nginx picks up to communicate:
daphne main.asgi:channel_layer -u /tmp/daphne.sock
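(Note: with Channels 1.x, Daphne only terminates the HTTP/WebSocket protocol; at least one worker process has to run alongside it to consume the channel layer. Assuming the project uses the standard Channels management command, that would be something like:

    python manage.py runworker

Without a running worker, clients can connect but never receive messages.)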
I had the exact same problem. I wasn't able to connect through a unix socket either, but I found a really simple way to handle the requests over a system port instead. Drawing on the following tutorials (and my experience with Gunicorn), I managed to modify their Nginx configuration file a little; I'd recommend checking them out:
Django Channels Group Pt1
Django Channels Group Pt2
# Enable upgrading of connection (and websocket proxying) depending on the
# presence of the upgrade field in the client request header
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# Create an upstream alias to where we've set daphne to bind to
upstream django_app_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name YOURDOMAIN.COM;
    client_max_body_size 4G;

    access_log /webapps/General/logs/nginx-access.log;
    error_log /webapps/General/logs/nginx-error.log;

    location /static/ {
        alias /webapps/General/DjangoProject/static/;
    }

    location /media/ {
        alias /webapps/General/DjangoProject/media/;
    }

    location / {
        if (!-f $request_filename) {
            proxy_pass http://django_app_server;
            break;
        }

        # Require http version 1.1 to allow for upgrade requests
        proxy_http_version 1.1;

        # We want proxy_buffering off for proxying to websockets
        proxy_buffering off;

        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if you use HTTPS:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client for the sake of redirects
        proxy_set_header Host $http_host;

        # We've set the Host header, so we don't need Nginx to muddle
        # about with redirects
        proxy_redirect off;

        # Depending on the request value, set the Upgrade and
        # Connection headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/General/DjangoProject/templates/;
    }
}
The websockets in my projects are working quite nicely (Groups and Channels), and all the requests are being served by Daphne, but if you REALLY need to use a socket, this configuration might actually help you.
Remember, this Nginx file lets clients reach Daphne in general, but on a production server you need to run the "Daphne Instance Server" and the "Daphne Workers" separately to be able to transmit messages through your channels.
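As a sketch, assuming Daphne is bound to the port from the config above and the module path from the question, the two kinds of process are started separately along these lines:

    # Interface server: accepts the HTTP/WebSocket connections proxied by Nginx
    daphne main.asgi:channel_layer -b 127.0.0.1 -p 8000

    # Worker(s): pull messages off the channel layer and run your consumers
    python manage.py runworker

Scaling up usually means adding more runworker processes rather than more Daphne instances.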
Check whether you will use Redis-Server or some other queue manager when serving your channels and groups. I say this because I noticed that with the "InMemory" configuration, multiple messages were lost.
Also check that your production environment is running Redis-Server as a daemon. I noticed that on several systems Redis-Server wasn't even running, yet the Django application raised no exception when the connection was refused.
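For example, a Redis-backed channel layer for Channels 1.x is configured roughly like this in settings.py (the host, port, and routing module are assumptions for a typical local setup):

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("127.0.0.1", 6379)],
            },
            "ROUTING": "main.routing.channel_routing",
        },
    }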
You need something to keep Daphne and its workers up, because even though they loop, they are not "exception resistant", so they will die when an exception is raised. Obviously I recommend Supervisor, or using your Linux init system for services.
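A minimal supervisord sketch for that (program names, paths, and the worker count are hypothetical):

    [program:daphne]
    command=daphne main.asgi:channel_layer -b 127.0.0.1 -p 8000
    directory=/webapps/General/DjangoProject
    autostart=true
    autorestart=true

    [program:channels_worker]
    command=python manage.py runworker
    directory=/webapps/General/DjangoProject
    numprocs=2
    process_name=%(program_name)s_%(process_num)d
    autostart=true
    autorestart=true

With autorestart=true, supervisord restarts a worker as soon as an unhandled exception kills it.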
I do not know whether Daphne's workers can serve static and media files when DEBUG == False, but apparently it is much better to serve them separately through the Nginx configuration anyway.
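If you serve them through Nginx as in the config above, remember that the alias directories have to be populated first:

    python manage.py collectstatic --noinput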
I still don't know the security/performance implications of using a port compared with using a socket, so it is something worth checking (read below, I found a possible bug with Daphne or my config).
I know this might be irrelevant to you by now (it has been almost a month), but maybe someone else will find this answer useful.
TL;DR: Don't deploy two Django-Daphne apps on the same server with this config, or you're gonna have a bad time.
By using this configuration I have been able to deploy Phoenix applications alongside Django applications without any kind of problem, BUT I have been having problems when deploying two or more Django applications with this type of configuration. For some reason, Daphne knows which ports it has to read to receive requests, but it just reads all of them and serves them to whoever it pleases. For example, if I have DJANGO_APP_1 and DJANGO_APP_2 running on the same server (with different Nginx configs and, obviously, different system ports), sometimes the Daphne workers of DJANGO_APP_2 will STEAL requests that are meant for DJANGO_APP_1, and vice versa. I haven't been able to pinpoint the source of the problem, but I suspect it has to do with the Daphne workers being agnostic of the project they belong to; one plausible culprit is both apps sharing the same Redis instance and database with the default channel-layer prefix, in which case the workers cannot tell whose messages are whose. (Just a theory; I don't have the time to check their code.)