Deploying Django Channels to Elastic Beanstalk Python3.4 environment

I have spent the past few days implementing Channels in my Django app in order to use its websocket support. My Django project is written in Python 3.4 and I am using Daphne and Channels' Redis backend.

I have been able to get everything functioning locally by wrapping Supervisord in a Python 2 virtualenv and using it to run scripts that start Daphne/Redis/workers inside a Python 3 virtualenv, but I have had no success deploying to our Elastic Beanstalk (Python 3.4) environment.

Is there any way to set up my EB deployment configs to run Supervisor in a Python 2 virtualenv, as I can locally? If not, how can I get Daphne, Redis, and my workers up and running on EB deploy? I am willing to switch process managers if necessary, but I have found Supervisor's syntax easier to understand and implement than Circus's, and I am not aware of any other viable alternatives.

With this configuration I am able to successfully deploy to my EB environment and ssh into it, but Supervisor fails to start every process. If I try to start Supervisor manually to check on the processes, supervisorctl status gives me FATAL "Exited too quickly (process log may have details)" for everything I try to initialize, and the logs are empty.

Channels backend config:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "<app>.routing.channel_routing",
        "CONFIG": {
            "hosts": [
                os.environ.get('REDIS_URL', 'redis://localhost:6379')
            ],
        },
    },
}
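
The ROUTING key points at <app>.routing.channel_routing, which isn't shown here. For Channels 1.x that module is just a list of routes; a minimal sketch (the consumer functions are hypothetical placeholders assumed to live in <app>/consumers.py) would look something like:

# <app>/routing.py
# ws_connect/ws_receive/ws_disconnect are placeholder consumer
# functions, not part of the actual project shown here.
from channels.routing import route
from .consumers import ws_connect, ws_receive, ws_disconnect

channel_routing = [
    route("websocket.connect", ws_connect),
    route("websocket.receive", ws_receive),
    route("websocket.disconnect", ws_disconnect),
]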

asgi.py:

import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<app>.settings")

channel_layer = get_channel_layer()

Supervisor conf (the rest of the conf file was left at its defaults):

[program:Redis]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_redis.sh
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/redis.out.log

[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_daphne.sh
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log

[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_worker.sh
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log

.ebextensions/channels.config:

container_commands:
  01_start_supervisord:
    command: "sh /supervisord/start_supervisor.sh"

start_supervisor.sh:

#!/usr/bin/env bash
virtualenv -p /usr/bin/python2.7 /tmp/senv
source /tmp/senv/bin/activate
sudo pip install supervisor
sudo /usr/local/bin/supervisord -c /opt/python/current/app/<app>/supervisord.conf
supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status

start_redis.sh:

#!/usr/bin/env bash
sudo wget http://download.redis.io/releases/redis-3.2.8.tar.gz
sudo tar xzf redis-3.2.8.tar.gz
cd redis-3.2.8
sudo make
source /opt/python/run/venv/bin/activate
sudo src/redis-server

start_daphne.sh:

#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <app>.asgi:channel_layer

start_worker.sh:

#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate
python manage.py runworker

I was loosely following this guide, but since it was written for a Python 2 EB environment it is really only useful for the ALB setup and the base Supervisor configuration.
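
The ALB portion boils down to an .ebextensions option_settings block along these lines (a rough sketch, not my exact config; the process name "websocket" and rule name "ws" are arbitrary, and the port must match the one Daphne binds to):

option_settings:
  aws:elasticbeanstalk:environment:process:websocket:
    Port: "5000"
    Protocol: HTTP
  aws:elbv2:listenerrule:ws:
    PathPatterns: /ws/*
    Process: websocket
    Priority: 1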

Thank you guys for reading this, and please let me know if I can provide anything else by way of code/output etc.

asked May 12 '17 by Sam

2 Answers

So thanks to the logging advice from Berlin's answer, and a virtual environment tweak suggested by the AWS support team (I forwarded this question along to them), I was finally able to get this to work.

First off, I ended up completely removing Redis from Supervisor, and instead chose to run an ElastiCache Redis instance which I then connected to my EB instance. I don't think this is the only way to handle this, but it was the best route for my implementation.
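
Since the CHANNEL_LAYERS config above already reads REDIS_URL from the environment, pointing it at ElastiCache is just a matter of setting that environment property on the EB environment (Configuration -> Software), something like this (the cluster endpoint is a placeholder):

REDIS_URL=redis://<cluster-name>.xxxxxx.0001.<region>.cache.amazonaws.com:6379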

I then changed from using the pre-existing start_supervisor.sh script to adding a command to the channels.config ebextension that creates the script and adds it to EB's post-deployment operations. This was necessary because .ebextensions config files are run during deployment but do not live past environment creation (this may not be entirely correct, but for the sake of this solution that is how I think of them), so even though my script was mostly correct, the Supervisor process it started would die as soon as deployment was done.

So my .ebextensions/channels.config is now:

container_commands:
  01_create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      python --version > /tmp/version_check.txt
      sudo pip install supervisor
      /usr/local/bin/supervisord -c /opt/python/current/app/<app>/supervisord.conf
      supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status
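
After a deploy, you can sanity-check that the hook actually ran by ssh-ing into the instance, roughly:

eb ssh
ls /opt/elasticbeanstalk/hooks/appdeploy/post/   # the hook script should be present
cat /tmp/version_check.txt                       # should report Python 2.7.x
sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status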

This alone was enough to get Supervisor running on EB deployment, but I had to make some more changes to get Daphne and my Django workers to stay alive:

start_daphne.sh:

#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <app>.asgi:channel_layer

start_worker.sh:

#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python manage.py runworker

Adding && source /opt/python/current/env to the virtualenv activation command was suggested to me by AWS support, as environment variables are not pulled into virtualenvs automatically, which was causing Daphne and the workers to die on creation with ImportErrors.
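
(/opt/python/current/env is just a file of export statements that EB writes at deploy time from the environment properties, so sourcing it after activating the virtualenv pulls those variables into the shell. Illustrative contents only; the values are placeholders:)

export DJANGO_SETTINGS_MODULE="<app>.settings"
export REDIS_URL="redis://..."
export PYTHONPATH="/opt/python/current/app:$PYTHONPATH"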

I also made some changes to my supervisord.conf file:

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log

[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
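
With that conf in place, the processes can be managed the usual supervisorctl way, e.g.:

supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status
supervisorctl -c /opt/python/current/app/<app>/supervisord.conf restart Daphne
supervisorctl -c /opt/python/current/app/<app>/supervisord.conf restart Worker:*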

answered Sep 29 '22 by Sam

You said the logs are empty, which makes this hard to debug. Make sure you have the log lines in the main Supervisor config file /etc/supervisord.conf, then see what the errors are and share them.

[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace

Then add error logging to each program in your Supervisor conf file, see what the errors are, and share them.

command=sh /opt/python/current/app/<app>/start_redis.sh --log-file /path/to/your/logs/start_redis.log
stdout_logfile=/tmp/redis.out.log
stderr_logfile=/tmp/redis.err.log
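
Once the error logs are in place, supervisorctl can also tail them directly, which is often the quickest way to see why a process keeps exiting:

supervisorctl -c /path/to/supervisord.conf tail Redis stderr
supervisorctl -c /path/to/supervisord.conf tail -f Redis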

answered Sep 29 '22 by Berlin