I created a Django application which runs inside a Docker container. I needed to run a background task inside the Django application, so I used Celery, with Redis as the Celery broker. If I install Redis in the Docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection to port 6379 is refused. If I start Redis manually, it works.
If I start the Redis server with the following command, it hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?
The Docker "last command" (CMD) is already used for:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
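For example, with a user-defined bridge network, containers can reach each other by container name (the network and container names below are only placeholders, not taken from the question):
docker network create app-net
docker run -d --name redis --network app-net redis
docker run -d --name web --network app-net myapp uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi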
Use a process manager which can run multiple processes: You can set the container's entrypoint to a specialised program which is capable of running and managing multiple processes. One example of this is supervisord. You can use supervisord as your container entrypoint, which will then load the services that you need.
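As a rough sketch (the file name and paths below are assumptions, not from the question), you could drop a conf.d file with one program section per service and set supervisord as the container command. In, say, app.conf:
[program:redis]
command=redis-server
[program:uwsgi]
command=uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
And in the Dockerfile:
RUN apt-get update && apt-get -y install supervisor
COPY app.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
The -n flag keeps supervisord in the foreground, so it stays the container's top level process and the container keeps running as long as supervisord does.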
To connect to a Redis instance from another Docker container, add --link [Redis container name or ID]:redis to that container's docker run command. To connect to a Redis instance from another Docker container with a command-line interface, link the container and specify the host and port with -h redis -p 6379.
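For instance, with a Redis container started from the official image (the container names below are placeholders):
docker run -d --name some-redis redis
docker run -d --link some-redis:redis myapp uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
docker run -it --link some-redis:redis redis redis-cli -h redis -p 6379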
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker Compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed under the DNS names "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
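In the Django settings that would mean pointing the Celery broker URL at the "redis" hostname. A sketch (the exact setting name depends on your Celery version, and the database number is only an example):
# settings.py -- older Celery versions use BROKER_URL, newer ones use CELERY_BROKER_URL
BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'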
There is a lot of good info in the Docker Compose docs.
RUN commands only add new image layers; they are not executed at runtime, only at build time of the image. Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
# start Redis in the background
nohup redis-server &
# run uwsgi in the foreground as the container's main process
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
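For this to work, the script also has to be copied into the image and made executable, for example with something like this in the Dockerfile (the /usr/local/bin path is just an assumption, chosen so that CMD start.sh can find the script on the PATH):
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD start.sh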