 

Pass Django SECRET_KEY in Environment Variable to Dockerized gunicorn

Some Background

Recently I had a problem where my Django application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out that gunicorn wasn't inheriting the environment variable, and the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to the Dockerfile CMD entry where I call gunicorn.

The Problem

I'm having trouble deciding how I should handle the SECRET_KEY in my application. I am setting it in an environment variable now; I previously had it stored in a JSON file, but that seemed less secure (correct me if I'm wrong, please).

The other part of the problem is that gunicorn doesn't normally inherit the environment variables set on the container; as I stated above, I ran into this with DJANGO_SETTINGS_MODULE. I imagine gunicorn would have the same issue with SECRET_KEY. What would be the way around this?

My Current Approach

I set SECRET_KEY in an environment variable and load it in the Django settings file. The value itself lives in a file "app-env", which contains export SECRET_KEY=<secretkey>, and the Dockerfile contains RUN source app-env to set the variable in the container.
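
Roughly, the settings file loads it like this (simplified sketch):

import os

# Fail fast with a KeyError at startup if the variable is missing
SECRET_KEY = os.environ["SECRET_KEY"]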

Follow Up Questions

Would it be better to set the SECRET_KEY environment variable with the Dockerfile ENV instruction instead of sourcing a file? Is it acceptable practice to hard-code a secret key in a Dockerfile like that (it seems to me that it isn't)?

Is there a "best practice" for handling secret keys in Dockerized applications?

I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.

Code

Here's the Dockerfile:

FROM python:3.6
LABEL maintainer [email protected]

ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test

WORKDIR /app

COPY manage.py /app/
COPY requirements/ /app/requirements/ 

RUN pip install -r $requirements

COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts

RUN source app-env

EXPOSE 8001

CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
asked Nov 30 '18 by ss7

1 Answer

I'll start with why it doesn't work as is, and then discuss the options you have to move forward:

During an image build, each RUN instruction executes in its own standalone container, and only changes to that container's filesystem write layer are captured for subsequent layers. Your source app-env command runs and exits, its exported variables die with that shell, and since nothing changed on disk the RUN line is effectively a no-op.
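
A toy fragment makes this concrete (FOO is just an illustrative name):

RUN export FOO=bar        # the variable exists only in this layer's shell
RUN echo "FOO is: $FOO"   # prints "FOO is: " because nothing persisted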

Docker allows you to specify environment variables at build time using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be okay to put a value needed for development in the Dockerfile.
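
If you did want a development-only fallback baked into the image, it would just be another ENV line, with the caveat that anyone who can pull the image can read the value (the key below is a placeholder):

ENV SECRET_KEY=dev-only-insecure-key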


Since the SECRET_KEY value may differ between environments (staging versus production), it makes sense to set that variable at runtime. For example:

docker run -d -e SECRET_KEY=supersecretkey mydjangoproject

The -e option is short for --env. There is also --env-file, which lets you pass in a whole file of variables and values. If you aren't using the docker CLI directly, your Docker client should be able to specify these as well (docker-compose, for example, lets you set both in the YAML).
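
For instance, a hypothetical secrets.env file could hold the key and be handed to the container at run time:

# secrets.env holds plain VAR=value lines (no "export" keyword), e.g.
#   SECRET_KEY=supersecretkey
docker run -d --env-file secrets.env mydjangoproject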


In this specific case, since you have something inside the container that knows what variables are needed, you can call that at runtime. There are two ways to accomplish this. The first is to change the CMD to this:

CMD . ./app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application

This uses the shell form of CMD rather than the exec form, which means the entire argument to CMD is run inside /bin/sh -c "...". Note the . ./app-env in place of source app-env: /bin/sh on Debian-based images such as python:3.6 is dash, which has no source builtin, and the POSIX . command searches PATH rather than the current directory, hence the explicit ./ prefix.

The shell will handle sourcing app-env and then running your gunicorn command.

If you ever needed to change the command at runtime, you'd have to remember to prepend . ./app-env && where needed, which brings me to the other approach: using an ENTRYPOINT script.


The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:

#!/bin/bash
# Source app-env from its known location, then hand control to whatever
# command was passed to the container.
cd /app && source app-env && cd - && exec "$@"

This will explicitly cd to the directory where app-env lives, source it, cd back to the previous working directory, and then exec the given command. With this in place, you can override both the command and the working directory at runtime and still have the variables from app-env active. To use the script, ADD it somewhere in your image, make sure it is executable, and point the ENTRYPOINT instruction at it:

ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

With the entrypoint strategy, you can leave your CMD as-is without changing it.
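
As a quick sanity check (reusing the image name from the run example above), you can override the command and confirm that the entrypoint sourced app-env:

# The ENTRYPOINT still runs before the overridden command, so variables
# exported in app-env will show up in the output of env
docker run --rm mydjangoproject env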

answered Sep 20 '22 by programmerq