I am trying to run my Django application in a Docker container with static files served from Amazon S3. When I run
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
in my Dockerfile, I get a 403 Forbidden error from Amazon S3 with the following response XML:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>RequestTimeTooSkewed</Code>
<Message>The difference between the request time and the current time is too large.</Message>
<RequestTime>Sat, 27 Dec 2014 11:47:05 GMT</RequestTime>
<ServerTime>2014-12-28T08:45:09Z</ServerTime>
<MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
<RequestId>4189D5DAF2FA6649</RequestId>
<HostId>lBAhbNfeV4C7lHdjLwcTpVVH2snd/BW18hsZEQFkxqfgrmdD5pgAJJbAP6ULArRo</HostId>
</Error>
Note that the RequestTime above is almost a full day behind the ServerTime, far beyond the 900,000 ms (15-minute) maximum allowed skew.
My Docker container is running Ubuntu 14.04, if that makes any difference. I am also running the application with uWSGI, without nginx, Apache, or any other kind of reverse-proxy server.
I also get the error at run time, when the files are being served to the site.
Attempted Solution
Other Stack Overflow questions have reported a similar error with S3 (though none specifically in conjunction with Docker). They say this error occurs when your system clock is out of sync, and can be fixed by running
sudo service ntp stop
sudo ntpd -gq
sudo service ntp start
so I added the following to my Dockerfile, but it didn't fix the problem.
RUN apt-get install -y ntp
RUN ntpd -gq
RUN service ntp start
I also attempted to sync the time on my local machine before building the Docker image, using sudo ntpd -gq, but that did not work either.
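A quick way to check whether the clock a container actually sees is skewed (a diagnostic sketch, not part of my original setup) is to compare the host's time with the time reported inside a throwaway container:
# On the host
date -u
# Inside a fresh container based on the same base image
docker run --rm ubuntu:14.04 date -u
If the two differ by more than 15 minutes, S3 will reject signed requests made during the build as well.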
Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
# Update system clock so S3 does not get 403 Error
# NOT WORKING
#RUN apt-get install -y ntp
#RUN ntpd -gq
#RUN service ntp start
RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev
RUN apt-get install -y python-software-properties uwsgi-plugin-python3
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
# Run django commands
# python3.4 is at /usr/bin/python3.4, but which works too
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py syncdb --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py makemigrations --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py migrate --noinput
EXPOSE 8000
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
Noted in the comments, but for others who come here:
If you are using boot2docker (i.e. on Windows or Mac), the boot2docker VM has a known time issue when you sleep your machine--see here. Since the host for your Docker container is the boot2docker VM, that's where it syncs its time.
I've had success restarting the boot2docker VM. This may cause problems with losing some state, e.g. if you had some data volumes.
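For example (a sketch assuming the boot2docker CLI; resyncing with ntpclient inside the VM is a commonly used workaround and is an assumption, not something stated above):
# Restart the VM so its clock resyncs
boot2docker restart
# Or resync the clock without a full restart (assumes ntpclient is available in the VM)
boot2docker ssh sudo ntpclient -s -h pool.ntp.org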
Docker containers share the clock with the host machine, so syncing your host machine's clock should solve the problem. To force the container's timezone to match the host machine's, you can add -v /etc/localtime:/etc/localtime:ro to docker run.
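For example (a sketch; ntpdate is one way to sync an Ubuntu host, and the image name vitru is only assumed from the project paths in the question):
# Sync the host clock, which the container's clock follows
sudo ntpdate pool.ntp.org
# Mount the host's timezone file read-only so container and host agree
docker run -v /etc/localtime:/etc/localtime:ro -p 8000:8000 vitru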
Anyway, you should not start a service in a Dockerfile. This file contains the steps and commands used to build the image for your containers, and any process you run during a build step ends when that step finishes. To start a service you should add a run script or a process-control daemon (such as supervisord), which will run each time you start a new container.
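For reference, here is a minimal sketch of what the supervisor-app.conf referenced in the Dockerfile might contain (the uWSGI options and the vitru.wsgi module name are assumptions, not taken from the question):
[supervisord]
nodaemon=true

[program:uwsgi]
; Start uWSGI each time a container starts, serving the Django app on port 8000
command=uwsgi --http :8000 --chdir /home/docker/code/vitru --module vitru.wsgi --plugin python3
autostart=true
autorestart=true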