I'm building a Docker image on my Raspberry Pi, which of course takes some time. The problem here is that even very simple commands in the Dockerfile, like setting an environment variable, using chmod +x on a single file, or exposing port 80, take minutes to complete.
Here is an excerpt of my Dockerfile:
FROM resin/rpi-raspbian
MAINTAINER felixbr <[email protected]>
RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python python-dev python-pip python-numpy python-scipy python-mysqldb mysql-server redis-server nginx dos2unix poppler-utils
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
WORKDIR /app
RUN cp /app/nginx-django.cfg /etc/nginx/sites-enabled/default
RUN chmod +x /app/start.sh
ENV DOCKERIZED="true"
CMD ./start.sh
EXPOSE 80
Keep in mind this is using an ARMv6 base image, so it can run on a Raspberry Pi, and I'm using Docker 1.5.0 built for the Hypriot Raspberry Pi OS.
Is it copying the built layers for every command, or why does each of the last few commands take minutes to complete?
If your Docker image build takes a long time downloading dependencies, it's a good idea to check whether you're installing more than you need. First, check whether you're downloading development dependencies that are not needed in your image at all.
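As an illustrative sketch (not part of the original answer), the apt-get step from the question's Dockerfile could skip recommended packages and clean the package cache in the same layer, which cuts down both the download time and the layer size:

RUN export DEBIAN_FRONTEND=noninteractive \
 && apt-get update \
 && apt-get install -y --no-install-recommends \
      python python-dev python-pip python-numpy python-scipy \
      python-mysqldb mysql-server redis-server nginx dos2unix poppler-utils \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*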
I'm talking about 10-15 minute build times, when it used to take only around 2 minutes tops.
If you make an image from multiple layers which modify the same files, and those files are large in each layer, squashing can reduce the total size of your image. Beware that the squashed image will consist of a single layer, so it won't be able to share layers with other images.
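A rough sketch of how squashing can be done; note that neither option existed in Docker 1.5.0. Newer Docker versions have an experimental --squash build flag (the daemon must have experimental features enabled), and on older versions a similar flattening effect can be achieved by exporting and re-importing a container, at the cost of losing image metadata such as CMD and ENV (myapp below is a placeholder image name):

# Experimental flag on newer Docker versions
docker build --squash -t myapp:squashed .

# Flatten an already-built image by round-tripping it through a container
docker create --name tmp myapp
docker export tmp | docker import - myapp:flat
docker rm tmp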
Another consideration that was not mentioned here is that on armv7, most packages you may want to install with pip or apt-get are not packaged as binaries.
That means that on an amd64 architecture, pip install downloads a prebuilt binary and merely copies it into the right place, but on armv7 it won't find a suitable binary, so it downloads the source code instead and has to build it from scratch.
When you have a package with lots of dependencies that all need to be built from source, it takes a looong time.
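One way to sidestep the longest compiles, which the Dockerfile in the question already does for numpy and scipy, is to install the heavy libraries from the Raspbian binary packages and leave only the lighter, pure-Python dependencies in requirements.txt. A sketch, using the same package names as the question:

# Prebuilt ARM packages from the distribution instead of compiling via pip
RUN apt-get update && apt-get install -y python-numpy python-scipy python-mysqldb

# requirements.txt should then omit numpy/scipy so pip has nothing left to compile
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt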
You can check what is going on during docker build by using the -v flag on pip:
pip install -v -r requirements.txt
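In the Dockerfile from the question this would just mean adding the flag to the existing pip step; with -v, the build log then shows for each package whether pip found a wheel or fell back to running setup.py and compiling:

RUN pip install -v -r /app/requirements.txt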
Each instruction of the Dockerfile will be run in a container. What it means is that for each instruction it will do the following:

- create an intermediate container from the image produced by the previous step
- run the instruction inside that container
- commit the container's filesystem as a new image layer
- remove the intermediate container (if the --rm option is specified), thus removing the container's read/write layer

There are a few I/O operations involved. On an SSD it's really quick, as it is on a good hard drive. When you build it on the Raspberry Pi, if you build it on the SD card (or MicroSD), the performance of the SD card is probably not that good. It will depend on the class of your MicroSD, and even then I don't think it's really good for the card. I tried it with a simple Node project, and it definitely took a few minutes instead of a few seconds like it did on my laptop. It is hardware related (mostly I/O for the SD card, maybe a little bit the CPU, but...).
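Two quick checks, sketched here as examples (my-image and the /tmp path are placeholders): docker history lists the layer committed for each instruction, and a simple dd run gives a rough idea of the SD card's sequential write speed:

# One layer per Dockerfile instruction, with its size
docker history my-image

# Write ~100 MB to the card and report the throughput
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fsync
rm /tmp/ddtest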
You might want to try using an external hard drive connected to the Raspberry Pi and moving the Docker folders there, to see if the performance is better.
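A sketch of how that could look, assuming the drive is mounted at /mnt/usbdrive (a made-up mount point). On Docker versions of that era the daemon's -g/--graph option sets the storage directory (typically via DOCKER_OPTS in /etc/default/docker); newer versions use "data-root" in /etc/docker/daemon.json instead:

# Stop the daemon and move the existing Docker data to the external drive
sudo service docker stop
sudo mv /var/lib/docker /mnt/usbdrive/docker

# Point the daemon at the new location, e.g. in /etc/default/docker:
#   DOCKER_OPTS="-g /mnt/usbdrive/docker"
sudo service docker start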