I'd like to include and enable some custom plug-ins in an official Docker base image of an application.
Here is what the directory structure looks like:
.
+-- Dockerfile
+-- plugins
| +-- plugin_1
| +-- plugin_2
| +-- plugin_3
| +-- ...
| +-- plugin_n
The Dockerfile looks like the following if I need to include plug-ins plugin_1, plugin_3, plugin_7 and plugin_8:
FROM myapp_officialimage
COPY plugins/plugin_1/* /usr/lib/myapp/plugins/
RUN myapp-plugins.sh enable plugin_1
COPY plugins/plugin_3/* /usr/lib/myapp/plugins/
RUN myapp-plugins.sh enable plugin_3
COPY plugins/plugin_7/* /usr/lib/myapp/plugins/
RUN myapp-plugins.sh enable plugin_7
COPY plugins/plugin_8/* /usr/lib/myapp/plugins/
RUN myapp-plugins.sh enable plugin_8
CMD ["myapp-start.sh"]
The question is: is it possible to iterate/loop over a list in order to eliminate the boilerplate above?
For example, a Dockerfile like the one below would be cleaner and more maintainable (note this is not valid Dockerfile syntax, just the idea):
FROM myapp_officialimage
ENV CUSTOM_PLUGIN_LIST="plugin_1 plugin_3 plugin_7 plugin_8"
for plugin in $CUSTOM_PLUGIN_LIST; \
do \
COPY plugins/$plugin/* /usr/lib/myapp/plugins/ \
RUN myapp-plugins.sh enable $plugin \
done
CMD ["myapp-start.sh"]
In your case I would prepare a new plugins folder together with a (possibly generated or handcrafted) script that installs them. Add the real plugins folder to .dockerignore. Then you copy in the new folder and run the install script. This also reduces the size of the context you upload to Docker before the build starts. You are hand-picking the dependencies anyway, so doing the work beforehand (before you build) should work fine.
In the build system you should do something like:
collect_plugins.sh # Creates the new plugin folder and install script
docker build <params>
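A minimal sketch of what collect_plugins.sh could look like (the plug-in names and the myapp-plugins.sh command are taken from the question; the dummy plug-in folders are created here only so the sketch is self-contained):

```shell
#!/bin/sh
# collect_plugins.sh -- prepare the build context before `docker build`:
# copy the selected plug-ins into plugins_selected/ and generate an
# install.sh that enables each of them at image-build time.
set -e

PLUGINS="plugin_1 plugin_3 plugin_7 plugin_8"

# Demo fixture only: create dummy plug-in folders standing in for the real ones.
for p in $PLUGINS; do
  mkdir -p "plugins/$p"
  touch "plugins/$p/$p.so"
done

rm -rf plugins_selected
mkdir plugins_selected

# Start the generated install script.
printf '#!/bin/sh\nset -e\n' > plugins_selected/install.sh

for p in $PLUGINS; do
  # Copy the plug-in's files into the selected folder...
  cp -r "plugins/$p/." plugins_selected/
  # ...and append the matching enable command to the install script.
  echo "myapp-plugins.sh enable $p" >> plugins_selected/install.sh
done

chmod +x plugins_selected/install.sh
```

The script runs on the build host, so the loop lives in ordinary shell rather than in the Dockerfile, which is exactly what sidesteps the "no loops in Dockerfile" limitation.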
The two layers could then be:
COPY plugins_selected/ /usr/lib/myapp/plugins/
RUN /usr/lib/myapp/plugins/install.sh
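Putting it together, the whole Dockerfile could shrink to something like this (a sketch, assuming the prepared plugins_selected/ folder from the step above):

```
FROM myapp_officialimage
COPY plugins_selected/ /usr/lib/myapp/plugins/
RUN /usr/lib/myapp/plugins/install.sh
CMD ["myapp-start.sh"]
```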
If you prepare the context you are sending to Docker, the Dockerfile becomes much simpler (a good thing). We are simply solving the problem before the build.
Normally dependencies are fetched over the network using a package manager or simply by downloading them over http(s). Copying them into the build context like you are doing is not necessarily wrong, but it gets a bit more awkward.
Let's look at how Docker processes the Dockerfile (slightly simplified).
When you build, Docker uploads all the files in the context to the Docker engine (except the paths mentioned in .dockerignore) and starts processing the Dockerfile. The purpose is to produce the filesystem layers that make up the final image.
When you do operations like RUN, Docker actually starts a container to execute the command(s) and then adds the resulting layer to the image. The only thing that will actually run in production is what you specify in CMD and/or ENTRYPOINT at the end of the Dockerfile.
Try to create as few layers as possible.
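If some enable steps do have to stay in the Dockerfile, you can still collapse them into a single layer by chaining them in one RUN (a sketch, using the command names from the question):

```
RUN myapp-plugins.sh enable plugin_1 && \
    myapp-plugins.sh enable plugin_3 && \
    myapp-plugins.sh enable plugin_7 && \
    myapp-plugins.sh enable plugin_8
```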
The Dockerfile best practices guide covers the basics.