 

Docker - Execute command after mounting a volume


I have the following Dockerfile for a PHP runtime based on the official php image.

FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        zip \
        unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache \
    && php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php \
    && php -r "unlink('composer-setup.php');" \
    && mv composer.phar /usr/local/bin/composer

I am having trouble running composer install.

I am guessing that the Dockerfile's build steps run before any volume is mounted, because I get a composer.json file not found error if I add:

...
&& mv composer.phar /usr/local/bin/composer \
&& composer install

to the above.

But adding the following property to docker-compose.yml:

command: sh -c "composer install && composer require drush/drush"

seems to terminate the container after the command finishes executing.

Is there a way to:

  • wait for a volume to become mounted
  • run composer install using the mounted composer.json file
  • have the container keep running afterwards?

asked Jul 01 '17 by Raphael Rafatpanah

1 Answer

I generally agree with Chris's answer for local development. I am going to offer an approach that combines it with a recent Docker feature and may set a path for doing both local development and eventual production deployment with the same image.

Let's start with an image that contains the code and its dependencies, built in a way that works for either local development or deployment elsewhere. Docker 17.05 (the latest release at the time of writing) added a multi-stage build feature we can take advantage of here: a first stage installs all your Composer dependencies into a folder in the build context, and the final image then copies them in without ever needing Composer itself. This might look like:

# First stage: install dependencies with the official Composer image
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts

# Second stage: the application image itself; Composer is not needed here
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        zip \
        unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache
COPY . /var/www/root
# Copy in the vendor directory produced by the first stage
COPY --from=composer /app/vendor /var/www/root/vendor

This keeps Composer out of the application image entirely: the first stage installs the dependencies in a separate context, and only the resulting vendor directory is copied into the final image.
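
For illustration, building and running the resulting self-contained image could look like the following (the image tag is just an example):

# Build the image and run it; php-fpm from the base image stays in the
# foreground, so the container keeps running with code and vendor baked in.
docker build -t my-php-app .
docker run -d my-php-app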

Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml, similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239 (a rough sketch follows below), and then run docker-compose run --rm composer install whenever you need to update dependencies locally. This keeps the dependency build inside a container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac.

The other option is to do something similar to what Chris has already suggested and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before, where I had private dependencies on GitHub that required SSH authentication (note that this assumes an SSH agent is running on the host and that $COMPOSER_HOME is set to a real directory, otherwise the corresponding volume mounts will fail):

docker run --rm --interactive --tty \
            --volume $PWD:/app:rw,cached \
            --volume $SSH_AUTH_SOCK:/ssh-auth.sock \
            --env SSH_AUTH_SOCK=/ssh-auth.sock \
            --volume $COMPOSER_HOME:/composer \
            composer:1.4 install --ignore-platform-reqs --no-scripts

To recap, the reasoning behind building the image this way and installing Composer dependencies from an external container / service:

  • Platform-specific dependencies will be built correctly for the container (Linux architecture vs. Windows or Mac).
  • No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
  • The image you build is runnable and deployable on its own, without mounting code into it. In development, you simply override /var/www/root with a local volume (see the short workflow sketch below).
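
Day to day, the development workflow then boils down to something like this (assuming the compose sketch above):

docker-compose up -d                        # app container stays up running php-fpm
docker-compose run --rm composer install    # installs into the mounted code directory
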
answered Sep 19 '22 by Andy Shinn