
how to use composer with docker-compose

I am configuring a docker-compose.yml file and I want to run a PHP stack that contains Elasticsearch, Redis, Symfony, and Composer. The problem is that I don't know how to use Composer with Docker, because some features of Composer need PHP and certain PHP extensions. I don't want to build a new image and install nginx, PHP, Composer, and the PHP extensions all in it; I want to have each of them in a separate image. What I have tried so far is this:

version: '2'

services:
  nginx:
    image: tutum/nginx
    ports:
        - "80:80"
    volumes:        
        - ./nginx/default:/etc/nginx/sites-available/default
        - ./nginx/default:/etc/nginx/sites-enabled/default
        - ./logs/nginx-error.log:/var/log/nginx/error.log
        - ./logs/nginx-access.log:/var/log/nginx/access.log
        - ./app:/usr/share/nginx/html

  phpfpm:
      image: php:fpm
      ports:
          - 9000:9000
      volumes:      
          - ./app:/usr/share/nginx/html

  composer:
      image: composer/composer:php7
      command: install
      volumes: 
        - ./app:/app

  elastic2.4.4:
    image: elasticsearch:2.4.4    
    ports:
      - 9200:9200
    volumes:
      - ./esdata1:/usr/share/elasticsearch/data

  redis:
    image: redis:3.2
    ports:
      - 6379:6379

But this does not install the dependencies.
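
For reference, the composer/composer image uses /app as its working directory, so with the file above the install step can also be triggered as a one-off run instead of through docker-compose up (a minimal sketch, assuming composer.json sits at ./app/composer.json):

  composer:
      image: composer/composer:php7
      command: install
      volumes:
        - ./app:/app    # must contain composer.json
  # run on demand whenever dependencies change:
  #   docker-compose run --rm composer install
  # the vendor/ directory written into ./app is then visible to the nginx
  # and phpfpm containers through their own ./app volume mounts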

asked Jan 06 '18 by joe gates




2 Answers

I set up my docker-compose.yml file so that one container uses the composer/composer image and runs composer install inside a shared directory. All of the other containers can then access the vendor directory that Composer created. The tricky part was realizing that the composer/composer image assumes the composer.json file will be in its /app directory. I had to override this behavior by specifying my shared directory as the working_dir instead:

version: '3'

services:
  #=====================#
  # nginx proxy service #
  #=====================#
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # self-signed testing wildcard ssl certificate
      - "./certs:/certs"
      # proxy needs access to static files
      - "./site1/public:/site1/public"
      - "./site2/public:/site2/public"
      # proxy needs nginx configuration files
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy

  #===============#
  # composer.test #
  #===============#
  composer.test:
    image: composer/composer
    networks:
      - test_network
    ports:
      - "9001:9000"
    volumes:
      - "./composer:/composer"
    container_name: composer.test
    working_dir: /composer
    command: install

  #============#
  # site1.test #
  #============#
  site1.test:
    build: ./site1
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./composer:/composer"
      - "./site1:/site1"
    container_name: site1.test

  #============#
  # site2.test #
  #============#
  site2.test:
    build: ./site2
    networks:
      - test_network
    ports:
      - "9003:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./composer:/composer"
      - "./site2:/site2"
    container_name: site2.test

# networks
networks:
  test_network:

Here is how the directory structure looks:

certs
    test.crt
    test.key
composer
    composer.json
site1
    app
    public
    Dockerfile
    site1.test.conf
site2
    app
    public
    Dockerfile
    site2.test.conf
docker-compose.yml
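
One caveat with this layout, if the site containers expect vendor/ to exist at startup: docker-compose's depends_on only controls start order, it does not wait for the install command in composer.test to finish. A minimal sketch of that ordering (an addition for illustration, not part of the original file):

  site1.test:
    depends_on:
      - composer.test   # starts composer.test first, but does NOT wait for
                        # its "composer install" command to complete

In practice it is often simpler to run docker-compose run --rm composer.test install once before bringing the rest of the stack up.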
answered Oct 14 '22 by hanmari


If you look at the composer/composer:php7 Dockerfile, you will see that it is based on php:7.0-alpine and does not appear to include FPM. So you could use composer/composer:php7 as a base image and install php-fpm on top of it.

So, since you map your project into all three containers, running composer install in one of them should make the changes visible in all three.

Personally, I do not see the point in splitting PHP and nginx into two separate containers, because each depends heavily on the other, and having to map your app into both containers is a perfect illustration of that. That's why I maintain my own public build of an nginx+php Docker image. You can check it out here. There are more builds with more flavors, and they all come with Composer inside.
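
As a rough sketch of that combined approach (the image name below is just a placeholder for any nginx + php-fpm image that ships with Composer, not the author's specific build), the stack from the question collapses to something like:

version: '2'

services:
  app:
    image: my-nginx-php-image      # placeholder: nginx, php-fpm and composer in one image
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html        # project mounted once, into a single container

  elastic2.4.4:
    image: elasticsearch:2.4.4
    ports:
      - 9200:9200

  redis:
    image: redis:3.2
    ports:
      - 6379:6379

Dependencies can then be installed in place with docker-compose exec app composer install, so no separate composer container is needed.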

answered Oct 14 '22 by Alex Karshin