Using Symfony in Docker Environment for Production

I am looking to deploy a Symfony application on Docker using Docker Compose. I will have at least the following containers:

  • Nginx
  • RabbitMQ server
  • PHP-FPM
  • MySQL
  • Solr

Currently we have a development environment using the above setup too.

The Symfony application is stored locally on the host and mounted as a volume into the PHP-FPM container so that it can read the application code - this works well. We bash into the PHP-FPM container to run composer and app/console commands.

We also manually run the consumers (Symfony console commands) that consume messages from the RabbitMQ server.
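
To give an idea of the current setup, the dev docker-compose.yml boils down to something like the following sketch (the image tags, paths and service names here are illustrative, not our exact configuration):

nginx:
  image: nginx:1.11
  ports:
    - "80:80"
  volumes:
    - ./symfony:/var/www/html
  links:
    - php

php:
  image: php:7.0-fpm
  volumes:
    - ./symfony:/var/www/html          # application mounted from the host

rabbitmq:
  image: rabbitmq:3-management

mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=root

solr:
  image: solr:6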

What are my options in production?

1) Can I create a single container holding the application and then allow other containers to use it? I see that the PHP-FPM container needs access to the application code, but I would also like to create a container that runs a consumer, passing in the name of the consumer to run - meaning I can have a single image that can be flexibly launched to process messages from any queue (see the sketch after these options). What happens with logs / cache in this option?

2) Store the application within each image that needs it? This is my least favourite option, since updating the application then means rebuilding every image.

3) Something I haven't yet explored?
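
To illustrate option 1, a consumer could be a service in docker-compose.yml that reuses the application image but runs a console command instead of PHP-FPM - a rough sketch only, assuming the RabbitMqBundle rabbitmq:consumer command and a hypothetical myapp-php image:

consumer:
  image: myapp-php                     # same image as the php-fpm service (hypothetical name)
  volumes:
    - ./symfony:/var/www/html
  command: php app/console rabbitmq:consumer my_consumer

The same image could then be launched against another queue with something like "docker-compose run --rm consumer php app/console rabbitmq:consumer other_consumer".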

I would like to allow easy updates to the application (something scripted, perhaps), but I would also like to minimise downtime - I could do that using HAProxy or something similar. Has anyone else got experience with running a multi-container Symfony application in production?

Manse asked Mar 21 '17

1 Answer

I run a container for each service. Remember that one of the Docker principles is "separation of concerns".

You may run Nginx + PHP-FPM in the same container, though.

To launch all the services (in a dev or prod environment) you can use docker-compose and the magic "SYMFONY_ENV=dev" environment variable. I suggest launching the consumers in a separate container, possibly with different project / log / cache paths. Consider that in production the consumers may affect online performance if they run with shared CPU / memory / disk.
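
For example - only a sketch, the real service definitions will have more in them - the environment variable and separate log paths can be set per service in docker-compose.yml:

php_fpm:
  image: ...
  environment:
    - SYMFONY_ENV=dev                  # switch to "prod" for the production stack
  volumes:
    - /var/log/appname/fpm:/var/log/symfony            # fpm logs on the host

consumers:
  image: ...
  environment:
    - SYMFONY_ENV=dev
  volumes:
    - /var/log/appname/consumers:/var/log/symfony      # consumer logs kept separate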

I am currently investigating alternatives for deploying / post-deploying the webapp; the (suboptimal) solution for now is a simple entrypoint bash script (passed as "docker run -d myimage php_entrypoint.sh") that:

  1. prepares the environment
  2. downloads and updates vendors
  3. syncs resources to the CDN, updates the DB schema, etc.
  4. runs the application server (PHP-FPM in this case; I use supervisord to do the task)

It results in something like this:

# $OPTIMIZE is an ENV-propagated or a calculated variable

# install/update vendors as the "webmgr" user
su -c "php composer.phar install $OPTIMIZE" webmgr

# pick the .htaccess that matches the current environment
cp -f web/HTACCESS_${SYMFONY_ENV} web/.htaccess

# hand over to supervisord, which supervises php-fpm and/or the consumers
/usr/bin/supervisord -c /etc/supervisord/supervisord.conf

The reason why I am using supervisord is that I can copy/mount just the [program:] sections that I need to run, thus maintaining a single PHP image that is good both for php-fpm and CLI/consumer work. I can also restart the PHP appserver without killing the container. Moreover, supervisord is quite clever at managing "daemonized" processes.
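
As a sketch of that idea (directory and image names here are only an example), two services can share one image and differ only in which supervisord configuration - and therefore which [program:] sections - gets mounted:

webapp_fpm:
  image: myapp-php                                          # hypothetical shared PHP image
  volumes:
    - ./docker-conf/supervisord-fpm:/etc/supervisord        # contains only [program:php-fpm]
  entrypoint: "/bin/bash php_entrypoint.sh"

webapp_consumers:
  image: myapp-php                                          # same image, different programs
  volumes:
    - ./docker-conf/supervisord-consumers:/etc/supervisord  # contains only the consumer [program:] sections
  entrypoint: "/bin/bash php_entrypoint.sh"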

UPDATED

The webapp is mounted as a volume, and docker-compose.yml is in the project root directory, which contains the Docker image configurations and the Symfony project. This is an excerpt of docker-compose.yml:

webapp_fpm:
  image: ...                                      # the shared PHP image (FPM + CLI)
  volumes:
    - ./symfony:/var/www/html                     # the Symfony project from the host
    - ./docker-conf/supervisord:/etc/supervisord  # the [program:] sections to run
    - /var/log/appname/symfony:/var/log/symfony   # Symfony logs kept on the host
  entrypoint: "/bin/bash php_entrypoint.sh"
cernio answered Sep 22 '22