 

Self-contained Docker image with Laravel app (no shared volume)

There are at least a dozen tutorials on the web about how to set up a Laravel app with Docker. The basic setup they all use is three Docker containers:

  • nginx container
  • php-fpm container
  • mysql container

The Nginx and PHP-FPM containers rely on a shared volume. An HTTP request comes into Nginx for a file on the shared volume, and Nginx hands the request off to PHP-FPM. PHP-FPM also has access to the files in the shared volume, so it can run the scripts.
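
A minimal docker-compose sketch of that development setup might look something like this (service names, image tags, and paths are illustrative assumptions, not taken from any particular tutorial):

```yaml
# Hypothetical development setup: nginx and php-fpm share the app code
# through a bind mount from the host, so edits show up immediately.
services:
  nginx:
    image: nginx:1.25
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html                        # same host directory as php-fpm
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
  php:
    image: php:8.2-fpm
    volumes:
      - ./src:/var/www/html                        # php-fpm runs the same scripts
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret                  # example value only
```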

For development, this is fantastic. I can edit the files in the shared volume and immediately test the changes. But I'm questioning whether I want this for production. Do I actually want any of my code to be on the server running Docker? This seems to defeat some of the purposes of Dockerising it in the first place. It seems like I would want the code to be self-contained inside a Docker container running both nginx AND PHP-fpm (database can be a separate container or service in the hosted environment).
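
For the "self-contained" production idea described above, a hedged Dockerfile sketch could look like this; the base image, extension, and paths are assumptions for illustration only:

```dockerfile
# Hypothetical production image: the application code is copied into the
# image at build time, so no host directory or shared volume is required.
FROM php:8.2-fpm

# PHP extensions commonly needed by Laravel (adjust to your app)
RUN docker-php-ext-install pdo_mysql

# Bake the application into the image
COPY . /var/www/html
WORKDIR /var/www/html
```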

Is my thinking correct here? What is considered to be the best practice for deploying Laravel in Docker for production?

asked Apr 02 '20 by Bintz



1 Answer

You are missing a quite important fact here: on a production server, volumes are not supposed to be bind mounts. They will mostly be "normal" Docker-managed volumes, and one of their purposes is indeed to share data between containers.

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:

  • Volumes are easier to back up or migrate than bind mounts.
  • You can manage volumes using Docker CLI commands or the Docker API.
  • Volumes work on both Linux and Windows containers.
  • Volumes can be more safely shared among multiple containers.
  • Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
  • New volumes can have their content pre-populated by a container.
  • Volumes on Docker Desktop have much higher performance than bind mounts from Mac and Windows hosts.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

(Diagram: Docker volume types)

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.

Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes.

Source: https://docs.docker.com/storage/volumes/, emphasis mine

So, as you can see, even Docker's own documentation advises using volumes rather than persisting data inside the container itself in most cases.
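
To make the distinction concrete, here is a hedged compose fragment using a named, Docker-managed volume instead of a bind mount; the image and volume names are assumptions:

```yaml
# Hypothetical: a named volume is declared once and managed by Docker,
# unlike a bind mount, which maps a specific host directory.
services:
  php:
    image: my-laravel-php-fpm     # assumed image name
    volumes:
      - app-code:/var/www/html    # named volume, managed by Docker
      # - ./src:/var/www/html     # bind mount (typical for development)

volumes:
  app-code:
```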

Having a single container bundle NGINX and PHP would also defeat the idea of containerization:

It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.

Source: https://docs.docker.com/config/containers/multi-service_container/

Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.

Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule. For example, not only can containers be spawned with an init process, some programs might spawn additional processes of their own accord. For instance, Celery can spawn multiple worker processes, and Apache can create one process per request.

Use your best judgment to keep containers as clean and modular as possible. If containers depend on each other, you can use Docker container networks to ensure that these containers can communicate.

Source: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#decouple-applications
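
To illustrate the "connect multiple containers using user-defined networks" point from the quote above, here is a hedged CLI sketch; the container, network, and image names are assumptions:

```sh
# Hypothetical: each concern runs in its own container, connected by a
# user-defined bridge network so nginx can reach php-fpm by name.
docker network create laravel-net
docker run -d --name php   --network laravel-net my-laravel-php-fpm
docker run -d --name nginx --network laravel-net -p 80:80 my-laravel-nginx
# Inside the nginx config, fastcgi_pass can then target "php:9000".
```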


In your specific use case, bundling NGINX and PHP-FPM into one container would also defeat the possibility of scaling NGINX independently of PHP-FPM. For example, if you are doing reverse-proxy caching at the NGINX level, you will quickly need far more replicas of the NGINX container than of the PHP container.

So keeping them separate will serve your horizontal-scaling needs and is closer to the principles of application containerization.
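
With separate services, independent scaling is then a one-liner; a hedged sketch, assuming Compose service names of nginx and php:

```sh
# Hypothetical: run more nginx replicas than php-fpm replicas.
# Note: a service that publishes a fixed host port cannot simply be
# scaled; a load balancer or port range would be needed in front.
docker compose up -d --scale nginx=3 --scale php=1
```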

One question I would ask myself in this kind of situation is:

If I were to bundle those processes into one container, would I then need to ship some sort of process-control system, such as supervisord, inside that container?

If the answer is yes, then I would seriously ask myself whether I am defeating the purpose of containerizing my application.


Related questions:

  • Sharing volume between Docker containers
  • docker compose volume type - bind vs volume
answered Nov 02 '22 by β.εηοιτ.βε