I am trying to migrate our monolithic PHP Symfony app to a somewhat more scalable solution with Docker. There is some communication between the app and RabbitMQ, and I use docker-compose
to bring all the containers up, in this case the app and the RabbitMQ server.
There is a lot of discussion around the idea that one container should spawn only one process, and the Docker best practices are somewhat vague on this point:
While this mantra has good intentions, it is not necessarily true that there should be only one operating system process per container. In addition to the fact that containers can now be spawned with an init process, some programs might spawn additional processes of their own accord.
Does it make sense to create a separate Docker container for each RabbitMQ consumer? It feels "right" and "clean" not to let the RabbitMQ server know anything about the language/tools used to process the queue. Here is what I came up with (relevant parts of docker-compose.yml):
app:
  # my php-fpm app container

rabbitmq_server:
  container_name: sf.rabbitmq_server
  build: .docker/rabbitmq
  ports:
    - "15672:15672"
    - "5672:5672"
  networks:
    - app_network

rabbitmq_consumer:
  container_name: sf.rabbit_consumer
  extends:
    service: app
  depends_on:
    - rabbitmq_server
  working_dir: /app
  command: "php bin/console rabbitmq:consumer test"
  networks:
    - app_network
I could run several consumers in the rabbitmq_consumer container using nohup or some other way of running them in the background.
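For illustration, such a backgrounded setup might look like the sketch below (the consumer names are hypothetical). Note that nohup alone would leave the container without a foreground process and it would exit immediately, so a final wait is needed to keep PID 1 alive:

rabbitmq_consumer:
  # ... as above ...
  command: >
    sh -c "php bin/console rabbitmq:consumer consumer_a &
           php bin/console rabbitmq:consumer consumer_b &
           wait"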
I guess my questions are:
Can I somehow automate adding a new consumer, so that I would not have to edit the Docker "build script" (and others, like Ansible) every time a new consumer is added in the code?
Does it make sense to separate RabbitMQ server from Consumers, or should I use the Rabbit server with consumers running in the background?
Or should they be placed in the background of the app container?
I'll share my experience, so think critically about it.
Consumers have to be run in a separate container from the web app. The consumer container runs a process manager whose responsibility is to spawn child consumer processes, restart them if they exit, reload them on a SIGUSR1 signal, and shut them down correctly on SIGTERM. If the main process exits, the whole container exits as well. You may have a restart policy for this case, like restart: always. Here's what the consume.php script looks like:
<?php
// bin/consume.php

use App\Infra\SymfonyDaemon;
// Note: ProcessBuilder was deprecated in Symfony 3.4 and removed in 4.0;
// this snippet targets the Process component < 4.0.
use Symfony\Component\Process\ProcessBuilder;

require __DIR__.'/../vendor/autoload.php';

// Builds the worker command: php bin/console enqueue:consume --setup-broker -vvv
$workerBuilder = new ProcessBuilder(['bin/console', 'enqueue:consume', '--setup-broker', '-vvv']);
$workerBuilder->setPrefix('php');
$workerBuilder->setWorkingDirectory(realpath(__DIR__.'/..'));

// SymfonyDaemon is a custom process manager that supervises the workers;
// start a pool of 3 worker processes.
$daemon = new SymfonyDaemon($workerBuilder);
$daemon->start(3);
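The original does not show App\Infra\SymfonyDaemon, so here is a minimal sketch of what such a process manager might look like, assuming the pcntl extension is available (the SIGUSR1 reload is omitted for brevity; this is an illustration, not the author's actual class):

<?php
// src/Infra/SymfonyDaemon.php -- hypothetical sketch

namespace App\Infra;

use Symfony\Component\Process\Process;
use Symfony\Component\Process\ProcessBuilder;

class SymfonyDaemon
{
    /** @var ProcessBuilder */
    private $builder;

    /** @var Process[] */
    private $workers = [];

    /** @var bool */
    private $running = true;

    public function __construct(ProcessBuilder $builder)
    {
        $this->builder = $builder;
    }

    public function start($count)
    {
        // On SIGTERM, stop supervising and shut the children down.
        pcntl_signal(SIGTERM, function () {
            $this->running = false;
            foreach ($this->workers as $worker) {
                $worker->stop(10); // SIGTERM first, SIGKILL after 10 seconds
            }
        });

        for ($i = 0; $i < $count; $i++) {
            $this->workers[$i] = $this->spawn();
        }

        // Supervise the pool: restart any worker that exits.
        while ($this->running) {
            pcntl_signal_dispatch();
            foreach ($this->workers as $i => $worker) {
                if ($this->running && !$worker->isRunning()) {
                    $this->workers[$i] = $this->spawn();
                }
            }
            usleep(500000);
        }
    }

    private function spawn()
    {
        $worker = $this->builder->getProcess();
        $worker->start();

        return $worker;
    }
}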
The container config looks like:
app_consumer:
  restart: 'always'
  entrypoint: "php bin/consume.php"
  depends_on:
    - 'rabbitmq_server'
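A side note: as long as the consumer service does not pin a fixed container_name, the pool can also be resized at deploy time without editing the compose file, for example:

# Run four copies of the consumer service
docker-compose up -d --scale app_consumer=4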
Can I somehow automate adding a new consumer, so that I would not have to edit the Docker "build script" (and others, like Ansible) every time a new consumer is added in the code?
Unfortunately, the RabbitMQ bundle's queue management leaves much to be desired. By default, you have to run a single command per queue: if you have 100 queues, you need at least 100 processes, one per queue. There is a way to configure a multi-queue consumer, but it requires a completely different setup. By the way, Enqueue handles this a lot better: you can run a single command to consume from all queues at once, and the --queue command option allows more fine-grained adjustments.
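For illustration (the queue name is hypothetical, and exact flags depend on the Enqueue version in use):

# One process consumes from every configured queue:
php bin/console enqueue:consume --setup-broker -vvv

# Or restrict it to a specific queue:
php bin/console enqueue:consume --setup-broker --queue=payments -vvv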
Does it make sense to separate RabbitMQ server from Consumers, or should I use the Rabbit server with consumers running in the background?
The RabbitMQ server should be run in a separate container. I would not suggest mixing them up in one container.
Or should they be placed in the background of the app container?
I'd suggest having at least two app containers: one runs a web server and serves HTTP requests, and another one runs the queue consumers.
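In compose terms, that split might look like the sketch below (service and image details are illustrative, building on the snippets above):

app_web:
  build: .
  # php-fpm container serving HTTP requests

app_consumer:
  build: .                # same image as app_web
  restart: 'always'
  entrypoint: "php bin/consume.php"
  depends_on:
    - rabbitmq_server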