The situation:
I have a Selenium app (in Python) which logs in to an account on a website in order to download several CSV files. To run it, I use Docker (and docker-compose). Here's my docker-compose.yml file:
version: '3'
services:
  selenium:
    build:
      context: .
      dockerfile: compose/selenium/Dockerfile
    ports:
      - "4444:4444"
    volumes:
      - /dev/shm:/dev/shm
      - download-folder:/home/seluser/downloads
  enma:
    build:
      context: .
      dockerfile: compose/enma_daio/Dockerfile
    depends_on:
      - selenium
    volumes:
      - download-folder:/data/selenium-downloads
    env_file:
      - .env
    restart: always
volumes:
  download-folder:
My Selenium Dockerfile is just a way to create the downloads folder on top of the official Selenium Docker image:
FROM selenium/standalone-chrome
RUN mkdir -p /home/seluser/downloads
To run my task I use:
docker-compose run -d enma daio arg0 arg1 arg2
By the way, I also use an entrypoint.sh:
#!/bin/bash
set -e
cd /app

# Selenium takes a bit of time before being up, so we wait until we can reach it
function selenium_ready() {
    curl selenium:4444 &>/dev/null
}

until selenium_ready; do
    >&2 echo "Waiting for selenium..."
    sleep 1
done

if [ "$1" = 'daio' ]; then
    shift
    # "$@" is quoted so arguments containing spaces reach enma.py intact
    exec python enma.py "$@"
fi

exec "$@"
The problem:
When I run multiple instances at the same time (on different accounts on the same website), they share the same selenium container and therefore the same volume. All downloaded files are mixed together and I can't tell which file comes from which run.
What I would like to do:
I would like to create a new selenium container every time I run a new task, or find another way to use different volumes.
Compose uses the project name to create unique identifiers for all of a project's containers and other resources. To run multiple copies of a project, set a custom project name using the -p command line option or the COMPOSE_PROJECT_NAME environment variable.
This sounds like you should pass the --project-name (or -p) flag to docker-compose when doing docker-compose run.
By default, docker-compose creates volume and container names based on your project name, which defaults to the name of the current directory. So in your case you will have a volume named <cwd>_download-folder, with container names <cwd>_selenium and <cwd>_enma.
If you want new volumes and a new selenium container created on each docker-compose run, you just need to override the project name.
So if you do
$ docker-compose -p name1 run -d enma daio arg0 arg1 arg2
$ docker-compose -p name2 run -d enma daio arg0 arg1 arg2
You will end up with two volumes and four containers, which seems to suit your needs: the enma containers will no longer share the same volume.
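If you want that to happen automatically, a small wrapper script can generate a unique project name per run. This is just a sketch; the script name run_daio.sh and the naming scheme are my own, not part of your setup:

#!/bin/bash
# run_daio.sh -- hypothetical wrapper: each run gets its own Compose project,
# and therefore its own selenium container and its own download-folder volume.
set -e

# Unique project name built from a timestamp and this script's PID.
PROJECT="enma_$(date +%s)_$$"

# COMPOSE_PROJECT_NAME has the same effect as passing -p.
COMPOSE_PROJECT_NAME="$PROJECT" docker-compose run -d enma daio "$@"

echo "Started task under project $PROJECT"

Then ./run_daio.sh arg0 arg1 arg2 starts an isolated run, and each invocation leaves behind its own <project>_download-folder volume.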
FYI, you can view which volumes have been created by running docker volume ls.
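Once a run has finished and you have retrieved its files, you can clean up that run's containers and volume by reusing its project name. A sketch, assuming the run was started with -p name1 and you want the CSVs copied to ./results_name1 (both names are just examples):

# Copy the downloaded files out of the run's volume via a throwaway container...
docker run --rm \
    -v name1_download-folder:/data \
    -v "$PWD/results_name1":/out \
    busybox cp -r /data/. /out/

# ...then remove that project's containers, network and volumes.
docker-compose -p name1 down -v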
Hope this helps.