I’ve been asked to configure an Ubuntu 18.04 server with Docker for multiple users.
Purpose: We have multiple testers who write test cases, but our laptops aren’t fast enough to build the project and run test cases in a Docker environment. We already have a Jenkins server, but we need to build/test our code BEFORE pushing to Git.
I’ve been given a high-end Ubuntu 18.04 server. I have to configure the server so that all our testers can run/debug our test cases in isolated environments.
When testers push their changes to the remote server, the project should build and run in an isolated environment. Multiple users can work on the same project, but one tester’s builds must NOT affect another’s.
I already installed Docker and tried changing only docker-compose.yml, adding different networks (using multiple accounts, of course), but it was very painful.
I need multiple Selenoid servers (for different users) and different Allure reports with Docker. I need the ability to build and run tests using our docker-compose files, and the ability to run the actual project on different ports so we can go through the system while writing test cases.
Is it possible to configure such an environment without changing the project’s docker-compose.yml? What’s the approach I should take?
Multiple users on the same host can use docker.
Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation.
It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
For docker run: simply add the --user <user> option to start the container as another user. For docker attach or docker exec: since these commands attach to or execute inside an existing container, they use that container's configured user directly.
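To illustrate that behavior, here is a minimal sketch; the image, container name, and uid/gid are arbitrary choices for the demo:
# start a throwaway container as a non-root user (uid/gid 1000 is arbitrary)
docker run -d --name user-demo --user 1000:1000 alpine:3 sleep 300
# exec inherits the user the container was started with
docker exec user-demo id                  # -> uid=1000 gid=1000
# ...unless you override it explicitly
docker exec --user root user-demo id      # -> uid=0(root)
# clean up
docker rm -f user-demo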
You can use Docker in Docker (the docker:dind image) to run multiple instances of the Docker daemon on the same host, and have each tester use a different DOCKER_HOST to run their Compose stack. Each app instance will be deployed on a separate Docker daemon and isolated without requiring any change to docker-compose.yml.
Docker in Docker can be used to run a Docker daemon from another Docker daemon (the Docker daemon is the process actually managing your containers when you use docker). See Docker architecture and the original DinD blog post for details.
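As a quick smoke test of the idea, here is a hedged sketch with a throwaway name and port, using an unsecured daemon (dev use only):
# start a disposable dind daemon, unsecured, on an arbitrary port
docker run -d --privileged --name dind-smoke \
  -e DOCKER_TLS_CERTDIR="" -p 23750:2375 docker:dind
# give the inner daemon a few seconds to start, then talk to it
sleep 5
docker -H tcp://localhost:23750 run --rm hello-world
# clean up
docker rm -f dind-smoke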
Let's consider 2 testers with this docker-compose.yml:
version: "3"
services:
  app:
    image: my/app:latest
    ports:
      - "8080:80"
# Run docker:dind and map daemon port 2375 to port 23751 on localhost.
# Also expose daemon port 8080 on 8081 (the port Tester 1 will use for the app).
# --privileged is required to run dind (dind-rootless exists but is experimental).
# DOCKER_TLS_CERTDIR="" deploys an unsecured daemon: it's easier to use,
# but should only be used for testing/dev purposes.
docker run -d \
  -p 23751:2375 \
  -p 8081:8080 \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Second daemon for Tester 2, using port 23752
docker run -d \
  -p 23752:2375 \
  -p 8082:8080 \
  --privileged \
  --name dockerd-tester2 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
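A quick sanity check, assuming the names and ports above: each daemon should answer on its own port.
# both dind containers should be up on the host daemon
docker ps --filter "name=dockerd-tester"
# query each inner daemon directly by overriding the host for one command
docker -H tcp://localhost:23751 version
docker -H tcp://localhost:23752 version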
Each tester can then target their own daemon by setting the DOCKER_HOST env var:
# Tester 1 shell
# use the dockerd-tester1 daemon on port 23751
export DOCKER_HOST=tcp://localhost:23751
# run our stack
docker-compose up -d
Same for Tester 2, using the dockerd-tester2 daemon's port:
# Tester 2 shell
export DOCKER_HOST=tcp://localhost:23752
docker-compose up -d
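This is exactly what gives you the isolation you asked for: each daemon only sees its own tester's containers. A quick check, assuming the setup above:
# Tester 1's daemon lists only Tester 1's stack
docker -H tcp://localhost:23751 ps
# Tester 2's daemon lists only Tester 2's stack
docker -H tcp://localhost:23752 ps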
Regarding "Need the ability to build and run tests using our docker-compose files and need the ability to run the actual project on different ports":
Each tester's exposed ports will be published on their Docker daemon's host and reachable via http://$DOCKER_HOST:$APP_PORT instead of localhost:$APP_PORT (that's why we also exposed the app port on each daemon).
Considering our docker-compose.yml, testers will be able to access the application like this:
# Tester 1
# host port 8081 maps to port 8080 on Tester 1's Docker daemon,
# which itself maps to port 80 in the app container
# in short: 8081 -> 8080 -> 80
curl localhost:8081
# Tester 2
# 8082 -> 8080 -> 80
curl localhost:8082
Our deployment then looks like this: one dind daemon per tester on the host, each running its own isolated copy of the app stack.
Similar to the first example, you can also interact with the deployed app by using the Docker daemon's IP directly:
# Run daemon without exposing ports
docker run -d \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Retrieve the daemon's IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' dockerd-tester1
# output like 172.17.0.2
# use it! (the unsecured daemon listens on port 2375)
export DOCKER_HOST=tcp://172.17.0.2:2375
docker-compose up -d
# our app's port is exposed on the daemon's IP
curl 172.17.0.2:8080
Here we contacted our daemon directly via its IP instead of publishing its port on localhost.
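If testers do this often, a small shell helper can wrap the lookup; the function name use-daemon is hypothetical, just for illustration:
# point the current shell at a dind daemon, given its container name;
# pin the inspect to the host daemon's socket, since DOCKER_HOST may
# already be set to another dind daemon
use-daemon() {
  local ip
  ip=$(docker -H unix:///var/run/docker.sock inspect \
    --format '{{ .NetworkSettings.IPAddress }}' "$1") || return 1
  export DOCKER_HOST="tcp://${ip}:2375"
}
# usage:
use-daemon dockerd-tester1
docker-compose up -d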
You can even define your Docker daemons with static IPs in a docker-compose.yml of their own, such as:
version: "3"
services:
dockerd-tester1:
image: docker:dind
privileged: true
environment:
DOCKER_TLS_CERTDIR: ""
networks:
dind-net:
# static IP to set as DOCKER_HOST
ipv4_address: 10.5.0.6
# same for dockerd-tester2
# ...
networks:
dind-net:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
And then:
export DOCKER_HOST=tcp://10.5.0.6:2375
# ...
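Since each tester already has their own Unix account on the server, you could also set the right DOCKER_HOST automatically at login. A sketch, assuming accounts named tester1/tester2 and a file like /etc/profile.d/docker-host.sh (both hypothetical):
# /etc/profile.d/docker-host.sh (hypothetical path)
# give each account its own dind daemon by default
case "$USER" in
  tester1) export DOCKER_HOST=tcp://localhost:23751 ;;
  tester2) export DOCKER_HOST=tcp://localhost:23752 ;;
esac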
Notes:
- You can use dind-rootless instead of dind to avoid the --privileged flag.
- Avoid DOCKER_TLS_CERTDIR: "" for security reasons; see the TLS instructions on the docker image for detailed usage of TLS.