 

Implement Docker isolation for multiple users

I’ve been asked to configure an Ubuntu 18.04 server with Docker for multiple users.

Purpose: we have multiple testers who write test cases, but our laptops aren’t fast enough to build the project and run the test cases in a Docker environment. We already have a Jenkins server, but we need to build and test our code BEFORE pushing to Git.

I’ve been given a high-end Ubuntu 18.04 server, and I have to configure it so that all our testers can run and debug our test cases in isolated environments.

When testers push their changes to the remote server, the project should build and run in an isolated environment. Multiple users can work on the same project, but one tester’s builds must NOT affect another’s.

I already installed Docker and tried changing only docker-compose.yml, adding different networks (using multiple accounts, of course), but it was very painful.

I need multiple Selenoid servers (one per user), separate Allure reports with Docker, the ability to build and run tests using our docker-compose files, and the ability to run the actual project on different ports so we can click through the system while writing test cases.

Is it possible to configure such an environment without changing the project’s docker-compose.yml? What approach should I take?

asked Jul 07 '20 by AMendis

People also ask

Can multiple users connect to the same docker container?

Yes. Multiple users on the same host can share the Docker daemon (for example, by being added to the docker group), and several of them can exec into or attach to the same running container at the same time.
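For example, a common way to let several users talk to the same daemon on one host (alice is just a placeholder username for this sketch):

# Add a user to the docker group so they can use the local daemon without sudo
# (note: membership in the docker group is effectively root-equivalent)
sudo usermod -aG docker alice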

How does docker provide isolation?

Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation.
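As a quick illustration of namespace isolation (a minimal sketch using the public alpine image):

# The container's PID namespace only contains its own processes;
# the listing typically shows just the ps command itself, running as PID 1
docker run --rm alpine ps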

Can a docker container have multiple applications?

It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
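As a minimal sketch of that pattern (the names app-net, app-data, web and worker, and the alpine/nginx:alpine images, are picked purely for illustration):

# Create a user-defined network and a named volume
docker network create app-net
docker volume create app-data

# A "worker" container writes a file into the shared volume
docker run -d --name worker --network app-net -v app-data:/data \
  alpine sh -c 'echo hello > /data/index.html && sleep 3600'

# A "web" container on the same network serves the shared volume
docker run -d --name web --network app-net -v app-data:/usr/share/nginx/html:ro \
  nginx:alpine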

How do I run a docker container as a different user?

For docker run: simply add the option --user <user> to start the container as another user. For docker attach or docker exec: since these commands attach to or execute inside an already running container, they use the user that container is already running as (though docker exec also accepts --user to override it).
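A minimal illustration of the --user option (using the public alpine image; the UID/GID 1000 is arbitrary):

# Start the container as UID/GID 1000 instead of the image's default user
docker run --rm --user 1000:1000 alpine id
# prints something like: uid=1000 gid=1000 groups=1000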




1 Answer

You can use Docker in Docker (the docker:dind image) to run multiple instances of the Docker daemon on the same host, and have each tester use a different DOCKER_HOST to run their Compose stack. Each app instance will be deployed on a separate Docker daemon and isolated from the others, without requiring any change to docker-compose.yml.

Docker in Docker can be used to run a Docker daemon from another Docker daemon (the Docker daemon is the process that actually manages your containers when you use docker). See Docker architecture and the original DinD blog post for details.


Example: run 2 Docker daemons exposing the app port

Let's consider 2 testers with this docker-compose.yml:

version: "3"
services:
  app:
    image: my/app:latest
    ports:
      - 8080:80

  1. Run 2 instances of the Docker daemon, exposing the daemon port and any port that will be exposed by Docker Compose (see below why):

# Run docker:dind and map the daemon port 2375 to port 23751 on localhost
# Expose the daemon's port 8080 on 8081 (the port that will be used by Tester 1)
# --privileged is required to run dind (dind-rootless exists but is experimental)
# DOCKER_TLS_CERTDIR="" deploys an insecure daemon without TLS;
# it's easier to use but should only be used for testing/dev purposes
docker run -d \
  -p 23751:2375 \
  -p 8081:8080 \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

# Second daemon using port 23752
docker run -d \
  -p 23752:2375 \
  -p 8082:8080 \
  --privileged \
  --name dockerd-tester2 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

  2. Each tester can run their own stack on their own Docker daemon by setting the DOCKER_HOST environment variable:

# Tester 1 shell
# use dockerd-tester1 daemon on port 23751
export DOCKER_HOST=tcp://localhost:23751

# run our stack
docker-compose up -d

The same for Tester 2, using the dockerd-tester2 port:

# Tester 2 shell
export DOCKER_HOST=tcp://localhost:23752
docker-compose up -d

  3. Interacting with Tester 1's and Tester 2's stacks

Need the ability to build and run tests using our docker-compose files and need the ability to run the actual project on different ports

The ports exposed by each tester's Compose stack are published on their own Docker daemon (the dind container), not directly on the host's localhost:$APP_PORT; that's why we also mapped the app port of each daemon onto the host (8081 and 8082).

Given our docker-compose.yml, testers will be able to access the application like this:

# Tester 1
# host port 8081 maps to port 8080 of the dind daemon running our app container,
# which itself maps to port 80 inside the container
# in short: 8081 -> 8080 -> 80
curl localhost:8081

# Tester 2
# 8082 -> 8080 -> 80
curl localhost:8082
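
A quick way to double-check which daemon a given shell is pointed at (assuming the two dind containers above are running):

# Containers listed here belong to Tester 1's daemon only
export DOCKER_HOST=tcp://localhost:23751
docker ps

# Or, without touching the environment, target a daemon explicitly with -H
docker -H tcp://localhost:23752 ps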

Our deployment will look like this: the host runs the two dind containers (dockerd-tester1 and dockerd-tester2), and each tester's Compose stack runs inside its own daemon.


Alternative without exposing ports, using Docker daemon IP directly

Similar to the first example, you can also interact with the deployed app by using Docker daemon IP directly:

# Run daemon without exposing ports
docker run -d \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

# Retrieve daemon IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' dockerd-tester1
# output like 172.17.0.2

# use it!
export DOCKER_HOST=tcp://172.17.0.2:2375
docker-compose up -d

# our app port is exposed on the daemon's IP
curl 172.17.0.2:8080

Here we contact the daemon directly via its IP instead of exposing its port on localhost.


You can even define your Docker daemons with static IPs in a docker-compose.yml such as:

version: "3"

services:
  dockerd-tester1:
    image: docker:dind
    privileged: true
    environment:
      DOCKER_TLS_CERTDIR: ""
    networks:
      dind-net:
        # static IP to set as DOCKER_HOST
        ipv4_address: 10.5.0.6

  # same for dockerd-tester2
  # ...

networks:
  dind-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16

And then

export DOCKER_HOST=tcp://10.5.0.6:2375
# ...
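
For completeness, one possible workflow with this setup (the filename dind.docker-compose.yml is just an assumption for this sketch; the project's own docker-compose.yml stays untouched):

# Start the dind daemons defined in the file above
docker-compose -f dind.docker-compose.yml up -d

# Tester 1 targets their daemon through its static IP, then deploys the project
export DOCKER_HOST=tcp://10.5.0.6:2375
docker-compose up -d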

Notes:

  • This may have some performance impact depending on the machine on which the daemons are deployed
  • You can use dind-rootless instead of dind to avoid using the --privileged flag
  • It's better to avoid DOCKER_TLS_CERTDIR: "" for security reasons; see the TLS instructions on the docker image page for detailed usage of TLS
answered Sep 29 '22 by Pierre B.