I have a docker-compose file which builds two containers, a Node app and an nginx server. Now I would like to automate the build and run process on the server with the help of GitLab Runners. I am pretty new to CI-related stuff, so please excuse my approach:
I would want to create multiple repositories on gitlab.com and have a Dockerfile for each one of them. Do I now have to associate a gitlab-runner instance with each of these projects in order to build the image, push it to a Docker registry and let the server pull it from there? And then I would have to somehow push the docker-compose file to the server and compose everything from there.
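For reference, a stripped-down sketch of the kind of compose file I mean (service names, build contexts and ports are just placeholders):

version: "3"
services:
  app:
    build: ./app      # Node application, Dockerfile assumed in ./app
    expose:
      - "3000"
  nginx:
    build: ./nginx    # nginx reverse proxy, Dockerfile assumed in ./nginx
    ports:
      - "80:80"
    depends_on:
      - app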
So my questions are: can I do all of this on gitlab.com, how many runners do I need and should they be shared or specific ones, and can I run docker-compose from the CI configuration at all?
First of all, you can obviously use GitLab CI/CD features on https://gitlab.com as well as on self-hosted GitLab instances. It doesn't change anything except the host against which you register your runner.
You can add as many runners as you want (I have 5 or 6 runners per project without any problem); you just need to register each of them for your project. See Registering Runners for how to do that.
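For example, a non-interactive registration could look roughly like this (URL, token and description are placeholders; take the real registration token from your project's Settings > CI/CD > Runners page):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --description "my-project-runner"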
As for shared runners versus specific runners, I think you should stick to shared runners if you just want to try out GitLab CI/CD.
Shared Runners on GitLab.com run in autoscale mode and are powered by DigitalOcean. Autoscaling means reduced wait times to spin up builds, and isolated VMs for each project, thus maximizing security.
They're free to use for public open source projects and limited to 2000 CI minutes per month per group for private projects. Read about all GitLab.com plans.
You can install your own runners on literally any machine though, for example your laptop. You can deploy one with Docker for a quick start.
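A minimal sketch of that Docker-based setup, following the commands from the GitLab Runner documentation (the host paths are just the conventional ones, adjust them to your machine):

# run the GitLab Runner itself in a container; its config.toml is persisted
# under /srv/gitlab-runner/config on the host
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# then register it from inside the container
docker exec -it gitlab-runner gitlab-runner register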
Finally, yes, you can use docker-compose in a .gitlab-ci.yml file if you use the ssh executor and have docker-compose installed on your server. But I recommend using the docker executor with the docker:dind (Docker in Docker) image.
What is Docker in Docker?
Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as development of Docker itself.
Here is an example usage, without docker-compose though:
image: docker:latest

services:
  - name: docker:dind
    command: ["--experimental"]

before_script:
  - apk add --no-cache py-pip      # install pip, needed for docker-compose
  - pip install docker-compose     # install docker-compose
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin   # log in to your registry

build-master:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest
  only:
    - master

build-dev:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG"
  except:
    - master
As you can see, I build the Docker image, tag it, then push it to my Docker registry, but you could push to any registry. And of course you could call docker-compose at any point inside a script declaration, for example in a deployment job as sketched below.
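This is only a sketch on my side (the job name, stage and compose file location are assumptions): it expects a docker-compose.yml at the repository root and the same before_script as above, which installs docker-compose.

deploy:
  stage: deploy
  script:
    # rebuild the images referenced in docker-compose.yml and (re)start the stack
    - docker-compose build
    - docker-compose up -d
  only:
    - master

Where those containers actually end up depends on your executor and Docker setup: with the socket-binding configuration shown below they run on the runner's host, so for a real deployment you would run such a job on a runner installed on the target server.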
My Git repository looks like:
/my_repo
|---- .gitignore
|---- .gitlab-ci.yml
|---- Dockerfile
|---- README.md
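The Dockerfile itself depends on your application; for a Node app it could be as simple as this sketch (base image, exposed port and entry file are assumptions, not something from my actual project):

FROM node:8-alpine
WORKDIR /usr/src/app
# install dependencies first so this layer stays cached between builds
COPY package.json package-lock.json ./
RUN npm install --production
# then copy the rest of the sources
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]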
And the config.toml of my runner looks like:
[[runners]]
  name = "4Gb digital ocean vps"
  url = "https://gitlab.com"
  token = "efnrong44d77a5d40f74fc2ba84d8"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:dind"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
You can take a look at https://docs.gitlab.com/runner/configuration/advanced-configuration.html for more information about Runner configuration.
Note: all the variables used here are secret variables. See https://docs.gitlab.com/ee/ci/variables/ for an explanation.
I hope this answers your questions.