 

Deploy Docker Container with Compose & Github Actions

I am using GitHub Actions to trigger the build of my Dockerfile and push the container image to the GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean Droplet and execute a script that pulls and installs the new image from GHCR. This workflow was fine for me while I was only building a single container in the project. Now I am using Docker Compose because I need NGINX besides my API. I would like to keep the containers on a single Droplet, as the project is not demanding in resources at the moment.

What is the right way to automate deployment with GitHub Actions and Docker Compose to DigitalOcean on a single VM?

My currently known options are:

  • Skip building the containers on GHCR, fetch the repository on the remote via SSH, and build from source there by running a production compose file
  • Build each container on GHCR, then copy the production compose file to the remote to pull & install the images from GHCR (see the sketch below)

If you know more options, that may be cleaner or more efficient please let me know!
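To make option 2 concrete, here is a minimal sketch of how a production compose file could look if the images were prebuilt on GHCR instead of built on the Droplet (the image names are placeholders, not my real repository paths):

version: "3"

services:
  api:
    # placeholder image name; assumes a GHCR build job pushed this tag
    image: ghcr.io/OWNER/REPO/api:latest
  nginx:
    # placeholder image name
    image: ghcr.io/OWNER/REPO/nginx:latest
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - api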

Unfortunately, the only reference I have found is a question about docker-compose with GitHub Actions for CI.

GitHub Action for a Single Container

name: Github Container Registry to DigitalOcean Droplet

on:
  # Trigger the workflow via push on main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"

jobs:
  # Builds a Docker Image and pushes it to Github Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest

    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend

    steps:
      # Checkout the Repository
      - name: Checking out the repository
        uses: actions/checkout@v2

      # Setting up Docker Builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1

      # Create a GitHub access token with the "write:packages & read:packages" scopes for the GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in Beta make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}

      # Push to Github Container Registry
      - name: Pushing Image to Github Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  # Connect to the existing Droplet via SSH and (re)install and run the image
  # ! Ensure you have created the preconfigured Droplet with Docker
  # ! Ensure you have added an SSH key to the Droplet
  # !   - it is easier to add the SSH keys before creating the Droplet
  deploy_to_digital_ocean_droplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry

    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker containers (ignore errors if none are running)
            docker kill $(docker ps -q) || true

            # Free up space (-f skips the interactive confirmation prompt)
            docker system prune -af

            # Login to Github Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}

            # Pull the Docker Image 
            docker pull ghcr.io/${{ github.repository }}:latest

            # Run a new container from a new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
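
With option 2 from above, the SSH script would no longer run a single container but instead pull and recreate the whole stack with Compose. A rough, hypothetical version of that script (assuming a production compose file has already been copied to the Droplet, and with OWNER/TOKEN as placeholders) could be:

# Login to GitHub Container Registry
docker login ghcr.io -u OWNER -p TOKEN

# Pull the prebuilt images referenced in the production compose file
docker compose -f docker-compose.prod.yml pull

# Recreate only the containers whose images changed
docker compose -f docker-compose.prod.yml up -d

(On older Docker installs the command is docker-compose instead of docker compose.)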

Current Docker-Compose

version: "3"

services:
  api:
    build:
      context: ./backend/api
    networks:
      api-network:
        aliases:
          - api-net
  nginx:
    build:
      context: ./backend/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api

networks:
  api-network:

asked Apr 09 '21 by nixn



1 Answer

Thought I'd post this as an answer instead of a comment since it was cleaner.

Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101

name: Server Build & Push

on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env
      - name: Build image
        run: docker compose -f docker-compose.prod.yml build

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push

      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh

In the final step, the directory in my case just contains a .env file and my prod compose file, but these files could also be rsync'd/copied over as another step in this workflow before actually running things.
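For instance, that copy could be a hypothetical extra step before the deploy step, using appleboy/scp-action (not part of my actual workflow):

      - name: Copy compose and env files to the server
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          # copies the rendered .env (created earlier in this job) and the prod compose file
          source: "docker-compose.prod.yml,.env"
          target: /srv/www/game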

My init.sh simply contains:

docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth

The --with-registry-auth part is important since my docker-compose has image: entries that reference containers in DigitalOcean's Container Registry. On my server, I had already logged in to that registry once when I first set up the directory.
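For context, such a compose file typically pairs build: with image:, so docker compose build/push in the workflow above know where to publish, and stack deploy on the server knows what to pull; here is a hypothetical fragment (the registry and image names are invented):

services:
  game:
    build:
      context: ./server
    # `docker compose push` publishes this image and `docker stack deploy`
    # pulls it on the server (hence --with-registry-auth)
    image: registry.digitalocean.com/my-registry/game-server:latest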

With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't read .env), and, with the registry already authenticated, it pulls the relevant images and restarts things as needed!
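As a small illustration of that pre-processing step (with an invented variable name):

# docker-compose.yml might contain:
#   image: registry.digitalocean.com/my-registry/game-server:${IMAGE_TAG}
# and .env in the same directory:
#   IMAGE_TAG=latest

# `config` renders the file with ${IMAGE_TAG} substituted from .env,
# producing the fully resolved YAML that stack deploy consumes:
docker-compose -f docker-compose.yml config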

This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.

answered Sep 28 '22 by Azarro