
Updating a Docker container with new code

Tags:

python

docker

I've looked at some other answers but none seem to be quite what I'm looking for.

I've got a python bot I've written that I turned into a docker container that's launched via

docker run -dit --restart unless-stopped -v /home/dockeradmin/pythonApp/:/pythonApp --name python-bot-app python-bot

My question, though, is how to update my docker container when I change the code for my python project. Right now I usually just rebuild the image, stop/prune the container, and then start it again; however, this seems extremely wasteful.
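Roughly, the cycle I go through today looks like this (the exact prune/remove step varies a bit):

docker build -t python-bot .
docker stop python-bot-app && docker rm python-bot-app
docker run -dit --restart unless-stopped \
  -v /home/dockeradmin/pythonApp/:/pythonApp \
  --name python-bot-app python-bot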

Is there a simple or "right" way to do this?

AndyReifman asked Sep 04 '18

2 Answers

I've got a python bot I've written that I turned into a docker container that's launched via

docker run -dit --restart unless-stopped \
  -v /home/dockeradmin/pythonApp/:/pythonApp \
  --name python-bot-app python-bot

This is a very common way to run containers in a development environment like your laptop. The name on the container lets you easily find and manage it. The volume mount overlays your current code inside the container, on top of whatever was built into the image at that same location. If you restart the container, the app comes back up with the new code from that mount, which should mean testing a change in python only involves a:

docker container restart python-bot-app

My question, though, is how to update my docker container when I change the code for my python project.

When you get into deploying the application in production, the above is less than ideal. You need something that can be redeployed easily, a way to back out changes quickly if there's an error, and, most importantly, no risk of state drift. The standard workflow into production involves (a rough command-level sketch follows the list):

  1. Check in code changes to version control
  2. Have a build server detect those changes and create a new image with a unique tag
  3. Push that image to a registry server
  4. Deploy that image in a dev, CI, stage, and prod environment according to your organization policies
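Concretely, steps 2 and 3 could look something like this on a build server; the registry hostname, image name, and commit-based tag here are placeholders rather than anything specific to your setup:

# build an image uniquely tagged with the current commit
docker build -t registry.example.com/python-bot:$(git rev-parse --short HEAD) .

# push it to the registry so every environment deploys the same artifact
docker push registry.example.com/python-bot:$(git rev-parse --short HEAD)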

The important parts are that you do not upgrade containers in place, all of the code is inside the image rather than being mounted with a volume (you still have volumes for data), and you don't give containers anything unique that would prevent scaling, such as a container name.

Right now I usually will just rebuild the image, stop/prune the container, and then start it again, however this seems to be extremely wasteful.

On a single node implementation, you can start with docker-compose to replace the container, and it will handle the stop and restart steps for you. When you get into multi-node environments, you'll want Swarm Mode or Kubernetes to handle rolling updates of your application, providing HA and avoiding any outage during the update of your app.
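For the single-node case, a minimal docker-compose.yml might look like the sketch below; the service name and build context are assumptions, and the volume mount is dropped because the code is baked into the image:

services:
  python-bot:
    build: .
    image: python-bot
    restart: unless-stopped

Redeploying after a code change is then a single command, which rebuilds the image and replaces the container for you:

docker-compose up -d --build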

When working with containers, you minimize waste by layering your image efficiently, reusing the build cache, and shipping images with a registry server. Docker's filesystem layers build on top of each other to create an image, and if you only change a few files in the last layer, only those changes are sent when the updated image is deployed. Any change to an application involves restarting that application at a minimum, and a container is only a handful of additional kernel API calls that run the application in its own namespaces with some restrictions. The only addition with recreating a container vs restarting it is a bit of housekeeping to remove old images and possibly some stopped containers. But the advantage of knowing that your entire environment is reproducible, without any state drift, is worth the added effort.
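That housekeeping is typically a one-liner or two; for example (assuming you're fine removing all dangling images and stopped containers on this host):

docker container prune -f   # remove stopped containers
docker image prune -f       # remove dangling images left behind by rebuilds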

BMitch answered Oct 06 '22

Rebuilding your image when your code changes is the canonical approach, and is not wasteful at all if done right.

Your pythonApp code should be COPY'd into your image as the final step (rule of thumb: the most frequently changed step in the Dockerfile should go last). This means rebuilding will be very fast, as all the other steps will be cached. If you only change a few kB of source code, you only produce a single new layer of a few kB. Stopping and starting containers is also very lightweight.
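As a sketch of that layering (the base image, requirements.txt, and bot.py entry point are assumptions about your project, not known details):

FROM python:3.11-slim
WORKDIR /pythonApp

# dependencies change rarely, so install them first and let this layer cache
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# the frequently changing source code goes last, in its own small layer
COPY . .

CMD ["python", "bot.py"]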

There is nothing to worry about in following this approach.

Jack Ukleja answered Oct 06 '22