I've looked at some other answers but none seem to be quite what I'm looking for.
I've got a python bot I've written that I turned into a docker container that's launched via
docker run -dit --restart unless-stopped -v /home/dockeradmin/pythonApp/:/pythonApp --name python-bot-app python-bot
My question, though, is how to update my docker container when I change the code for my python project. Right now I usually just rebuild the image, stop/prune the container, and then start it again; however, this seems extremely wasteful.
Is there a simple or "right" way to do this?
I've got a python bot I've written that I turned into a docker container that's launched via
docker run -dit --restart unless-stopped \
-v /home/dockeradmin/pythonApp/:/pythonApp \
--name python-bot-app python-bot
This is a very common way to run containers in a development environment like your laptop. The name on the container lets you easily find and manage it. The volume mount includes the current code in your container on top of whatever was built into the image at that same location. If you restart the container, the app restarts with the new code from the volume mount, which should mean testing a python change only involves a:
docker container restart python-bot-app
My question, though, is how to update my docker container when I change the code for my python project.
When you get into deploying the application in production, the above is less than ideal. You need something that is easily redeployed, the ability to back out changes quickly if there's an error, and most importantly you need to avoid the risk of state drift. The standard workflow into production involves:
- building a new image that contains the updated code
- pushing that image to a registry (for anything beyond a single node)
- stopping and removing the old container
- starting a new container from the new image
The important parts are that you do not upgrade containers in place, all of the code is inside the image rather than being mounted with a volume (you still have volumes for data), and you don't give containers anything unique that would prevent scaling, like a container name.
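A minimal sketch of that redeploy cycle, reusing the python-bot image and python-bot-app container name from the question (the registry push is commented out since it only applies once you have more than one node):

```shell
# Rebuild the image; unchanged Dockerfile steps come from the build cache
docker build -t python-bot /home/dockeradmin/pythonApp/

# (multi-node only) push the image to a registry
# docker push my-registry.example/python-bot

# Replace the container rather than upgrading it in place
docker stop python-bot-app
docker rm python-bot-app
docker run -d --restart unless-stopped --name python-bot-app python-bot

# Housekeeping: remove the dangling images left behind by the rebuild
docker image prune -f
```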
Right now I usually will just rebuild the image, stop/prune the container, and then start it again, however this seems to be extremely wasteful.
On a single node, you can start with docker-compose to replace the container, and it will handle the stop and restart steps for you. When you get into multi-node environments, you'll want Swarm Mode or Kubernetes to handle rolling updates of your application, providing HA and avoiding any outage during the update of your app.
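As an illustration, a minimal docker-compose.yml for this bot could look like the following (the build path and image name are taken from the question; everything else is an assumption about your setup):

```yaml
version: "3.8"
services:
  python-bot:
    build: /home/dockeradmin/pythonApp
    image: python-bot
    restart: unless-stopped
```

After a code change, `docker-compose up -d --build` rebuilds the image and replaces the container in one step.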
When working with containers, you minimize waste by efficiently layering your image, reusing the build cache, and shipping images with a registry server. Docker's filesystem layers build on top of each other to create an image, and if you only change a few files in the last layer, only those changes are sent when the updated image is deployed. Any change to an application involves restarting that application at a minimum, and a container is little more than a few additional kernel API calls that run the application in its own namespaces with restrictions. The only addition with recreating a container versus restarting it is a bit of housekeeping to remove old images and possibly some stopped containers. But the advantage of knowing that your entire environment is reproducible without any state drift is worth the added effort.
Rebuilding your image when your code changes is the canonical approach, and is not wasteful at all if done right.
Your pythonApp code should be COPY'd into your image as the final step (rule of thumb: the most frequently changed step in the Dockerfile should go last). This means rebuilding will be very fast, as all other steps will be cached. If you only have a few kB of source code changes, it will only result in a single new layer of a few kB. Stopping and starting containers is also very lightweight.
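A sketch of a Dockerfile following that rule; the base image, requirements.txt, and bot.py entrypoint are assumptions for illustration, not details from the question:

```dockerfile
FROM python:3.12-slim
WORKDIR /pythonApp

# Dependencies change rarely, so install them first; this layer stays cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code changes often, so copy it last; only this layer is rebuilt
COPY . .

CMD ["python", "bot.py"]
```

With this ordering, a code-only change skips the pip install step entirely on rebuild.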
There is nothing to worry about in following this approach.