Situation: lots of heavy Docker containers that get hit periodically for a while, then stay unused for a longer period.
Wish: start the containers on demand (the way systemd starts things through socket activation) and stop them after they have idled for a given period. No visible downtime to the end user.
Options:
Any ideas appreciated!
This is what docker container create does: it is similar to docker run -d, except the container is never started. You can then use docker container start (or the shorthand docker start) to start the container at any point.
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. Restart policies ensure that linked containers are started in the correct order. Docker recommends that you use restart policies, and avoid using process managers to start containers.
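For illustration, a policy can be declared per service in Compose; a minimal fragment (the service name and image are placeholders):

```yaml
# docker-compose.yml fragment -- "web" and the image are placeholders
services:
  web:
    image: nginx:alpine
    restart: unless-stopped  # restart on crash or daemon restart, but not after an explicit docker stop
```

Note that restart policies only keep an already-started container running; they don't start one on the first incoming request, so they solve a different half of the problem than the question asks about.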
You could use Kubernetes' built-in Horizontal Pod Autoscaling (HPA) to scale up from 1 instance of each container to as many as are needed to handle the load, but there's no built-in functionality for 0-to-1 scaling on receiving a request, and I'm not aware of any widely used solution.
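For the 1-to-N half, an HPA can be attached to a Deployment; a sketch, assuming a Deployment named web (hypothetical) and a metrics pipeline such as metrics-server already in place:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1        # HPA alone won't go to zero here; scale-to-zero needs something else
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```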
You can use systemd to manage your docker containers. See https://developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/
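As a rough sketch of that approach (all unit names, container names, ports, and timings here are hypothetical; see the linked post for a worked version): a .socket unit holds the port while the container is down, and the matching service starts the container and hands traffic over with systemd-socket-proxyd, which can exit after an idle period (the --exit-idle-time option needs systemd 246 or later) so systemd reclaims the port.

```ini
# myapp.socket -- systemd owns port 8080 while the container is stopped
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target

# myapp.service -- started by the socket unit on the first connection
[Service]
# start the (pre-created) container, which is assumed to publish 127.0.0.1:8081
ExecStartPre=/usr/bin/docker start myapp
# proxy the activated socket to the container; exit after 5 min idle so
# systemd takes the port back and the container can be stopped
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=5min 127.0.0.1:8081
ExecStopPost=/usr/bin/docker stop myapp
```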
Some time ago I talked to an ops guy at pantheon.io about how they do this sort of thing with Docker. I guess it would have been before Kubernetes even came out. Pantheon does Drupal hosting. The way they have things set up, every server they run for clients is containerised, but as you describe, the container goes away when it's not needed. The only resource reserved then, other than disk storage, is a port number on the host.
They have a fairly simple daemon which listens on the sockets of all inactive servers. When it receives a request, the daemon stops listening for more incoming connections on that socket, starts the required container, and proxies that one request to the new container. Subsequent connections go directly to the container until it has been idle for a period, at which point the listener daemon takes over the port again. That's about as much detail as I know of what they did, but you get the idea.
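A toy version of that listener daemon might look like the following Python sketch (this is illustrative, not Pantheon's actual code; start_backend is a stand-in for "docker start plus a readiness wait"): it accepts the first connection, releases the port, launches the backend, and proxies the bytes of that one request both ways.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src reaches EOF, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def proxy(client, backend_addr):
    """Proxy one client connection to the backend in both directions."""
    backend = socket.create_connection(backend_addr)
    t = threading.Thread(target=pipe, args=(backend, client), daemon=True)
    t.start()
    pipe(client, backend)
    t.join()
    backend.close()
    client.close()

def activate_once(listen_addr, backend_addr, start_backend):
    """Listen on listen_addr; on the first connection, call start_backend()
    (e.g. `docker start <name>` in a real setup) and proxy that request.
    A real daemon would then wait for the backend to go idle and take the
    port back; that bookkeeping is omitted here."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(1)
    client, _ = srv.accept()
    srv.close()              # give up the port; the backend owns it from now on
    start_backend()          # placeholder for starting the container
    proxy(client, backend_addr)
```

The important trick is the same one the answer describes: only the first request pays the container start-up cost, and the daemon holds no resources for an idle server except the listening socket itself.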
I imagine that something like the daemon that Pantheon implemented could be used to send commands to Kubernetes rather than straight to the Docker daemon. Maybe a systemd-based approach to dynamically starting containers could also communicate with Kubernetes as required. Either of these might allow you to fire up pods, not just containers.
Podman supports socket activation since version 3.4.0 (released Sep 2021).
See the Podman socket activation tutorial.
Running rootless Podman with socket-activated containers comes with some advantages:
- Native network speed: communication over the activated socket does not pass through slirp4netns, so it has the same performance characteristics as the normal network on the host.
- Improved security: the container can run with --network=none if it only needs to communicate over the activated socket.
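To sketch the second point (unit names and the image are hypothetical; the tutorial linked above has a full walkthrough), a rootless user unit pair where the container receives the listening socket from systemd and so needs no network of its own:

```ini
# ~/.config/systemd/user/echo.socket
[Socket]
ListenStream=127.0.0.1:3000

[Install]
WantedBy=sockets.target

# ~/.config/systemd/user/echo.service
[Service]
# Podman >= 3.4 passes the activated socket through to the container,
# so --network=none still lets it serve requests arriving on that socket.
ExecStart=/usr/bin/podman run --rm --network=none --name echo \
    ghcr.io/example/socket-activated-echo
```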
I wrote two blog posts about the security advantages of using socket-activated containers with Podman:
https://www.redhat.com/sysadmin/socket-activation-podman
https://www.redhat.com/sysadmin/podman-systemd-limit-access