How is the isolation provided by operating system containers different than that provided by the kernel between many processes?
Each process is already isolated from any other process running on the same kernel. How is this isolation different than the isolation provided by containers?
Containers isolate software from its environment and ensure that it works uniformly despite differences between, for instance, development and staging. Docker created the industry standard for containers, so containers built for Docker Engine can be portable anywhere.
Process isolation is a set of hardware and software technologies designed to protect each process from the other processes on the operating system. It does so, for example, by preventing process A from writing to process B's memory.
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allows you to run many containers simultaneously on a given host.
A commonly cited Docker principle is that there should be just one process running in a container; that is, a Docker container should have just one program running inside it. Docker is efficient at creating and starting containers, and it allocates PID (Process ID) 1 to the process running inside the container.
Each process is already isolated from any other process running on the same kernel.
Are they? How does kill -9 work, then? I can just reach out and zap any process I feel like, provided I have enough permissions.
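As a quick illustration (a minimal sketch; the sleep duration and the choice of victim are arbitrary), one process can kill a sibling process on the same host outright:

```shell
# Start a long-running process in the background and note its PID.
sleep 60 &
victim=$!

# Any process with sufficient permissions can send it SIGKILL directly.
kill -9 "$victim"

# Reap the child; its exit status reflects death by signal 9 (128 + 9 = 137).
wait "$victim"
status=$?
echo "exit status: $status"
```

Nothing in the default process model stops this: the two processes share one PID namespace, so the victim's PID is visible and targetable.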
Container technologies like Docker, rkt, and LXC utilize two Linux kernel features in particular to achieve "containerization".
The first is namespaces. From the opening blurb of the Wikipedia entry:
Namespaces are a feature of the Linux kernel that isolate and virtualize system resources of a collection of processes. Examples of resources that can be virtualized include process IDs, hostnames, user IDs, network access, interprocess communication, and filesystems. Namespaces are a fundamental aspect of containers on Linux.
So I can use namespaces, for example, to restrict what a process can see or whom a process can talk to, at the kernel level. I can configure interprocess communication and filesystem visibility in such a way that my kill -9 command cannot see the processes that live in a different namespace, and as such can't just kill them willy-nilly.
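One way to see this from userspace (a sketch; no special privileges needed) is that every process's namespace membership is exposed under /proc/&lt;pid&gt;/ns, and processes sharing a PID namespace share the same namespace inode:

```shell
# Each entry in /proc/<pid>/ns is a symlink naming the namespace inode the
# process belongs to, e.g. "pid:[4026531836]".
ns_self=$(readlink /proc/$$/ns/pid)

# A child spawned normally inherits the same PID namespace...
sleep 5 &
ns_child=$(readlink "/proc/$!/ns/pid")
kill "$!" 2>/dev/null

echo "shell: $ns_self"
echo "child: $ns_child"
# ...whereas a containerized process (e.g. one created with unshare --pid,
# or by Docker) would show a different pid:[...] inode here, and could not
# be seen or signalled by PID from this namespace.
```

The matching inodes are exactly why plain kill works between ordinary processes, and why it stops working across namespace boundaries.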
Second is control groups (cgroups), which allow for resource limits and isolation. Cgroups let us tell a process "you can only have 512MB of memory and 10% of the host CPU". If I have some ugly command capable of using 99% of the CPU, the other processes on the host are not isolated from it and may be left sharing the remaining 1% every now and then. With cgroups I can change that: I can tell my ugly command "you only get 25% of the CPU at any given time", and that's all it can have.
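As a sketch of what those numbers mean at the kernel level (cgroup v2 here; Docker's --cpus and --memory flags translate into these files), the "25% of the CPU" limit is expressed as a quota per scheduling period in the group's cpu.max file:

```shell
# Under cgroup v2, a CPU limit is written to the group's cpu.max file as
# "<quota> <period>" in microseconds: the group may consume <quota>us of
# CPU time in every <period>us window. Docker's default period is 100000us,
# so --cpus=0.25 becomes a quota of 25000us.
cpus=0.25
period=100000
quota=$(awk -v c="$cpus" -v p="$period" 'BEGIN { printf "%d", c * p }')
echo "cpu.max: $quota $period"

# Memory limits are simpler: --memory=512m writes the byte count to the
# group's memory.max file.
mem_bytes=$((512 * 1024 * 1024))
echo "memory.max: $mem_bytes"
```

Writing to these files normally requires root (or delegation), which is why tools like Docker manage them on your behalf.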
The thing to remember here is that these are Linux kernel features, not some bolted-on system manager tool or other piece of software. Docker, rkt, and LXC are platforms and tooling wrapped around these two core kernel features; they simply utilize what is already there at the most basic level and make it more usable.
If by "operating system containers" you mean something like Docker, my answer is about that.
With Docker you can limit memory and CPU usage for each container you set up on the same machine. Here's a link that explains how, along with some of the various possibilities:
https://docs.docker.com/engine/admin/resource_constraints/
For plain processes you can do something similar on Windows with Job Objects, but they need to be coded into your app:
https://msdn.microsoft.com/en-us/library/ms684161(VS.85).aspx
I hope I understood the question correctly.