No. A Docker image/container contains only the application layer of the OS and uses the kernel and CPU of the host machine. That's why a Docker container boots so fast: the kernel on your host machine is already running, so when you start a container it shares that running kernel instead of booting its own.
Docker never uses a different kernel: the kernel is always your host kernel. If your host kernel is "compatible enough" with the software in the container you want to run, it will work; otherwise it won't.
Containers share the same operating system kernel and isolate the application processes from the rest of the system.
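You can see this for yourself with the uname(2) system call, which asks the running kernel to identify itself. A minimal C sketch (how you build and copy it into an image is up to you; the program itself uses only standard calls):

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;

        /* uname(2) asks the running kernel to identify itself. */
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }

        /* Inside a container this prints the HOST's kernel release,
         * because the container shares the host kernel rather than
         * booting its own. */
        printf("%s %s (%s)\n", u.sysname, u.release, u.machine);
        return 0;
    }

Compiled and run on the host and then inside a container based on any Linux image, it prints the same kernel release both times, because there is only one kernel.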
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64). Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
If my program depends on some function of a kernel library, and that function in turn has a chain of dependencies, how does Docker stay small and portable without taking a snapshot of all the kernel libraries (and managing dependency issues at a function rather than library level)? In other words, how does it insulate itself from changes in kernel libraries from one version to the next, and does it do so at library or function granularity?
Also, what if my application has a software stack where, for example, one function is compatible with a future version of kernel library A, while a second function that uses kernel library A is no longer compatible? In other words (see the sketch after this list):
functions 1 and 2 both depend on, and work with, functions in kernel Lib A version 1.0
function 1 still works with Lib A version 1.1
function 2 breaks with Lib A version 1.1 (function 2 still needs Lib A version 1.0)
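To make that concrete in code: on Linux, shared libraries resolve at symbol (that is, function) granularity, so a program can probe for an individual function at run time with dlopen(3)/dlsym(3). A minimal sketch (libm.so.6 and cos are real names; a_removed_function is a hypothetical stand-in for the Lib A function that version 1.1 dropped):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* Load whichever version of the library this environment ships. */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* "function 1": a symbol present in every version we care about. */
        printf("cos: %s\n", dlsym(lib, "cos") ? "found" : "missing");

        /* "function 2": stands in for a symbol dropped in Lib A 1.1. */
        printf("a_removed_function: %s\n",
               dlsym(lib, "a_removed_function") ? "found" : "missing");

        dlclose(lib);
        return 0;
    }

(The short version of how Docker sidesteps this conflict, consistent with the answers above: each image bundles the exact user-space library versions its application needs, so the host's library versions never enter the picture; only the kernel's system-call interface is shared.)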
I don't know much about Docker, so this is a newbie question.