My understanding, based on the fact that Docker is based on LXC, is that Docker containers share various resources of the host operating system. My concern is with CPU cores. Here is a scenario:
a) If I run all of the Docker containers on that host, will they consume CPU/cores as needed, just as if they were normal applications installed on the host OS?
b) Will each Docker container run as its own process, with all of the processing inside it pinned to that parent process's CPU core?
c) How can I tell a Docker container to use a number of cores (4, for example)? I saw that there is a -C flag that can point to a core ID, but there appears to be no option to tell the container to pick N cores at random.
If no value is provided, Docker will use a default value. On Windows, a container defaults to using two CPUs. If hyperthreading is available, this is one core and two logical processors; if hyperthreading is not available, it is two cores and two logical processors.
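If you want to check what CPU limit a container actually received rather than relying on the default, docker inspect exposes it via HostConfig.NanoCpus, which is 0 when no explicit limit was set (my_container below is just a placeholder name):
docker inspect --format '{{.HostConfig.NanoCpus}}' my_container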
By default, Docker containers have access to the full RAM and CPU resources of the host. Leaving them to run with these default settings may lead to performance bottlenecks. If you don't limit Docker's memory and CPU usage, Docker can use all of the system's resources.
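As a minimal sketch (assuming Docker 1.13 or later, where the --cpus and --memory flags are available; my_image is a placeholder), you can cap both at docker run time:
docker run --memory=1g --cpus=2 my_image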
We can get the CPU usage of a Docker container with the docker stats command. The CPU % column gives the percentage of the host's CPU the container is using.
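For example (my_container is a placeholder name):
docker stats my_container
Adding --no-stream prints a single snapshot for all running containers instead of continuously updating:
docker stats --no-stream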
@MuhammadUmer yes, same as running multiple processes on one CPU.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly one; I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like
cpuset { cpuset.cpus="0-3"; }
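For what it's worth, modern Docker exposes the same cpuset control directly on the CLI, so editing the generated config by hand shouldn't be necessary. The line below pins the container to cores 0 through 3 (my_image is a placeholder); note that you still choose which cores, as there is no built-in option to pick N cores at random:
docker run --cpuset-cpus="0-3" my_image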