
Multiple docker containers of same image and memory usage

Tags:

docker

I have a pretty basic question about docker that I can't seem to get an answer to.

What is the difference between having 1 container running nginx and 500 virtual hosts and 500 containers each based off an nginx image (each with different configs)?

Seems like maybe the latter case (500 containers) would multiply the memory requirements of a single container by 500. But maybe Docker is smarter than that (it seems AUFS can share memory somehow)?

Basically I'm wondering how to set up a system for hosting many low-traffic WordPress instances. Is it OK to make a new container for each instance (nginx + PHP)?

asked Feb 01 '17 by Nick Lang


1 Answer

An application's memory footprint depends on several things:

  1. Kernel
  2. Kernel resources (file caches, network buffers, etc.)
  3. Application code (including loaded libraries)
  4. Static application data
  5. Operational application data (WordPress views, user records, etc.)

All Docker containers share the same kernel, so it is reused by all instances. The AUFS storage driver also lets containers share loaded application code, so the nginx binary and its libraries are loaded once for all containers.
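You can see the kernel sharing directly. This is a minimal sketch, assuming Docker is installed and the `nginx` image is available locally or pullable:

```shell
# A container reports the same kernel release as the host, because
# containers share the host kernel rather than booting their own.
host_kernel=$(uname -r)
container_kernel=$(docker run --rm nginx uname -r)
echo "host:      ${host_kernel}"
echo "container: ${container_kernel}"
# The two lines should match; a VM-based setup would print different releases.
```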

Application data, both static and operational, is never shared between containers, so that part of the footprint is multiplied by 500.

Kernel resources and operational application data are not shared in either scenario: if a user requests a page from blogA and a page from blogB, both pages are generated and sent regardless of the setup.

In your case, one nginx process with 500 virtual hosts will most likely have the smaller memory footprint. By how much is very hard to tell: it depends on how busy the blogs are, how much network buffering is needed, and whether you have a shared database and memcached server. The only sure way to tell is to set it up and observe.
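One way to observe is to sum the resident set size (RSS) of the relevant processes under each setup. A minimal sketch; the process name `nginx` is an assumption, substitute whatever your containers actually run:

```shell
# Sum RSS (in KiB) of every process named "nginx".
# Run once under the single-instance setup and once under the
# many-container setup, then compare the totals.
total_kib=$(ps -o rss= -C nginx 2>/dev/null | awk '{sum += $1} END {print sum + 0}')
echo "total nginx RSS: ${total_kib} KiB"
```

Note that RSS double-counts pages shared between processes; for a stricter comparison, look at PSS in `/proc/<pid>/smaps_rollup` on modern kernels.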

However, with containers you can spread across multiple boxes: when things get tight you can move a single container to a separate box without affecting the rest of your users, and you can run extra instances of a particularly busy blog spread over several boxes. Look into tools like Docker Swarm.

Another advantage of containers is that each nginx instance can have a very simple configuration, instead of one monster config with 500 virtual hosts.
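For illustration, a per-container config can collapse to a single default server block, since the container itself identifies the blog; the paths and the php-fpm address here are assumptions, adjust to your image layout:

```nginx
# Minimal per-container config (illustrative): no 500-entry vhost table,
# just one server block serving whatever blog this container holds.
server {
    listen 80 default_server;
    root /var/www/blog;          # each container mounts its own blog here
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # php-fpm running in the same container
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```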

answered Oct 14 '22 by Vlad