One of the Docker examples is for a container with Memcached configured. I'm wondering why one would want this versus a VM configured with Memcached? I'm guessing that it would make no sense to have more than one memcached docker container running under the same host, and that the only real advantage is the speed advantage of "spinning up" the memcached stack in a docker container vs. Memcached via a VM. Is this correct?
Also, how does one set the memory to be used by memcached in the docker container? How would this work if there were two or more docker containers with Memcached under one host? (I'm assuming again that two or more would not make sense).
> I'm wondering why one would want this versus a VM configured with Memcached?
Security: If someone breaks into memcached and trojans the filesystem, it doesn't matter -- the filesystem gets thrown away when you start a new memcached container.
Isolation: You can hard-limit each container to prevent it from using too much RAM.
Standardization: Currently, each app/database/cache/load balancer must record what to install, what to configure, and what to run. There is no standard (and no shortage of tools such as Puppet, Chef, etc.). But these tools are very complex, not really OS-independent (despite their claims), and they carry the same complexity from development to deployment.
With docker, everything is just a container started with `docker run BLAH`. If your app has 5 layers, you just have 5 containers to run, with a tiny bit of orchestration on top. Developers never need to "look into the container" unless they are developing at that layer.
Resources: You can spin up thousands of docker containers on an ordinary PC, but you would have trouble spinning up hundreds of VMs. The limit is both CPU and RAM. Docker containers are just processes in an "enhanced" chroot. A VM carries dozens of background processes (cron, log rotation, syslog, etc.), but a docker container carries no extra processes at all.
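To make the "just a process" point concrete, here is a minimal sketch (it assumes the official `memcached` image from Docker Hub; the container name is made up):

```
# Start a throwaway memcached container in the background.
docker run -d --name cache1 memcached

# List the processes inside it: a single memcached daemon,
# with none of the cron/syslog/etc. baggage a full VM carries.
docker top cache1

# Throw it away; any filesystem changes go with it.
docker rm -f cache1
```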
> I'm guessing that it would make no sense to have more than one memcached docker container running under the same host
It depends. There are cases where you want to split your RAM into parcels instead of managing one global pool (e.g., imagine you want to devote 20% of your cache to caching users, 40% to caching files, etc.).
Also, most sharding schemes are hard to expand, so people often start with many "virtual" shards, then move them onto physical boxes as needed. For example, your app might know about 20 memcached instances (chosen based on object ID). At first, all 20 run on one physical server. Later you split them onto 2 servers (10/10), then onto 5 servers (4/4/4/4/4), and finally onto 20 physical servers (1 memcached each). Thus, you can scale your app 20x just by moving containers around, without changing the app itself.
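A rough sketch of that starting point (the port scheme and the official `memcached` image are assumptions, not part of the original question):

```
# Start 20 "virtual" memcached shards on one host, one port each.
# The app hashes object IDs across ports 11211-11230; later, any
# shard can move to another host without the app changing its view.
for i in $(seq 0 19); do
  docker run -d --name shard-$i -p $((11211 + i)):11211 memcached
done
```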
> the only real advantage is the speed advantage of "spinning up" the memcached stack in a docker container vs. Memcached via a VM. Is this correct?
No, that's just a minor side benefit; see above.
> Also, how does one set the memory to be used by memcached in the docker container?
In the `docker run` command, just use `-m`.
> How would this work if there were two or more docker containers with Memcached under one host? (I'm assuming again that two or more would not make sense).
Same way. If you didn't set a memory limit, it would be exactly like running 2 memcached processes on the host. (If one fills up the memory, both will get out-of-memory errors.)
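As a sketch of two independently capped instances (the names and sizes are made up; note that `-m` on `docker run` caps the container, while memcached's own `-m` flag sets its cache size, which should sit a bit below the container cap to leave headroom):

```
# Two memcached containers on one host, each with its own cap.
docker run -d --name cache-users -m 512m memcached memcached -m 448
docker run -d --name cache-files -m 1g memcached memcached -m 900
```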
There seem to be two questions here...
1 - The benefit is as you describe. You can sandbox each memcached instance (and its configuration) in a separate container, so you can run several on a given host. In addition, moving a memcached instance to another host is fairly trivial; in the worst case it just requires an update to the application configuration.
2 - `docker run -m <inbytes> <memcached-image>` would limit the amount of memory a memcached container could consume. You can run as many of these as you want under a single host.
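If you want to verify the limits actually took effect, recent Docker versions can report live usage per container (the container names below are hypothetical):

```
# Live memory usage vs. the configured limit, one row per container.
docker stats memcached-1 memcached-2
```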