
Docker: containers vs local installs

Tags:

docker

After playing around with Docker for the first time over the weekend, and seeing tiny images for everything from irssi and mutt to full browsers, I was wondering: are local installs of packages making way for dozens of containers instead?

I can see the benefit in keeping the base system very clean and having all these self-contained containers that could easily be relocated to different desktops, even Windows, each running a tiny distro like Alpine with a single app such as irssi inside.

Is this the direction things are heading, or am I missing the boat here?

asked Jan 11 '16 by Kosie



2 Answers

Jess Frazelle would not disagree with you.
In her blog post "Docker Containers on the Desktop", she is containerizing everything. Everything.

Like Chrome itself:

# --net host:                may as well YOLO
# --cpuset-cpus 0:           control the cpu
# --memory 512mb:            max memory it can use
# -v /tmp/.X11-unix:...:     mount the X11 socket
# -e DISPLAY=unix$DISPLAY:   pass the display
# -v $HOME/Downloads:...:    optional, but nice
# -v $HOME/.config/...:      if you want to save state
# --device /dev/snd:         so we have sound
$ docker run -it \
    --net host \
    --cpuset-cpus 0 \
    --memory 512mb \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v $HOME/Downloads:/root/Downloads \
    -v $HOME/.config/google-chrome/:/data \
    --device /dev/snd \
    --name chrome \
    jess/chrome
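
Depending on your X server's access control, the container may be refused the X11 socket. A common companion step (not part of her post, so treat it as an assumption about your setup) is to allow local clients first:

$ xhost +local:   # permit local, non-network clients to connect to the X server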

But Docker containers are not limited to that usage. They are primarily a way to represent a stable, well-defined, and reproducible execution environment, one service per container, that you can use all the way from a development workstation up to a production server.
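
That is the part that matters for the "containers vs local installs" question: the same pinned image behaves identically wherever a Docker daemon runs. A minimal sketch, using the public redis image (the tag is only an example):

$ docker pull redis:3.0
$ docker run -d --name redis -p 6379:6379 redis:3.0
$ docker exec redis redis-cli ping   # PONG, whether on a laptop or a server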

answered by VonC


Your sentiment is correct. I have been a long-time Vagrant user, and the simplicity it provided in creating portable, self-inflating systems enabled me to become a wandering developer: I only need to securely transfer my private keys to any machine that is handed to me, and a few moments later I'm back where I left off with work. You can't wear two pairs of shoes at the same time, so if you have one machine and quickly need to adopt a new secondary one, this helps (I purchase great hardware for my loved ones and usurp it in case of catastrophe).

My ideal was always to have no tools at all on my host, except for a browser client and a text editor, so as not to suffer from any virtualization overhead. Unfortunately, with Vagrant, this required compromising on certain host features, such as being able to integrate with compilers, test runners, linters, etc.

With Docker, this isn't an issue. As VonC shows, you can wrap his snippet of code in a script, pass commands to it, and have it behave just as the Chrome binary would if it were installed locally.
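
A minimal sketch of such a wrapper (the path is hypothetical, and the flags are trimmed from his fuller example above):

#!/bin/sh
# ~/bin/chrome: run the containerized Chrome as if it were a local binary
exec docker run --rm \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/snd \
    jess/chrome "$@"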

For instance, I could write a script that takes the working directory, mounts it inside a Node.js container, and runs eslint on the sources. My editor would happily pass options to eslint and read from STDOUT, completely oblivious to the fact that eslint doesn't exist on my host at all.

#!/bin/sh
# eslint, as seen by the editor: mount the working directory into the
# container at the same path and run the image's eslint there
docker run --rm -v "$(pwd)":"$(pwd)" -w "$(pwd)" $OTHER_DOCKER_ARGS "$ESLINT_IMAGE" "$@"
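
Hypothetically wired up (the path and PATH placement are assumptions), the editor never knows the difference:

$ chmod +x ~/bin/eslint               # ~/bin is on $PATH, shadowing any local install
$ eslint --format unix src/app.js     # runs inside the container, prints to STDOUT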

This may have been possible with hypervisors in the past, with some esoteric SSH incantations; who knows? I never entertained the idea, but with Docker, even those who have never worked in such a manner find the approach unsurprising (and that's a good thing).

answered by Filip Dupanović