 

`docker start` in parallel?

I'm spinning up 1000 containers on a single Docker network, all from the same Docker image.

Currently it takes a long time to deploy. I've separated the process into `docker create` and `docker start`, as opposed to the monolithic `docker run`.

Is there any way to spin the containers up in parallel? I'm happy to work in a programming interface (Go, C, whatever) or use CLI commands; a rough sketch of what I have in mind is below.

Related: Can Docker Engine start containers in parallel [asked and answered 3 years ago]
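For reference, this is roughly the concurrent start loop I have in mind, using the Docker Go SDK (github.com/docker/docker/client). The container IDs are placeholders for the ones returned by `docker create`, and the option type names have shifted a bit between SDK versions:

```go
package main

import (
	"context"
	"log"
	"sync"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local dockerd using the environment (DOCKER_HOST, etc.).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// IDs previously returned by `docker create`; placeholders here.
	ids := []string{"container1", "container2" /* ... */}

	// Issue the start calls concurrently; in practice I'd bound the
	// concurrency with a worker pool rather than one goroutine per container.
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			if err := cli.ContainerStart(ctx, id, types.ContainerStartOptions{}); err != nil {
				log.Printf("start %s: %v", id, err)
			}
		}(id)
	}
	wg.Wait()
}
```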

Asked Sep 24 '18 by A T


People also ask

Can Docker build in parallel?

Looking at total seconds, the average build without parallelism took 230 seconds, while the average build using parallelism took 128 seconds. That's a 44% drop in overall build time, which is pretty significant, especially when you're running builds frequently while trying to troubleshoot a problem.

Does Docker start multiple containers?

The docker-compose.yml file allows you to configure and document all of your application's service dependencies (other services, caches, databases, queues, etc.). Using the docker-compose CLI, you can create and start one or more containers for each dependency with a single command (`docker-compose up`).


1 Answer

Deploying the containers with Swarm or K8s is just an added layer of abstraction on top of deploying lots of containers with a start command; it won't speed up the process, only make it easier to manage, so I'm not sure why so many are quick to recommend that for your question. More layers of abstraction don't speed up a solution. What they do allow is horizontal scaling: if you can spread these containers across more Docker hosts, an orchestration solution makes that easier to manage and gives you some automated recovery from faults.

The run command is a wrapper around create/start, and all of these docker commands are small wrappers around a REST API to dockerd. You can go directly to that API, but the time is likely spent setting up the namespaces, including IPAM and iptables rules for the networking. I'm not aware of any parallel APIs that would speed up starting lots of containers. One option to speed this up is to remove some of the namespace isolation, or look at other driver options for creating a namespace. Switching to host networking completely skips the container bridge network and iptables rules, putting your container in the same network namespace as the host.
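As a rough sketch of the host-networking option through the Go SDK (not something from your setup: the image name is a placeholder and the `ContainerCreate` signature assumes a reasonably recent SDK version), it looks something like this:

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// createAndStartHostNet creates and starts a container in the host's network
// namespace, so no per-container veth/bridge/iptables setup is performed.
// Equivalent to `docker run --network host` split into create + start.
func createAndStartHostNet(ctx context.Context, cli *client.Client, image string) (string, error) {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: image},
		&container.HostConfig{NetworkMode: "host"},
		nil, // networking config: not needed in host mode
		nil, // platform: let the daemon pick
		"")  // auto-generated container name
	if err != nil {
		return "", err
	}
	return resp.ID, cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{})
}
```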

Or, staying with networking, you can configure the container with the "none" network to avoid any connection to a bridge network and the associated iptables rules, though you'll still have a loopback address. You also have the option to connect a running container to a network after starting it, so if you don't have published ports, that may be a way to split starting the container from the networking setup, depending on your use case.
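A sketch of that split with the Go SDK (image and network ID are placeholders; note Docker may refuse to attach a container that is still on the "none" network, so the sketch detaches it first):

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

// startThenAttach creates a container on the "none" network (loopback only),
// starts it, and only afterwards attaches it to an existing network.
// This won't help if you need published ports.
func startThenAttach(ctx context.Context, cli *client.Client, image, netID string) (string, error) {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: image},
		&container.HostConfig{NetworkMode: "none"}, // like `docker create --network none`
		nil, nil, "")
	if err != nil {
		return "", err
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		return "", err
	}
	// Detach from "none" first, the API equivalent of `docker network disconnect none ...`.
	if err := cli.NetworkDisconnect(ctx, "none", resp.ID, false); err != nil {
		return "", err
	}
	// Deferred networking: `docker network connect <net> <container>` after start.
	return resp.ID, cli.NetworkConnect(ctx, netID, resp.ID, &network.EndpointSettings{})
}
```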

Beyond that, if you want it faster, you may need better or more hardware, perhaps a newer kernel with performance enhancements, or you may need to remove some of the abstraction layers. If you're willing to handle some of the networking and other pieces yourself, you could connect directly to the containerd backend used by docker. Just note that in doing so, you will lose some of the functionality that docker provides.
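To give a feel for what dropping to containerd involves, here's a minimal sketch along the lines of containerd's Go client getting-started flow (socket path, namespace, container name, and image reference are all placeholders). Notice that there's no port publishing, restart policy, or docker networking here: that's the functionality you take on yourself.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket dockerd uses (path may differ on your host).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd is namespaced; docker's own containers live in the "moby"
	// namespace, but use your own namespace for containers you manage yourself.
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	// Pull and unpack an image (reference is a placeholder).
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container metadata and a snapshot for its root filesystem.
	container, err := client.NewContainer(ctx, "demo-1",
		containerd.WithNewSnapshot("demo-1-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Create and start the running task (the actual process).
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```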

Answered Oct 31 '22 by BMitch