I keep getting a connection timeout while pulling an image:
First, it starts downloading the first 3 layers; once one of them finishes, the 4th layer tries to start downloading. The problem is that it won't actually start until the two remaining layers finish their downloads, and before that happens (I think) the fourth layer times out and aborts the whole pull. So I was wondering whether downloading the layers one by one would solve this problem, or whether there is a better way/option to deal with this issue, which can occur when you don't have a very fast internet connection.
In the command line interface (CLI), each layer of a Docker image is viewable under /var/lib/docker/aufs/diff (when using the aufs storage driver) or via the docker history command. By default, Docker lists all top-level images, including their repository, tags, and file sizes.
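For example, to inspect the layers of a local image (nginx here is just an illustrative image name; substitute your own):

docker history nginx
docker images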
Docker enables you to pull an image by its digest. When pulling an image by digest, you specify exactly which version of an image to pull. Doing so allows you to "pin" an image to that version and guarantees that the image you're using is always the same.
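A quick sketch of what that looks like (the digest below is a placeholder, not a real value; docker images --digests shows the actual digests of your local images):

docker images --digests
docker pull nginx@sha256:&lt;digest&gt;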
Docker has limited patience when it comes to stopping containers: there is a timeout, 10 seconds by default, for each container. If a container does not respond to the SIGTERM signal within that window, Docker kills it with SIGKILL.
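You can extend that grace period per stop with the --time flag (mycontainer is a hypothetical container name):

docker stop --time 30 mycontainer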
The Docker daemon has a --max-concurrent-downloads option. According to the documentation, it sets the maximum number of concurrent downloads for each pull (the default is 3). So you can start the daemon with

dockerd --max-concurrent-downloads 1

to get the desired effect.
See the dockerd documentation for how to set daemon options on startup.
If Docker is already running on Ubuntu, follow these steps:

sudo service docker stop
sudo dockerd --max-concurrent-downloads 1

Download your images, then stop the daemon in that terminal (Ctrl+C) and start the Docker service again as before:
sudo service docker start
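To make the setting persistent instead of passing the flag by hand each time, you can also set it in the daemon configuration file, a sketch assuming the default Linux location /etc/docker/daemon.json (create the file if it does not exist):

{
  "max-concurrent-downloads": 1
}

Then apply it with sudo service docker restart.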