I have a deployment setup with Docker that works as follows:
I'd like to do these steps as quickly as possible, but they take an incredibly long time. Even for an image of modest size (750MiB, including the standard ubuntu base and friends), it takes 17 minutes to deploy after a small modification. I optimized the order of the items in my Dockerfile so that it actually hits the cached images most of the time, but this doesn't seem to make a difference.
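For illustration, this is the kind of ordering I mean (a minimal sketch; the application, package, and file names are just placeholders): the rarely-changing steps come first, so a small source change only rebuilds the last layers.
FROM ubuntu:14.04
# Rarely changes: stays cached across rebuilds
RUN apt-get update && apt-get install -y python python-pip
# Dependency list changes occasionally
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Application source changes on every deploy: only these layers rebuild
COPY . /app
CMD ["python", "/app/main.py"]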
The main culprit is the docker push step. For both Docker Hub and Quay.io, it takes an unreasonably long time to push images. In one simple benchmark, I ran docker push twice back to back, so all the images were already on the registry, and all I see is lines like these:
...
bf84c1d841244f: Image already pushed, skipping
...
But if I time the push, the performance is horrendous. Pushing to Quay.io takes 3.5 minutes when all the images are already on the server! Pushing to Docker Hub takes about 12 minutes!
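(For reference, those numbers are just the wall-clock time around the push command, along the lines of the following, where the repository name is a placeholder:)
time docker push myorg/myimage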
There is clearly something wrong somewhere; many people are using Docker in production, and these times are the exact opposite of continuous delivery.
How can I make this run quicker? Do others also see this kind of performance? Does it have to do with the registry services, or is it somehow related to my local machine?
I am using Docker under Mac OS X.
If you make an image from multiple layers which modify the same files, and those files are large in each layer, squashing can reduce the total size of your image. Beware that the squashed image will consist of a single layer, so won't be able to share layers with other images.
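One way to squash an existing image (a rough sketch; the image and container names are placeholders) is to export a container and re-import it as a single-layer image. Note that docker import discards metadata such as CMD and ENV, which then have to be re-applied.
docker create --name flatten-tmp myorg/myimage:latest
docker export flatten-tmp | docker import - myorg/myimage:squashed
docker rm flatten-tmp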
Docker Hub can automatically build images from source code in an external repository and automatically push the built image to your Docker repositories. That way the large image is built and pushed on Docker Hub's side, and you only push your source changes from your machine.
Just a note: I run my own Docker registry local to the machine I am issuing the "docker push" command on, and it still takes an inordinate amount of time. It is definitely not a disk I/O issue, since the disks are SSDs (and, to be clear, they sustain ~500+ MB/sec for everything else that uses them). Yet the docker push command takes just as long as if I were sending the image to a remote site, so I think something beyond "bandwidth" is going on. My suspicion is that even though my registry is local, docker push still attempts to use the NIC to transfer the data (which seems to make sense, given that the push destination is a URI and the registry itself runs as a container).

That said, I can copy the same files to where they will ultimately reside in the local registry orders of magnitude faster than the push command can, so perhaps that kind of direct copy points at a workaround. What is clear is that the problem is not one of bandwidth per se, but of the data path in general.

At any rate, running a local registry is not likely to (totally) solve the OP's issue. I have only just started to investigate, but I suspect a code change to Docker is needed to resolve this. I don't think it is a bug so much as a design challenge: URIs and host<->host communication require network stacks, even when the source and destination are the same machine/host/container.
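For anyone who wants to try a local registry themselves, the setup is roughly this (the port and image names are placeholders):
# Run a registry container listening on localhost:5000
docker run -d -p 5000:5000 --name registry registry:2
# Tag an existing image for the local registry and push to it
docker tag myorg/myimage localhost:5000/myimage
docker push localhost:5000/myimage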