The origin of this question is that pulling from a private Docker registry is extremely slow, while downloading the same data with wget through the registry's remote API is fast.
I have now downloaded the layers of one image that way. How can I load them into my Docker daemon as an image?
PS: I tried docker load < layer.0, where layer.0 is the base layer of the image, and got the following error:
FATA[0015] Error: open /home/docker/data/docker/tmp/docker-import-087506163/repo/etc/json: no such file or directory
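For context, downloading an image's layers directly over the Registry HTTP API v2 can be sketched like this (fetch_layers is a hypothetical helper; it assumes an unauthenticated v2 registry, skips token auth, and naively grabs every digest in the manifest, including the config blob):

```shell
# Sketch only: download all blobs referenced by an image manifest from a
# Docker Registry v2. Registry host, repository, and tag are placeholders.
fetch_layers() {
  local registry="$1" repo="$2" tag="$3"
  # The v2 manifest lists each layer by its content digest.
  curl -s "https://${registry}/v2/${repo}/manifests/${tag}" \
       -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    | grep -o 'sha256:[0-9a-f]\{64\}' \
    | sort -u \
    | while read -r digest; do
        # Each blob is fetched by digest; -L follows the redirect many
        # registries issue to a backing blob store.
        curl -sL -o "${digest#sha256:}.tar.gz" \
          "https://${registry}/v2/${repo}/blobs/${digest}"
      done
}

# Usage (all names are examples):
# fetch_layers registry.example.com library/myimage latest
```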
All layers are diffs; you have to load them all for your image to be complete. There is no single file that contains the whole image. This is by design, because it allows those layers to be shared as the base for subsequent images. Note also that docker load expects the tarball format produced by docker save, including its metadata files, not a bare layer archive, which is likely why loading layer.0 alone fails with the json error above.
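You can see the per-layer structure by listing the contents of a docker save tarball (list_layers is a hypothetical helper; it requires a running Docker daemon, and the layer.tar naming reflects the older save format from this era):

```shell
# Sketch only: list the per-layer archives inside a `docker save` tarball.
# The image name is a placeholder; docker must be installed and the image
# present locally.
list_layers() {
  local image="$1"
  docker save "$image" | tar -t | grep 'layer.tar$'
}

# Usage (image name is an example):
# list_layers myimage:latest
```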
You can flatten an image with docker export followed by docker import. Export takes all the existing layers and writes them out as a single filesystem image in a tarball; import then brings that tarball back in as a new, single-layer image.
Steps:

1. docker run --name mycontainer <image> — run the image to create a container (the container name mycontainer is just an example).
2. docker export --output=mycontainer.tar mycontainer — export the container to a tarball (mycontainer.tar is just an example).
3. cat mycontainer.tar | docker import - mynewimage:imported — import the tarball (the image name mynewimage:imported is just an example).
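The steps above can be sketched as one helper (flatten_image is a hypothetical function; it assumes a running Docker daemon, that the image is already present, and that the image has a default command for docker run):

```shell
# Sketch of the flatten-by-export/import steps; all names are examples.
flatten_image() {
  local image="$1" new_tag="$2" name="flatten-tmp-$$"
  docker run --name "$name" "$image" &&            # 1. run the image to create a container
  docker export --output="${name}.tar" "$name" &&  # 2. export its filesystem to a tarball
  docker import "${name}.tar" "$new_tag"           # 3. import the tarball as a single-layer image
  docker rm "$name" >/dev/null 2>&1                # clean up the intermediate container
  rm -f "${name}.tar"                              # and the intermediate tarball
}

# Usage (image names are examples):
# flatten_image myimage:latest mynewimage:imported
```

Note that flattening discards layer sharing and image history, so it trades reusability for a single self-contained layer.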