 

What Docker image size is considered 'too large'?

In my previous company, we adopted a micro-service architecture and used Docker to implement it. The average size of our Docker images were ~300MB - ~600MB. However my new company is using Docker mostly for development workflow, and the average image size is ~1.5GB - ~3GB. Some of the larger images (10GB+) are being actively refactored to reduce the image size.

From everything I have read, I feel that these images are too large and we will run into issues down the line, but the rest of the team feels that Docker Engine and Docker Swarm should handle those image sizes without problems.

My question: Is there an accepted ideal range for Docker images, and what pitfalls (if any) will I face trying to use a workflow with GB images?

asked Jul 26 '16 by tdensmore

People also ask

What is the maximum size of a Docker image?

Docker itself does not impose a fixed image size limit, but platforms do; for example, the maximum size for a deployable container image on Azure Container Instances is 15 GB.

Why is Docker image size too big?

A Docker image takes up more space with every layer you add to it. Therefore, the more layers you have, the more space the image requires. Each RUN instruction in a Dockerfile adds a new layer to your image. That is why you should try to do file manipulation inside a single RUN command.


Does size of Docker image matter?

Docker images are a core component in our development and production lifecycles. Having a large image can make every step of the process slow and tedious. Size affects how long it takes to build the image locally, on CI/CD, or in production and it affects how quickly we can spin up new instances that run our code.


1 Answer

In my opinion, the "ideal" size is only ideal for your exact case. At my current company, for example, we have no image bigger than 1GB.

If you use a 10GB image and have no problems (is that even possible?!), then it is OK for your case.

As an example of a problem case, ask yourself: "Is it OK that I wait 1-2 hours while my image deploys over the internet to a remote server or dev machine?" In all likelihood, it is not. On the other hand, if you are not facing such problems, then image size is not a problem for you at all.
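The 1-2 hour figure above is easy to sanity-check with back-of-the-envelope arithmetic (the 20 Mbit/s uplink here is an assumption for illustration, and this ignores compression and layer caching):

```shell
# A 10 GB image is roughly 10 * 8 * 1000 megabits; at 20 Mbit/s that takes:
echo "$(( 10 * 8 * 1000 / 20 )) seconds"   # 4000 seconds, roughly 1.1 hours
```

At gigabit speeds the same image transfers in under two minutes, which is why image size hurts most on slow links and frequent redeploys.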

Another problem is that while small images start up in a couple of seconds, a huge one can take minutes. That can also break a "hot deploy" scheme, if you use one.

It is also worth checking why your image is so big in the first place. You can read up on how image layers work.
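One way to see where the space goes is `docker history`, which lists each layer with its size and the instruction that created it (this is a CLI sketch; it needs a Docker daemon and a locally available image, and the image name here is a placeholder):

```shell
# Show per-layer sizes for an image, largest contributors included,
# with the full Dockerfile instruction that produced each layer.
docker history --no-trunc --format "{{.Size}}\t{{.CreatedBy}}" my-image:latest
```

Layers with unexpectedly large sizes usually point straight at the RUN or COPY instruction that needs restructuring.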

Consider the following two Dockerfile fragments (the URL is hypothetical, standing in for any ~5GB download):

First:

RUN wget https://example.com/huge-archive.tar.gz
RUN rm huge-archive.tar.gz

Second:

RUN wget https://example.com/huge-archive.tar.gz && \
    rm huge-archive.tar.gz

The image built from the first fragment weighs ~5GB more than the one from the second, even though their final filesystems are identical. Each RUN instruction creates a new layer, so the first image permanently stores the 5GB file in one layer and merely marks it deleted in the next, while the second downloads and removes the file within a single layer, adding almost nothing.

Another trick is to start from a small base image. Just compare the sizes:

IMAGE NAME     SIZE
busybox        1 MB
alpine         3 MB
debian         125 MB
ubuntu         188 MB 

While debian and ubuntu are almost the same inside, debian saves you over 60MB from the start and tends to pull in fewer dependencies later on.
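One more size-reduction technique worth knowing is multi-stage builds (added in Docker 17.05, so after this question was asked): build with a full toolchain, then copy only the finished artifact into a small final image. A minimal sketch, assuming a Go project (the paths and binary name are hypothetical):

```dockerfile
# build stage: full Go toolchain, several hundred MB
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# final stage: only the compiled binary on a few-MB base
FROM alpine
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Only the final stage is shipped, so the toolchain's weight never reaches the deployed image.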

answered Oct 14 '22 by Evedel