Dockerfile - Hide --build-args from showing up at build time

I have the following Dockerfile:

FROM ubuntu:16.04

RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
    git \
    make \
    python-pip \
    python2.7 \
    python2.7-dev \
    ssh \
    && apt-get autoremove \
    && apt-get clean

ARG password
ARG username
ENV password $password
ENV username $username

RUN pip install git+http://$username:[email protected]/scm/do/repo.git

I use the following commands to build the image from this Dockerfile:

docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .

However, the username and password that I pass with --build-arg are visible in the build log:

Step 8/8 : RUN pip install git+http://$username:[email protected]/scm/do/repo.git
 ---> Running in 650d9423b549
Collecting git+http://someuser:[email protected]/scm/do/repo.git

How can I hide them? Or is there a different way of passing the credentials in the Dockerfile?

asked Jan 10 '19 by ANIL

2 Answers

Update

You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.

Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:

RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa

And now you can use that key when installing your Python project:

RUN pip install git+ssh://[email protected]/you/yourproject.git
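
One detail that is easy to miss with git+ssh: the build also needs the Git server's host key, or the clone that pip runs will stop at a host-verification prompt. A minimal sketch, assuming github.com as above and placed next to the key setup (ssh-keyscan ships with the ssh package already installed in your Dockerfile):

# record the server's host key so the non-interactive build can connect
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts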

(Original answer follows)

You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.

Instead, credentials are generally provided at runtime via one of several mechanisms:

  • Environment variables: you can place your credentials in a file, e.g.:

    USERNAME=myname
    PASSWORD=secret
    

    And then include that on the docker run command line:

    docker run --env-file myenvfile.env ...
    

    The USERNAME and PASSWORD environment variables will be available to processes in your container.

  • Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:

    docker run -v /path/to/myfile:/path/inside/container ...
    

    This would expose the file as /path/inside/container inside your container.

  • Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
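
A minimal sketch of that last option, assuming a swarm service and illustrative names (repo_password, myapp):

# create the secret from a local file and attach it to a service
docker secret create repo_password ./password.txt
docker service create --name myapp --secret repo_password myimage:v1
# inside the container the value is readable at /run/secrets/repo_password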

answered Nov 15 '22 by larsks


It's worse than that: they're in docker history in perpetuity.
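
For example, on the image built from the question's Dockerfile, the ENV lines and the build args consumed by the RUN step show up in the layer commands:

docker history --no-trunc myimage:v1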

I've done two things here in the past that work:

You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, supplying the credentials there, and then COPY the resulting .whl file into the image. Since the credentials are only ever used on the host, nothing appears in the build log or the image history.

# on the host, outside of Docker:
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:[email protected]/scm/do/repo.git

# in the Dockerfile:
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl

# then build the image as usual:
docker build .

The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like

FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:[email protected]/scm/do/repo.git

FROM ubuntu:16.04
RUN apt-get update \
 && apt-get upgrade -y \
 && apt-get install -y \
      python2.7
# pip on Ubuntu installs under /usr/local/lib/python2.7/dist-packages
COPY --from=build /usr/local/lib/python2.7/dist-packages/ /usr/local/lib/python2.7/dist-packages/
COPY ...
CMD ["./app.py"]

It's worth double-checking in the second case that nothing has leaked into your final image: anything you COPY --from=build ends up in the final image, and any ARG or ENV values used in the second stage will still show up in its history.
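
One way to double-check, assuming the final image is tagged myimage:v1 as in the question (the tag is illustrative):

# the history should contain only the second stage's instructions,
# and no credentials should appear in the image's environment
docker history --no-trunc myimage:v1
docker inspect --format '{{.Config.Env}}' myimage:v1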

answered Nov 15 '22 by David Maze