
Run Docker inside repository

Tags:

git

docker

I have been asked to solve a challenge in C++ that runs on Linux. I developed my solution on macOS and created a repo on GitHub. To make the repo easier to run, I also created a Dockerfile that installs all the required packages.

My repo looks like:

UBIMET_Challenge

-- Dockerfile
-- Rest of .cpp files
-- build

My Dockerfile looks like:

FROM rikorose/gcc-cmake
RUN apt-get update && apt-get install -y libpng-dev
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make
RUN ./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv

Is this a good way to present my work? It is for a job interview :) Moreover, the last line of the Dockerfile generates two files; what I do afterwards is:

docker build --tag <name_process> .
docker run -t -i <name_process> /bin/bash

Then I get the ID of the running container and copy the file from the container to my local machine:

docker cp <container_id>:/UBIMET_Challenge/build/heatmap0.png .

In this example, heatmap0.png is copied to my local machine.

So my questions:

  • Can I modify my Dockerfile so it does not clone the repo, given that the Dockerfile itself is already inside the same repo?
  • Is there a nicer way to access the data generated by the container from my local machine (e.g. files saved automatically to the host)?
Hector Esteban asked Jan 20 '26 09:01

1 Answer

Can I modify my Dockerfile so it does not clone the repo, given that the Dockerfile itself is already inside the same repo?

YES.

You should generally avoid putting git commands in your Dockerfile, for three reasons:

  • The Dockerfile as you've shown it can only build committed changes on master: it can't build an image for integration testing of a proposed change, or from a branch.
  • Because of Docker layer caching, if you update the repository, the image won't actually get rebuilt.
  • If the repository is private, correctly managing the credentials (and not leaking them into the final image) is tricky.

Instead, COPY the source code from the build context into the image.
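For example, assuming the Dockerfile sits at the root of the repository and you run `docker build` from there, the clone can be replaced with a `COPY` of the build context (the base image and paths here mirror the question; adjust as needed):

```dockerfile
FROM rikorose/gcc-cmake
RUN apt-get update && apt-get install -y libpng-dev

# Copy the repository contents from the build context instead of cloning
WORKDIR /UBIMET_Challenge
COPY . .

RUN mkdir -p build
WORKDIR /UBIMET_Challenge/build
RUN cmake .. && make
```

Adding a `.dockerignore` file containing `build/` keeps the host's build directory out of the image, so the container always does a clean build.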

Is there a nicer way to access the data generated by the container from my local machine (e.g. files saved automatically to the host)?

I wouldn't use Docker for this case and would run the process directly on the host. Since the major goal of your program is to write a file to the local system, you actively don't want Docker's filesystem isolation, and you're not taking advantage of any of Docker's other features (networking, for example) or planning to run this in a clustered environment.

You might look at tools like Vagrant for the specific workflow you've set up here. Even if you configure it to be backed by Docker, it's a little more purpose-designed for having only an alternate-OS build environment.

docker run -t -i <name_process> /bin/bash

Generally you should think of a container as a wrapper around a single process. In the same way that you're not going to open up a shell in your Web browser and cp files out of it, this isn't a really convenient way to work with Docker.

Instead of RUNning the program in the Dockerfile, you can make it the default CMD when the image is run. I might set up:

...
RUN make

WORKDIR /UBIMET_Challenge
RUN mkdir output
CMD ["./build/ubimet", "./data/1706221600.29", "./output/output.csv"]

Then, when you run the container, you bind-mount a host directory over the output directory:

docker run --rm -v "$PWD":/UBIMET_Challenge/output <image_name>

and the result file will be put there.
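Putting it together, the complete workflow from the host might look like this (the image name ubimet is just an example):

```shell
# Build the image from the repository root
docker build -t ubimet .

# Run it, bind-mounting the current directory over the container's output directory;
# --rm removes the container when the process exits
docker run --rm -v "$PWD":/UBIMET_Challenge/output ubimet

# output.csv (and any other generated files) now appear in the current directory
ls output.csv
```

This avoids the `docker run /bin/bash` + `docker cp` dance entirely: the container runs one process, writes its results through the bind mount, and exits.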

David Maze answered Jan 23 '26 03:01


