I'm writing a C++ program and want to run it with Docker. The Dockerfile looks like the following:
FROM gcc:7.2.0
ENV MYP /repo
WORKDIR ${MYP}
COPY . ${MYP}
RUN /bin/sh -c 'make'
ENTRYPOINT ["./program"]
CMD ["file1", "file2"]
The program needs two input files (file1 and file2) and is built and executed as follows:
docker build -t image .
docker run -v /home/user/tmp/:/repo/dir image dir/file1 dir/file2
These input files are located on the host in /home/user/tmp/. In the original repository (repo/), the executable sits in the root directory, and the generated output file is saved in the same folder (i.e. they end up as repo/program and repo/result.md).
When I run the above docker run command, I can see from the standard output that the executable correctly reads the input files and generates the expected results. However, I expected the output file (written by the program with std::ofstream) to also end up in the mounted directory /home/user/tmp/, but it's not there.
How can I access this file? Is there a straightforward way to get it using the docker volume mechanism?
Docker version is 18.04.0-ce, build 3d479c0af6.
EDIT
The relevant code showing how the program saves the output file result.md is the following:
std::string filename ("result.md"); // in the actual code this name is not hard-coded and depends on input, but it will not include / chars
std::ofstream output_file;
output_file.open(filename.data(), std::ios::out);
output_file << some_data << etc << std::endl;
...
output_file.close();
In practice, the program is run as program file1 file2, and the output is saved in the current working directory, no matter whether that is the same directory where program is located or not.
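To double-check where the relative path resolves, the working directory can be printed just before opening the stream; this is only a minimal sketch (the getcwd call is for illustration and is not in the real code):
#include <unistd.h>   // getcwd
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Illustration only: show the directory a relative filename resolves against.
    char cwd[4096];
    if (getcwd(cwd, sizeof(cwd)) != nullptr) {
        std::cout << "result.md will be created under: " << cwd << std::endl;
    }

    std::string filename("result.md");
    std::ofstream output_file(filename, std::ios::out);
    output_file << "some data" << std::endl;
    return 0;
}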
You need to make sure you save your file into the mounted directory. Right now it looks like your file is being saved as a sibling of your program, which is just outside of the mounted directory.
Since you mount with:
docker run -v /home/user/tmp/:/repo/dir image dir/file1 dir/file2
/repo/dir is the only folder whose changes you will see on the host. If you save files to /repo instead, they do get written there, but they are not visible on the host after the container exits.
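One way to get the file onto the host without touching the program at all (assuming make leaves the binary at /repo/program, as in your Dockerfile) is to make the mounted folder the working directory and point the entrypoint at the binary by absolute path:
docker run --rm -v /home/user/tmp/:/repo/dir -w /repo/dir --entrypoint /repo/program image file1 file2
Here file1 and file2 resolve relative to /repo/dir, and result.md is also created there, so it appears in /home/user/tmp/ on the host.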
Consider how you open your output file:
std::string filename ("result.md"); // in the actual code this name is not hard-coded and depends on input, but it will not include / chars
std::ofstream output_file;
output_file.open(filename.data(), std::ios::out);
output_file << some_data << etc << std::endl;
...
output_file.close();
Since you set the output file to "result.md" with no path, it is created in the working directory (/repo), as a sibling of program.
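If you would rather keep the docker run command from the question as-is, the program can instead prefix the filename with the mounted folder; a minimal sketch (the "dir/" prefix is just an example here and would normally come from an argument or environment variable):
#include <fstream>
#include <string>

int main() {
    // Example only: "dir/" is the mountpoint relative to the working
    // directory /repo; adjust it to wherever the host volume is mounted.
    std::string out_dir = "dir/";
    std::string filename = out_dir + "result.md";

    std::ofstream output_file(filename, std::ios::out);
    output_file << "some data" << std::endl;
    return 0;
}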
If you were to run
docker run -it --rm --entrypoint=/bin/bash image
which opens an interactive shell in your image, then ran ./program some-file.txt some-other-file.txt followed by ls, you would see the output file result.md as a sibling of program. That is outside of your mountpoint, which is why you don't see it on your host machine.
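Something like the following session illustrates this (the input file names are placeholders):
docker run -it --rm -v /home/user/tmp/:/repo/dir --entrypoint /bin/bash image
# inside the container:
./program dir/file1 dir/file2
ls        # result.md shows up next to program in /repo ...
ls dir    # ... but not inside the mounted folder /repo/dir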
Consider this example program. It takes an input file and an output location, reads each line of the infile, and wraps it in <p> tags. /some is the repository directory on the host, and /some/res/ is the folder that will be mounted to /repo/res/ in the container.
I provide two arguments to my program through docker run, the infile and the outfile, both relative to /repo, which is the working directory. The program then saves to the outfile location, which is inside the mountpoint (/repo/res/). After docker run finishes, /some/res/out.txt is populated.
.
├── Dockerfile
├── README.md
├── makefile
├── res
│ └── in.txt
└── test.cpp
docker build -t image .
docker run --rm -v ~/Desktop/some/res/:/repo/res/ image ./res/in.txt ./res/out.txt
FROM gcc:7.2.0
ENV MYP /repo
WORKDIR ${MYP}
COPY . ${MYP}
RUN /bin/sh -c 'make'
ENTRYPOINT ["./test"]
CMD ["file1", "file2"]
test: test.cpp
	g++ -o test test.cpp

.PHONY: clean
clean:
	rm -f test
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
    if (argc < 3) {
        std::cout << "Usage: test [infile] [outfile]" << std::endl;
        return 1;
    }

    std::cout << "All args" << std::endl;
    for (int i = 0; i < argc; i++) {
        std::cout << argv[i] << std::endl;
    }

    std::string line;
    std::ifstream infile(argv[1]);
    std::ofstream outfile(argv[2]);
    if (!(infile.is_open() && outfile.is_open())) {
        std::cerr << "Unable to open files" << std::endl;
        return 1;
    }

    while (getline(infile, line)) {
        outfile << "<p>" << line << "</p>" << std::endl;
    }
    outfile.close();
    return 0;
}
res/in.txt:
hello
world
res/out.txt (after the run):
<p>hello</p>
<p>world</p>
I would like to post the Dockerfile I'm using right now, in the hope it can be useful to somebody. It doesn't require specifying a name or path for the output files: output files are always written to $PWD.
FROM alpine:3.4
LABEL version="1.0"
LABEL description="some nice description"
LABEL maintainer="[email protected]"
RUN apk update && apk add \
gcc \
g++ \
make \
git \
&& git clone https://gitlab.com/myuser/myrepo.git \
&& cd myrepo \
&& make \
&& cp program /bin \
&& rm -r /myrepo \
&& apk del g++ make git
WORKDIR /tmp
ENTRYPOINT ["program"]
I only need to run:
docker run --rm -v $PWD:/tmp image file1 file2
Inside the image, the working directory is fixed to /tmp, which is the directory the host path is mounted onto with the -v option. After running the image, all output files end up in the corresponding working directory on the host machine.
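For example (assuming the input files sit in the current host directory and the program writes result.md as described above):
cd /home/user/tmp
docker run --rm -v $PWD:/tmp image file1 file2
ls    # file1  file2  result.md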