The aim here is to use a Docker container as a secure sandbox for running untrusted Python scripts, but to do so from within Python using the docker-py module and to capture the output of those scripts.
I'm running a Python script foo.py inside a Docker container (it's set as the ENTRYPOINT command in my Dockerfile, so it's executed as soon as the container is run) and am unable to capture the output of that script. When I run the container via the normal CLI using

    docker run -v /host_dirpath:/cont_dirpath my_image

(host_dirpath is the directory containing foo.py) I get the expected output of foo.py printed to stdout, which is just a dictionary of key-value pairs. However, I'm trying to do this from within Python using the docker-py module, and somehow the script output is not being captured by the logs method. Here's the Python code I'm using:
    from docker import Client

    docker = Client(base_url='unix://var/run/docker.sock', version='1.10', timeout=10)
    contid = docker.create_container('my_image', volumes={"/cont_dirpath": ""})
    docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
    print "Docker logs: " + str(docker.logs(contid))
This just prints "Docker logs: " - nothing is captured in the logs, neither on stdout nor on stderr (I tried raising an exception inside foo.py to test this).
The results I'm after are calculated by foo.py and are currently just printed to stdout with a Python print statement. How can I get this included in the Docker container logs so I can read it from within Python? Or capture this output some other way from outside the container?
Any help would be greatly appreciated. Thanks in advance!
EDIT:
Still no luck with docker-py, but it works well when the container is run through the normal CLI using subprocess.Popen - the output is correctly captured on stdout that way.
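For reference, a minimal sketch of that subprocess workaround might look like the following (the image name and mount paths are the placeholders from the question; the exact Popen arguments are an assumption, not the asker's actual code):

    # Sketch of the subprocess.Popen workaround mentioned in the edit above.
    import subprocess

    proc = subprocess.Popen(
        ["docker", "run", "-v", "/host_dirpath:/cont_dirpath", "my_image"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate()
    print(out)  # the dictionary printed by foo.py ends up here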
docker logs <container id> will show you all the output of the container run. If you're running it on ECS, you'll probably need to set DOCKER_HOST=tcp://ip:port for the host that ran the container.

My container is already stopped. When I run docker run -d image on the command line, it returns the container id.
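The same logs can also be read through docker-py for a container that has already stopped by passing its id to logs(). A minimal sketch, assuming the old-style Client from the question and a hypothetical container id:

    # Sketch: reading the logs of an already-stopped container by id.
    from docker import Client

    docker = Client(base_url='unix://var/run/docker.sock', version='1.10')
    container_id = "<container id>"  # hypothetical id, e.g. as returned by `docker run -d my_image`
    print(docker.logs(container_id))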
You have to use two key combinations, one after the other: Ctrl+P followed by Ctrl+Q. This switches the container from interactive mode to detached mode, which keeps it running but frees up your terminal. You can attach to it again later using docker attach if you need to interact with the container further.
You are experiencing this behavior because Python buffers its output by default.
Take this example:
    vagrant@docker:/vagrant/tmp$ cat foo.py
    #!/usr/bin/python
    from time import sleep

    while True:
        print "f00"
        sleep(1)
Observing the logs of a container that runs this script as a daemon then shows nothing:
    vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app dockerfile/python python /app/foo.py)
But if you disable Python's buffered output with the -u command-line parameter, everything shows up:
    vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app dockerfile/python python -u /app/foo.py)
    f00
    f00
    f00
    f00
You can also inject the PYTHONUNBUFFERED environment variable:
    vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app -e PYTHONUNBUFFERED=0 dockerfile/python python /app/foo.py)
    f00
    f00
    f00
    f00
Note that this behavior affects only containers running without the -t or --tty parameter.
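Applied to the docker-py code from the question, one way to do this is to inject the environment variable when the container is created. This is only a sketch, assuming that the old-style Client's create_container accepts an environment argument and that wait() is available in the docker-py version in use; my_image and the paths are the question's placeholders:

    # Sketch: same flow as the question, with Python's output buffering disabled
    # inside the container via PYTHONUNBUFFERED, and the logs read after exit.
    from docker import Client

    docker = Client(base_url='unix://var/run/docker.sock', version='1.10', timeout=10)
    contid = docker.create_container(
        'my_image',
        volumes={"/cont_dirpath": ""},
        environment={"PYTHONUNBUFFERED": "1"},  # stop Python from buffering stdout
    )
    docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
    docker.wait(contid)  # let foo.py finish before reading its output
    print(docker.logs(contid))

Alternatively, changing the image's ENTRYPOINT to run python -u foo.py gives the same effect without touching the docker-py call.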