I have a Python script in my Docker container that needs to be executed, but I also need interactive access to the container (with /bin/bash) once it has been created.
I would like to create my container, have my script execute automatically, and then be inside the container to see the changes/results (without having to run the Python script manually).
The issue I am currently facing is that if I use the CMD or ENTRYPOINT instructions in the Dockerfile, I am unable to get back into the container once it has been created. I tried using docker start and docker attach, but I get this error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stopped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my Python script contains something like this (what it does is irrelevant; in this case it just creates a new file with some text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container, I want my script to execute, and I would like to be able to see the content of the file. Something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash4.1# ls
newfile.txt
bash4.1# cat newfile.txt
Created new file with text
bash4.1# exit
root@66bddaa892ed#
In the example above, my Python script would have executed upon creation of the container, generating the new file newfile.txt. This is what I need.
Docker run vs docker exec: this is a fairly common question, but it has a simple answer. In short, docker run is the command you use to create a new container from an image, whilst docker exec lets you run commands in an already running container.
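A minimal sketch of the difference, borrowing the Redis image used in the example further down (the name my-redis is just an illustration):

# docker run pulls the image if needed, then creates and starts a NEW container
docker run -d --name my-redis redis

# docker exec runs an additional command inside that ALREADY RUNNING container
docker exec my-redis redis-cli ping    # should print PONG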
-i (interactive) is about whether to keep stdin open (some programs, like bash, use stdin and other programs don't). -d (detached) is about whether the docker run command waits for the process being run to exit. Thus, they are orthogonal and not inherently contradictory.
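For example (a rough sketch; the images and names are just for illustration):

# -i without -d: stdin stays open and docker run waits in the foreground
docker run -i ubuntu cat

# -d without -i: docker run returns immediately, no stdin is attached
docker run -d redis

# -i and -d together: the container keeps stdin open, but you are detached from it
# (you can rejoin it later with docker attach bg-shell)
docker run -itd --name bg-shell ubuntu /bin/bash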
To demonstrate running commands in a Docker container interactively, we will take the example of Redis. We can first start a Redis Docker container in the background using the command below:

docker run -d redis

This will pull the Redis Docker image from Docker Hub (if it is not already present) and start a container running it.
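You can then confirm that the container is up, for example:

# list running containers; the detached Redis container should appear here
docker ps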
To this end, Docker provides the docker exec command to run programs in containers that are already running. Below we will see how to use docker exec to run commands and to get an interactive shell in a running Docker container.
If you need to run a command inside a running Docker container, but don't need any interactivity, use the docker exec command without any flags:

docker exec container-name tail /var/log/date.log

This command will run tail /var/log/date.log in the container-name container and output the results.
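If you do want an interactive shell in that running container instead, add the -i and -t flags (this assumes the container's image has bash installed):

docker exec -it container-name /bin/bash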
@AlexandrosIoannou: there are several ways to do this in a Dockerfile. 1) CMD ["bash", "-c", "<your-script-full-path>; bash"] will define a default command in the Dockerfile. With that, you can run 'sudo docker run -it <image-name>' without specifying the command.
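A sketch of what such a Dockerfile could look like for the script in this question (the python:3 base image and the /app directory are assumptions, not something from the original setup):

# Dockerfile (illustrative)
FROM python:3
WORKDIR /app
COPY myscript.py .
# run the script first, then drop into an interactive bash session
CMD ["bash", "-c", "python myscript.py; bash"]

Built with docker build -t image . and started with sudo docker run -it image, this would execute myscript.py and leave you at a bash prompt in /app, where newfile.txt should now exist.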
My way of doing it is slightly different and has some advantages. It is actually a multi-session server rather than a script, but it could be even more usable in some scenarios:
# Just create an interactive container; don't start it yet, but name it for future reference.
# Use your own image.
docker create -it --name new-container <image>

# Now start it.
docker start new-container

# Now attach a bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing a log, and in another session do the actual commands.
BTW, when you detach from the last 'exec' session, your container is still running, so it can keep performing operations in the background.
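A rough sketch of that two-session workflow (the container name follows the example above; the log path /var/log/app.log is just a placeholder):

# terminal 1: follow a log inside the running container
docker exec -it new-container tail -f /var/log/app.log

# terminal 2: run the actual commands in a second session of the same container
docker exec -it new-container bash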