
How to run shell script on host from docker container?

Tags:

docker



Use a named pipe. On the host OS, create a script that loops, reading commands from the pipe and calling eval on them.

Have the docker container write to that named pipe.

To be able to access the pipe, you need to mount it via a volume.
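
A minimal sketch of the two sides (the paths and the image name are just examples; a detailed walkthrough appears in another answer below):

# On the host: create the pipe and loop, executing whatever arrives
mkfifo /path/to/pipe/mypipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done

# Start the container with the pipe's folder mounted as a volume
docker run -v /path/to/pipe:/hostpipe myimage

# Inside the container: send a command to the host
echo "touch /tmp/created_from_container" > /hostpipe/mypipe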

This is similar to the SSH mechanism (or a similar socket-based method), but it properly restricts you to the host device, which is probably better. Plus, you don't have to pass around authentication information.

My only warning is to be careful about why you are doing this. It's a reasonable approach if you want to create a self-upgrade method driven by user input or the like, but you probably don't want to call a command just to fetch some config data; the proper way is to pass that in as args or a volume into docker. Also, be aware that you are eval-ing, so give the permission model some thought.

Some of the other answers, such as running a script under a volume, won't work generically, since the script won't have access to the full system resources, but that might be more appropriate depending on your usage.


The solution I use is to connect to the host over SSH and execute the command like this:

ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"

UPDATE

As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should have no permissions at all, except for executing that one script as sudo (this can be set up in the sudoers file).
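
For example, a sudoers entry along these lines (the user name scriptrunner and the script path are hypothetical) restricts the account to that single script:

# /etc/sudoers.d/scriptrunner -- edit with visudo
scriptrunner ALL=(root) NOPASSWD: /usr/local/bin/the_script.sh

The container would then invoke ssh -l scriptrunner ${HOSTNAME} "sudo /usr/local/bin/the_script.sh" and nothing else.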

UPDATE: Named Pipes

The solution I suggested above was just the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about Named Pipes; they seem to be a better solution.

However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must not simply eval the whole pipe output; it should handle specific cases and call the required commands according to the text sent, otherwise any command at all could be sent through the pipe and executed (see the sketch below).
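
For example, instead of eval-ing raw pipe input, the host-side listener can map a fixed set of keywords to commands and reject everything else (a minimal sketch; the keywords and commands are illustrative):

#!/bin/bash
# Host-side listener: only whitelisted keywords are executed
while true; do
    cmd="$(cat /path/to/pipe/mypipe)"
    case "$cmd" in
        upgrade) /usr/local/bin/upgrade.sh ;;
        status)  systemctl status myservice ;;
        *)       echo "rejected: $cmd" >&2 ;;
    esac
done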


This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so the credit goes to him.

In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.

I have to admit I didn't know what named pipes were at the time I read his solution, so I struggled to implement it (while it's actually really simple), but I did succeed. So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.

PART 1 - Testing the named pipe concept without docker

On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:

mkfifo /path/to/pipe/mypipe

The pipe is created. Type

ls -l /path/to/pipe/mypipe 

And check that the access rights start with "p", such as

prw-r--r-- 1 root root 0 mypipe

Now run:

tail -f /path/to/pipe/mypipe

The terminal is now waiting for data to be sent into this pipe.

Now open another terminal window.

And then run:

echo "hello world" > /path/to/pipe/mypipe

Check the first terminal (the one running tail -f); it should display "hello world".

PART 2 - Run commands through the pipe

On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will execute the input as commands:

eval "$(cat /path/to/pipe/mypipe)"

Then, from the other terminal, try running:

echo "ls -l" > /path/to/pipe/mypipe

Go back to the first terminal and you should see the result of the ls -l command.

PART 3 - Make it listen forever

You may have noticed that in the previous part, right after the ls -l output is displayed, it stops listening for commands.

Instead of eval "$(cat /path/to/pipe/mypipe)", run:

while true; do eval "$(cat /path/to/pipe/mypipe)"; done

(you can run that under nohup, as shown below)

Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
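
For example, to keep the listener running after you close the terminal:

nohup bash -c 'while true; do eval "$(cat /path/to/pipe/mypipe)"; done' >/dev/null 2>&1 &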

PART 4 - Make it work even when reboot happens

The only caveat is that if the host reboots, the while loop will stop running.

To handle reboots, here is what I've done:

Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header (full sketch below)

Don't forget to chmod +x it

Add it to crontab by running

crontab -e

And then adding

@reboot /path/to/execpipe.sh
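
Put together, execpipe.sh is just:

#!/bin/bash
# Loop forever, executing whatever is written into the pipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done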

At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed. Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.

Another option is to modify the script to put the output in a file, such as:

while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done

Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.

PART 5 - Make it work with docker

If you are using both docker compose and dockerfile like I do, here is what I've done:

Let's assume you want to mount mypipe's parent folder as /hostpipe in your container.

Add this:

VOLUME /hostpipe

in your dockerfile in order to create a mount point

Then add this:

volumes:
   - /path/to/pipe:/hostpipe

in your docker compose file in order to mount /path/to/pipe as /hostpipe
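
For context, a minimal compose file with this mount could look like this (the service and image names are placeholders):

version: '3'

services:
   myapp:
      image: myimage
      volumes:
         - /path/to/pipe:/hostpipe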

Restart your docker containers.

PART 6 - Testing

Exec into your docker container:

docker exec -it <container> bash

Go into the mount folder and check you can see the pipe:

cd /hostpipe && ls -l

Now try running a command from within the container:

echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe

And it should work!

WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here: https://stackoverflow.com/a/43474708/10018801 and issue here: https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).

For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have a Linux host and Linux containers, and it works perfectly.

PART 7 - Example from Node.JS container

Here is how I send a command from my Node.JS container to the main host and retrieve the output (this assumes the host-side listener redirects its output to output.txt inside the mounted folder; see the note after the code):

const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.end() // flush pending writes and close the stream

console.log("waiting for output.txt...") //there are better ways to do that than setInterval
let timeout = 10000 //stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        //if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) //delete the output file
            console.log(data) //log the output of the command
        }
    }
}, 300);
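
For this to work, the host-side listener must write the command output into the mounted folder so the container can read it, e.g. the PART 4 loop adapted so that output.txt lands in /path/to/pipe (the folder mounted as /hostpipe):

while true; do eval "$(cat /path/to/pipe/mypipe)" &> /path/to/pipe/output.txt; done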

That REALLY depends on what you need that bash script to do!

For example, if the bash script just echoes some output, you could just do

docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

Another possibility is that you want the bash script to install some software, say a script that installs docker-compose. You could do something like:

docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.


My laziness led me to find the easiest solution that wasn't published as an answer here.

It is based on the great article by luc juggery.

All you need to do in order to gain a full shell on your Linux host from within your docker container is:

docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh

Explanation:

--privileged : grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)

--pid=host : allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)

nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)

nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1. The whole command then provides an interactive sh shell in the VM.
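
The same idea can run a single host-side script instead of an interactive shell; a sketch, assuming the script exists on the host at /path/to/script.sh:

docker run --privileged --pid=host alpine:3.8 \
nsenter -t 1 -m -u -n -i /bin/sh -c '/path/to/script.sh'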

This setup has major security implications and should be used with caution (if at all).


Write a simple Python server that listens on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. You can run a curl or write code to make the HTTP request, e.g.:

curl -d '{"foo":"bar"}' localhost:8080

#!/usr/bin/python3
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
        def do_POST(self):
                content_len = int(self.headers.get('Content-Length', 0))
                post_body = self.rfile.read(content_len)
                self.send_response(200)
                self.end_headers()
                data = json.loads(post_body)

                # Use the post data
                cmd = "your shell cmd"  # placeholder: derive (and validate!) the command from data
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
                (output, err) = p.communicate()  # read output before waiting, to avoid a pipe deadlock
                p_status = p.returncode
                print("Command output :", output)
                print("Command exit status/return code :", p_status)

                self.wfile.write((cmd + "\n").encode())
                return
try:
        # Create a web server and define the handler to manage the
        # incoming request
        server = HTTPServer(('', PORT_NUMBER), myHandler)
        print('Started httpserver on port', PORT_NUMBER)

        # Wait forever for incoming http requests
        server.serve_forever()

except KeyboardInterrupt:
        print('^C received, shutting down the web server')
        server.socket.close()

If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listen socket.

Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.

You can do this by adding the following volume args to your start command

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

or by sharing /var/run/docker.sock within your docker compose file like this:

version: '3'

services:
   ci:
      command: ...
      image: ...
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock

When you run the docker start command within your docker container, the docker server running on your host will see the request and provision the sibling container.
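
For example, from inside the container (assuming the docker CLI is installed in the image):

docker run --rm alpine echo "hello from a sibling container"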

credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/