 

How to execute command from one docker container to another

I'm creating an application that will allow users to upload video files that will then be put through some processing.

I have two containers.

  1. Nginx container that serves the website where users can upload their video files.
  2. Video processing container that has FFmpeg and some other processing stuff installed.

What I want to achieve: I need container 1 to be able to run a bash script on container 2.

One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill. I just want to execute a bash script.

Any suggestions?

asked Nov 25 '19 by Nicolas Buch

2 Answers

You have a few options, but the first 2 that come to mind are:

  1. In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this).
  2. You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
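A minimal sketch of option 2, assuming container 2 runs sshd, is reachable as `processor` on a shared Docker network, and key-based auth is already set up (the names, key path, and script path below are all assumptions for illustration):

```shell
# Container 2's image needs an SSH server; e.g. in its Dockerfile
# (Debian-based image assumed):
#   RUN apt-get update && apt-get install -y openssh-server
#   CMD ["/usr/sbin/sshd", "-D"]

# From container 1, run the processing script on container 2 over SSH:
ssh -i /run/secrets/worker_key worker@processor \
    '/usr/local/bin/process-video.sh /uploads/clip.mp4'
```

The exit status of `ssh` is the exit status of the remote script, so container 1 can tell whether processing succeeded.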

Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.

Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are attached to the same Docker network and can communicate over it.
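Attaching both containers to a shared user-defined network can be sketched like this (the network and image names are assumptions):

```shell
# Create a user-defined network and attach both containers to it:
docker network create video-net
docker run -d --name web       --network video-net my-nginx-image
docker run -d --name processor --network video-net my-ffmpeg-image

# On a user-defined network, Docker's embedded DNS resolves container
# names, so from inside "web" the other container is reachable as
# "processor" (e.g. ssh worker@processor ...).
```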

Warning:

To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.

answered Sep 30 '22 by Z4-tier


Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:

  1. You'll need to install Docker in the container (and do docker-in-docker stuff)
  2. You'll need to share the Unix socket, which is risky if you don't know exactly what you're doing.

So, this leaves us with two solutions:

  1. Install ssh on your container and execute the command through ssh
  2. Share a volume and have a process that watches for something to trigger your batch
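The shared-volume option (2) can be as simple as a small polling loop running in container 2. The paths and the processing command below are assumptions for illustration; on Linux, `inotifywait` from inotify-tools would be a more efficient trigger than polling.

```shell
#!/bin/sh
# Sketch of the shared-volume approach: container 1 writes uploads into
# a directory on a volume mounted by both containers; this loop runs in
# container 2 and processes whatever appears there.
WATCH_DIR="${WATCH_DIR:-/shared/incoming}"
DONE_DIR="${DONE_DIR:-/shared/done}"

process_one() {
    # Placeholder for the real work, e.g.:
    #   ffmpeg -i "$1" -c:v libx264 "$DONE_DIR/$(basename "$1")"
    mv "$1" "$DONE_DIR/"
}

watch_loop() {
    mkdir -p "$WATCH_DIR" "$DONE_DIR"
    while true; do
        for f in "$WATCH_DIR"/*; do
            [ -f "$f" ] && process_one "$f"
        done
        sleep 2
    done
}
```

Calling `watch_loop` would then be container 2's entrypoint, so it needs no web server, no SSH, and no access to the Docker socket.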

answered Sep 30 '22 by Marc ABOUCHACRA