
Running app inside Docker as non-root user

After yesterday's news of Shocker, it seems like apps inside a Docker container should not be run as root. I tried to update my Dockerfile to create an app user, but changing permissions on app files (while still root) doesn't seem to work. I'm guessing some LXC permission is not being granted to the root user?

Here's my Dockerfile:

# Node.js app Docker file

FROM dockerfile/nodejs
MAINTAINER Thom Nichols "[email protected]"

RUN useradd -ms /bin/bash node

ADD . /data
# This next line doesn't seem to have any effect:
RUN chown -R node /data 

ENV HOME /home/node
USER node

RUN cd /data && npm install

EXPOSE 8888

WORKDIR /data

CMD ["npm", "start"]

Pretty straightforward, but when I ls -l everything is still owned by root:

[ node@ed7ae33e76e1:/data {docker-nonroot-user} ]$ ls -l /data
total 64K
-rw-r--r--  1 root root  383 Jun 18 20:32 Dockerfile
-rw-r--r--  1 root root  862 Jun 18 16:23 Gruntfile.js
-rw-r--r--  1 root root 1.2K Jun 18 15:48 README.md
drwxr-xr-x  4 root root 4.0K May 30 14:24 assets/
-rw-r--r--  1 root root  416 Jun  3 14:22 bower.json
-rw-r--r--  1 root root  930 May 30 01:50 config.js
drwxr-xr-x  4 root root 4.0K Jun 18 16:08 lib/
drwxr-xr-x 42 root root 4.0K Jun 18 16:04 node_modules/
-rw-r--r--  1 root root 2.0K Jun 18 16:04 package.json
-rw-r--r--  1 root root  118 May 30 18:35 server.js
drwxr-xr-x  3 root root 4.0K May 30 02:17 static/
drwxr-xr-x  3 root root 4.0K Jun 18 20:13 test/
drwxr-xr-x  3 root root 4.0K Jun  3 17:38 views/

My updated dockerfile works great thanks to @creak's clarification of how volumes work. Once the initial files are chowned, npm install is run as the non-root user. And thanks to a postinstall hook, npm runs bower install && grunt assets which takes care of the remaining install steps and avoids any need to npm install -g any node cli tools like bower, grunt or coffeescript.
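For reference, a minimal sketch of what that updated Dockerfile could look like. The exact file isn't shown here, so details such as the runtime chown and the su invocation are assumptions based on the description above:

```dockerfile
# Sketch of the updated Node.js app Dockerfile (assumed, not the original file).
# /data is declared as a volume by the base image, so ownership must be
# fixed at container start rather than at build time.
FROM dockerfile/nodejs

RUN useradd -ms /bin/bash node
ENV HOME /home/node

ADD . /data
WORKDIR /data
EXPOSE 8888

# chown at runtime (as root), then drop to the non-root user; the
# postinstall hook in package.json handles bower install && grunt assets
CMD chown -R node /data && su -m node -c 'npm install && npm start'
```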

thom_nic asked Jun 19 '14 14:06

People also ask

Should Docker run as root or user?

One of the best practices when running a Docker container is to run processes as a non-root user. This is because if a user manages to break out of an application running as root in the container, they may gain root access on the host.

How do I run a Docker container with a specific user?

For docker run: simply add the option --user <user> to change to another user when you start the container. For docker attach or docker exec: since these commands attach to or execute inside an existing process, they use that process's current user directly.
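As a quick illustration, the host uid/gid can be passed to --user so that files created in a mounted volume stay owned by the invoking user. The image name and command below are placeholders, not anything from the question:

```shell
# Build a uid:gid string matching the invoking host user
user_arg="$(id -u):$(id -g)"
echo "$user_arg"   # e.g. 1000:1000

# Then (requires Docker; image name and command are placeholders):
#   docker run --rm --user "$user_arg" -v "$PWD":/data node:lts npm start
```

Because the uid comes from the host, anything the container writes into the mounted directory is owned by you rather than root.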

Why you shouldn't run containers as root?

Running containers as root is a bad idea for security. This has been shown time and time again. Hackers find new ways of escaping out of the container, and that grants unfettered access to the host or Kubernetes node.


3 Answers

Check this post: http://www.yegor256.com/2014/08/29/docker-non-root.html At rultor.com we run all builds in their own Docker containers, and every time before running the scripts inside a container, we switch to a non-root user. This is how:

adduser --disabled-password --gecos '' r
adduser r sudo
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su -m r -c /home/r/script.sh

r is the user we're using.
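The same user setup could instead be baked into a Dockerfile at build time. A sketch, assuming an Ubuntu-based image (the base image and script path are assumptions):

```dockerfile
FROM ubuntu:14.04

# Create the unprivileged user 'r' and give it passwordless sudo,
# mirroring the four commands above
RUN adduser --disabled-password --gecos '' r \
 && adduser r sudo \
 && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# Run the build script as 'r' instead of root
CMD ["su", "-m", "r", "-c", "/home/r/script.sh"]
```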

yegor256 answered Oct 18 '22 21:10


Update 2015-09-28

I have noticed this post getting a bit of attention. A word of advice for anyone interested in doing something like this: I would use Python or another language as a wrapper for your script executions. With native bash scripts I had problems passing a variety of arguments through to my containers. Specifically, there were issues with the shell's interpretation/escaping of " and ' characters.


I needed to change the user for a slightly different reason.

I created a Docker image housing a full-featured install of ImageMagick and FFmpeg, wanting to run transformations on images/videos in my host OS. My problem was that these are command-line tools, so it is slightly trickier to execute them via Docker and then get the results back into the host OS. I managed to allow for this by mounting a Docker volume. This seemed to work okay, except that the image/video output came out owned by root (i.e. the user the Docker container was running as), rather than the user who executed the command.

I looked at the approach that @François Zaninotto mentioned in his answer (you can see the full make script here). It was really cool, but I preferred the option of creating a bash shell script that I would then register on my path. I took some of the concepts from the Makefile approach (specifically the user/group creation) and then I created the shell script.

Here is an example of my dockermagick shell script:

#!/bin/bash

### VARIABLES

DOCKER_IMAGE='acleancoder/imagemagick-full:latest'
CONTAINER_USERNAME='dummy'
CONTAINER_GROUPNAME='dummy'
HOMEDIR='/home/'$CONTAINER_USERNAME
GROUP_ID=$(id -g)
USER_ID=$(id -u)

### FUNCTIONS

create_user_cmd()
{
  echo \
    groupadd -f -g $GROUP_ID $CONTAINER_GROUPNAME '&&' \
    useradd -u $USER_ID -g $CONTAINER_GROUPNAME $CONTAINER_USERNAME '&&' \
    mkdir --parent $HOMEDIR '&&' \
    chown -R $CONTAINER_USERNAME:$CONTAINER_GROUPNAME $HOMEDIR
}

execute_as_cmd()
{
  echo \
    sudo -u $CONTAINER_USERNAME HOME=$HOMEDIR
}

full_container_cmd()
{
  echo "'$(create_user_cmd) && $(execute_as_cmd) $@'"
}

### MAIN

eval docker run \
    --rm=true \
    -a stdout \
    -v $(pwd):$HOMEDIR \
    -w $HOMEDIR \
    $DOCKER_IMAGE \
    /bin/bash -ci $(full_container_cmd $@)

This script is bound to the 'acleancoder/imagemagick-full' image, but that can be changed by editing the variable at the top of the script.

What it basically does is:

  • Creates a user and group inside the container matching the user who executes the script on the host OS.
  • Mounts the current working directory of the host OS (using a Docker volume) onto the home directory of that user inside the container.
  • Sets that mounted home directory as the working directory for the container.
  • Passes along any arguments given to the script, which are then executed by the container's '/bin/bash'.

Now I am able to run the ImageMagick/Ffmpeg commands against files on my host OS. For example, say I want to convert an image MyImage.jpeg into a PNG file, I could now do the following:

$ cd ~/MyImages
$ ls
  MyImage.jpeg
$ dockermagick convert MyImage.jpeg Foo.png
$ ls
  Foo.png MyImage.jpeg

I have also attached to 'stdout' so I can run the ImageMagick identify command to get info on an image on my host, e.g.:

$ dockermagick identify MyImage.jpeg
  MyImage.jpeg JPEG 640x426 640x426+0+0 8-bit DirectClass 78.6KB 0.000u 0:00.000

There are obvious dangers in mounting the current directory and allowing any arbitrary command to be passed along for execution, but there are also many ways to make the script safer and more secure. I am executing this in my own non-production personal environment, so these are not of highest concern for me; however, I would highly recommend you take the dangers into consideration should you choose to expand upon this script. It's also worth mentioning that this script doesn't take an OS X host into consideration. The Makefile I borrowed ideas/concepts from does, so you could extend this script to do the same.

Another limitation to note is that I can only refer to files currently in the path for which I am executing the script. This is because of the way I am mounting the volumes, so the following would not work:

$ cd ~/MyImages
$ ls
  MyImage.jpeg
$ dockermagick convert ~/DifferentDirectory/AnotherImage.jpeg Foo.png
$ ls
  MyImage.jpeg

It's best just to go to the directory containing the image and execute against it directly. Of course I am sure there are ways to get around this limitation too, but for me and my current needs, this will do.

ctrlplusb answered Oct 18 '22 19:10


This one is a bit tricky; it is actually down to the image you start from.

If you look at the source of dockerfile/nodejs, you'll notice that /data/ is declared as a volume. So everything you do to that directory in the Dockerfile is discarded and overridden at runtime by the volume that gets mounted then.

You can chown at runtime by changing your CMD to something like CMD chown -R node /data && npm start.
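In context, the tail of the Dockerfile might then look something like this (a sketch, not the asker's actual file; note that the chown has to run as root, so the switch to the node user moves into the su invocation instead of a USER directive):

```dockerfile
# /data is a VOLUME in the base image, so a build-time chown is discarded.
# Fix ownership when the container starts, then drop privileges for npm.
WORKDIR /data
EXPOSE 8888
CMD chown -R node /data && su -m node -c 'npm start'
```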

creack answered Oct 18 '22 20:10