Docker container with non-root user deployed in Google Container Engine can not write to mounted GCE Persistent disk

I'm playing with kubernetes and google container engine (GKE).

I deployed a container from this image jupyter/all-spark-notebook

This is my replication controller :

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "datalab-notebook"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "datalab-notebook"
    },
    "template": {
      "metadata": {
        "name": "datalab-notebook",
        "labels": {
          "environment": "TEST",
          "app": "datalab-notebook"
        }
      },
      "spec": {
        "containers": [{
          "name": "datalab-notebook-container",
          "image": "jupyter/all-spark-notebook",
          "env": [],
          "ports": [{
            "containerPort": 8888,
            "name": "datalab-port"
          }],
          "volumeMounts": [{
            "name": "datalab-notebook-persistent-storage",
            "mountPath": "/home/jovyan/work"
          }]
        }],
        "volumes": [{
          "name": "datalab-notebook-persistent-storage",
          "gcePersistentDisk": {
            "pdName": "datalab-notebook-disk",
            "fsType": "ext4"
          }
        }]
      }
    }

  }
}

As you can see, I mounted a Google Compute Engine persistent disk. My issue is that the container runs as a non-root user, while the mounted disk is owned by root, so the container cannot write to it.
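For context, this is roughly how I set things up and how the failure shows up (the disk size and pod name are placeholders; the touch is expected to fail because the mount is root-owned):

```shell
# Create the persistent disk and the replication controller
gcloud compute disks create datalab-notebook-disk --size=10GB
kubectl create -f datalab-notebook-rc.json

# Try to write to the mounted disk as the container's non-root user (jovyan)
kubectl exec <pod-name> -- touch /home/jovyan/work/test
# -> "Permission denied", since /home/jovyan/work is owned by root
```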

  • Is there a way to mount GCE persistent disks so that they are read/write for containers running as non-root users?
  • Another, more general question: is it safe to run containers as root in Google Container Engine?

Thank you in advance for your input.

asked Feb 04 '16 by med

2 Answers

You can use the FSGroup field of the pod's security context to make GCE PDs writable by non-root users.

In this example, the gce volume will be owned by group 1234 and the container process will have 1234 in its list of supplemental groups:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
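Applied to the replication controller JSON from the question, the fix is a securityContext on the pod template's spec. The jupyter/all-spark-notebook image runs as user jovyan (uid 1000), whose primary group in the stock Jupyter docker-stacks images is users (gid 100) — that gid is an assumption about the image, so verify it for your tag:

```json
"template": {
  "metadata": { "...": "unchanged" },
  "spec": {
    "securityContext": {
      "fsGroup": 100
    },
    "containers": [ { "...": "unchanged" } ],
    "volumes": [ { "...": "unchanged" } ]
  }
}
```

With fsGroup set, Kubernetes chowns the volume to that group and adds it to the container process's supplemental groups, so jovyan can write to /home/jovyan/work.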
answered Sep 28 '22 by Paul Morie

I ran into the same problem. The workaround I used was to run df -h on the host machine the container was running on. From there I was able to find the bind mount of the persistent storage. It should look something like /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/<pd-name>, and it will be one of the entries whose filesystem device starts with /dev and that is not mounted at /.

Once you've found it, you can run sudo chmod -R 0777 /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/<pd-name> from the host box. This is a blunt workaround: your container can then use the directory, though the files will still be owned by root.
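The steps above can be sketched as host-side commands (run these on the GKE node where the pod is scheduled; <pd-name> is a placeholder for your disk's mount directory):

```shell
# Find the PD's bind mount among the /dev-backed filesystems
df -h | grep gce-pd

# Open up permissions on the mount (insecure, but unblocks the non-root user;
# files written by the container will still show up as root-owned)
sudo chmod -R 0777 /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/<pd-name>
```

Note the node's filesystem is reset on upgrades or repairs, so this has to be reapplied if the pod lands on a fresh node; the fsGroup approach from the other answer is the durable fix.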

answered Sep 28 '22 by funkymonkeymonk