
Can I use a ConfigMap created from an init container in the pod?

Tags:

kubernetes

I am trying to "pass" a value from the init container to a container. Since values in a ConfigMap are shared across the namespace, I figured I could use one for this purpose. Here is my job.yaml (with faked-out info):

apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0

This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognized command in the busybox image), and I am assuming that the pod can only read values from a ConfigMap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?

Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?

(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)

Asked Apr 25 '18 by Mike


3 Answers

You can create an emptyDir volume and mount it into both containers. Unlike a persistent volume, an emptyDir volume has no portability issues.

apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
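
Applied to your artifactorySnapshotUrl case, a minimal sketch could look like the following. Note that the file name /tmp/artifact/artifactoryUrl and the /entrypoint.sh wrapper are assumptions about your installer-test image, not something taken from your original manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        # write the value into the shared emptyDir volume
        command: ['/bin/sh', '-c', 'echo "http://artifactory.com/some/url" > /tmp/artifact/artifactoryUrl']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        # read the file written by the init container and export it before starting the real process
        command: ['/bin/sh', '-c', 'export in_artifactoryUrl="$(cat /tmp/artifact/artifactoryUrl)" && exec /entrypoint.sh']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0

The trade-off is the one you mentioned: the main container has to know about the file path, but the coupling stays in the Job manifest rather than in the image itself.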
Answered Oct 22 '22 by ccshih


If, for various reasons, you don't want to use a shared volume and you want the init container to create a ConfigMap or a Secret, here is a solution.

First, you need to use a Docker image that contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (an image containing kubectl, maintained by Google).

Then this init container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects the token of the service account named "default" into the container, but I prefer to make this explicit, so add this line:

...
      initContainers:
        - # Already true by default, but I prefer to make it explicit
          automountServiceAccountToken: true
          name: artifactory-snapshot

And bind the "edit" role to the "default" service account:

kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:default --namespace=default
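
If you prefer to keep the permissions declarative (and narrower than the built-in "edit" role), a rough sketch of an equivalent Role/RoleBinding could look like this; the names configmap-writer and default-configmap-writer-rb are just placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-writer
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  # get/create/patch is what the "kubectl create ... | kubectl apply" pipeline needs
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-configmap-writer-rb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-writer
subjects:
- kind: ServiceAccount
  name: default
  namespace: default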

Then the complete example:

apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      initContainers:
        - # Already true by default, but I prefer to make it explicit.
          automountServiceAccountToken: true
          name: artifactory-snapshot
          # You need to use docker image which contains kubectl
          image: gcr.io/cloud-builders/kubectl:latest
          command:
            - sh
            - -c
            # the "--dry-run -o yaml | kubectl apply -f -" part makes the command idempotent
            - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
        - name: installer-test
          image: installer-test:latest
          env:
            - name: clusterId
              value: "some_cluster_id"
            - name: in_artifactoryUrl
              valueFrom:
                configMapKeyRef:
                  name: test-config
                  key: artifactorySnapshotUrl
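
Assuming the manifest above is saved as job.yaml, you can submit it and verify that the ConfigMap was created by the init container before the main container started:

kubectl apply -f job.yaml
kubectl get configmap test-config -o yaml
kubectl logs job/installer-test

Note that newer kubectl versions expect --dry-run=client rather than the bare --dry-run flag, so you may need to adjust the init container command accordingly.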

Answered Oct 22 '22 by Antoine


First of all, kubectl is a binary. It was downloaded onto your machine before you could use the command, but inside your pod the kubectl binary doesn't exist, so you can't use the kubectl command from a busybox image.

Furthermore, kubectl uses credentials that are saved on your machine (probably under the ~/.kube path). So if you try to use kubectl from inside an image, it will fail because of missing credentials.

For your scenario, I suggest the same as @ccshih: use volume sharing. Here is the official doc about sharing a volume between an init container and a container.

The YAML used there is:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}

Here the init container saves a file in the volume, and that file is later available inside the main container. Try the tutorial yourself for a better understanding.
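
For example, assuming you save the manifest above as init-demo.yaml, you can check that the file written by the init container is visible to the nginx container:

kubectl apply -f init-demo.yaml
kubectl wait --for=condition=Ready pod/init-demo
kubectl exec init-demo -- cat /usr/share/nginx/html/index.html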

Answered Oct 22 '22 by Abdullah Al Maruf - Tuhin