
Kubernetes Pod fails with CrashLoopBackOff

I'm following this guide to set up a pod with minikube that pulls an image from a private repository hosted at hub.docker.com.

When I set up a pod to pull the image, it ends up in CrashLoopBackOff.

pod config:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: ha/prod:latest
  imagePullSecrets:
    - name: regsecret
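
For context, the regsecret referenced above was created by following the guide, roughly along these lines (the registry URL and credentials below are placeholders):

kubectl create secret docker-registry regsecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>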

output of "get pod"

kubectl get pod private-reg
NAME          READY     STATUS             RESTARTS   AGE
private-reg   0/1       CrashLoopBackOff   5          4m

As far as I can see there is no issue with the images; if I pull them manually and run them, they work.

(You can see Successfully pulled image "ha/prod:latest" in the events below.)

The issue also happens if I push a generic image such as centos to the repository and try to pull and run it in a pod.

Also, the secret seems to work fine, and I can see the pulls being counted in the private repository.

Here is the output of the command:

kubectl describe pods private-reg:

[~]$ kubectl describe pods private-reg
Name:       private-reg
Namespace:  default
Node:       minikube/192.168.99.100
Start Time: Thu, 22 Jun 2017 17:13:24 +0300
Labels:     <none>
Annotations:    <none>
Status:     Running
IP:     172.17.0.5
Controllers:    <none>
Containers:
  private-reg-container:
    Container ID:   docker://1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
    Image:      ha/prod:latest
    Image ID:       docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0
    Port:
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Thu, 22 Jun 2017 17:20:04 +0300
    Ready:      False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bhvgz (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-bhvgz:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bhvgz
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason      Message
  --------- --------    -----   ----            -------------               --------    ------      -------
  9m    9m  1   default-scheduler                                           Normal  Scheduled   Successfully assigned private-reg to minikube
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
  8m    8m  2   kubelet, minikube                                           Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
  8m    8m  2   kubelet, minikube                                           Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
  8m    7m  3   kubelet, minikube                                           Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
  7m    7m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
  7m    7m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
  7m    5m  7   kubelet, minikube                                           Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
  5m    5m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
  5m    5m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
  5m    3m  12  kubelet, minikube                                           Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
  9m    2m  7   kubelet, minikube   spec.containers{private-reg-container}  Normal  Pulling     pulling image "ha/prod:latest"
  2m    2m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
  8m    2m  7   kubelet, minikube   spec.containers{private-reg-container}  Normal  Pulled      Successfully pulled image "ha/prod:latest"
  2m    2m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
  8m    <invalid>   40  kubelet, minikube   spec.containers{private-reg-container}  Warning BackOff     Back-off restarting failed container
  2m    <invalid>   14  kubelet, minikube                                   Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

Here is the output of the command:

kubectl --v=8 logs private-reg:

I0622 17:35:01.043739   15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/apps/v1beta1/serverresources.json
I0622 17:35:01.043951   15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/v1/serverresources.json
I0622 17:35:01.045061   15981 cached_discovery.go:118] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/servergroups.json
I0622 17:35:01.045175   15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg
I0622 17:35:01.045182   15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.045187   15981 round_trippers.go:405]     Accept: application/json, */*
I0622 17:35:01.045191   15981 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0622 17:35:01.072863   15981 round_trippers.go:420] Response Status: 200 OK in 27 milliseconds
I0622 17:35:01.072900   15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.072921   15981 round_trippers.go:426]     Content-Type: application/json
I0622 17:35:01.072930   15981 round_trippers.go:426]     Content-Length: 2216
I0622 17:35:01.072936   15981 round_trippers.go:426]     Date: Thu, 22 Jun 2017 14:35:31 GMT
I0622 17:35:01.072994   15981 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"private-reg","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/private-reg","uid":"f4340638-5754-11e7-978a-08002773375c","resourceVersion":"3070","creationTimestamp":"2017-06-22T14:13:24Z"},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"private-reg-container","image":"ha/prod:latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"imagePullSecrets":[{"name":"regsecret"}],"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z","reason":"ContainersNotReady","message":"containers with unready status: [private-reg-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-22T14:13:24Z","containerStatuses":[{"name":"private-reg-container","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-22T14:30:36Z","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}},"ready":false,"restartCount":8,"image":"ha/prod:latest","imageID":"docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}],"qosClass":"BestEffort"}}
I0622 17:35:01.074108   15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg/log
I0622 17:35:01.074126   15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.074132   15981 round_trippers.go:405]     Accept: application/json, */*
I0622 17:35:01.074137   15981 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0622 17:35:01.079257   15981 round_trippers.go:420] Response Status: 200 OK in 5 milliseconds
I0622 17:35:01.079289   15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.079299   15981 round_trippers.go:426]     Content-Type: text/plain
I0622 17:35:01.079307   15981 round_trippers.go:426]     Content-Length: 0
I0622 17:35:01.079315   15981 round_trippers.go:426]     Date: Thu, 22 Jun 2017 14:35:31 GMT

How can I debug this issue?

Update

The output of:

kubectl --v=8 logs ps-agent-2028336249-3pk43 --namespace=default -p

I0625 11:30:01.569903   13420 round_trippers.go:395] GET
I0625 11:30:01.569920   13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.569927   13420 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0625 11:30:01.569934   13420 round_trippers.go:405]     Accept: application/json, */*
I0625 11:30:01.599026   13420 round_trippers.go:420] Response Status: 200 OK in 29 milliseconds
I0625 11:30:01.599048   13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.599056   13420 round_trippers.go:426]     Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.599062   13420 round_trippers.go:426]     Content-Type: application/json
I0625 11:30:01.599069   13420 round_trippers.go:426]     Content-Length: 2794
I0625 11:30:01.599264   13420 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"ps-agent-2028336249-3pk43","generateName":"ps-agent-2028336249-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43","uid":"87c69072-597e-11e7-83cd-08002773375c","resourceVersion":"14354","creationTimestamp":"2017-06-25T08:16:03Z","labels":{"pod-template-hash":"2028336249","run":"ps-agent"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"ps-agent-2028336249\",\"uid\":\"87c577b5-597e-11e7-83cd-08002773375c\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"13446\"}}\n"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"ps-agent-2028336249","uid":"87c577b5-597e-11e7-83cd-08002773375c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"ps-agent","image":"ha/prod:ps-agent-latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z","reason":"ContainersNotReady","message":"containers with unready status: [ps-agent]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-25T08:16:03Z","containerStatuses":[{"name":"ps-agent","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=ps-agent pod=ps-agent-2028336249-3pk43_default(87c69072-597e-11e7-83cd-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-25T08:27:17Z","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}},"ready":false,"restartCount":7,"image":"ha/prod:ps-agent-latest","imageID":"docker://sha256:eb5307c4366fc129d022703625a5f30ff175b5e1a24dbe39fd4c32e726a0ee7b","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}],"qosClass":"BestEffort"}}
I0625 11:30:01.600727   13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43/log?previous=true
I0625 11:30:01.600747   13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.600757   13420 round_trippers.go:405]     Accept: application/json, */*
I0625 11:30:01.600766   13420 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0625 11:30:01.632473   13420 round_trippers.go:420] Response Status: 200 OK in 31 milliseconds
I0625 11:30:01.632545   13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.632569   13420 round_trippers.go:426]     Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.632592   13420 round_trippers.go:426]     Content-Type: text/plain
I0625 11:30:01.632615   13420 round_trippers.go:426]     Content-Length: 0
Asked Jun 22 '17 by haim ari


People also ask

How do I fix the "Back-off restarting failed container" error?

If you receive the "Back-off restarting failed container" message, your container probably exited soon after Kubernetes started it. If the liveness probe isn't returning a successful status, verify that the probe is configured correctly for the application.
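
For reference, a liveness probe is declared per container in the pod spec. Below is a minimal sketch, not taken from the question; the endpoint path, port, and timings are placeholder assumptions that would need to match the actual application:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-example            # hypothetical pod name, for illustration only
spec:
  containers:
    - name: app
      image: ha/prod:latest
      livenessProbe:
        httpGet:
          path: /healthz            # assumed health endpoint
          port: 8080                # assumed container port
        initialDelaySeconds: 5      # give the process time to start before probing
        periodSeconds: 10           # probe every 10 seconds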

How do I check logs on a CrashLoopBackOff pod?

The first command, kubectl -n <namespace-name> describe pod <pod-name>, describes the pod and can reveal errors in creating or running it, such as a lack of resources. The second command, kubectl -n <namespace-name> logs -p <pod-name>, shows the logs of the application that ran in the pod (-p returns the previous container's logs).
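
Applied to the pod from this question (namespace default), those two commands look like this:

kubectl -n default describe pod private-reg
kubectl -n default logs -p private-reg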

How long is CrashLoopBackOff?

Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container. I think that means that anything that executes for longer than 10 minutes before exiting will not trigger a CrashLoopBackOff status.


1 Answer

The issue was caused by the Docker container exiting as soon as its "start" process finished. I added a command that runs forever and it worked. This issue is mentioned here.
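
For illustration, one way to avoid the immediate exit is to give the container an explicit long-running command. Below is a minimal sketch based on the pod spec from the question; the sleep command is a placeholder stand-in for whatever foreground process the image should really run, not the exact fix that was applied:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: ha/prod:latest
      # Placeholder: keep the container in the foreground so it does not exit right away.
      # Assumes the image's sleep accepts "infinity"; "tail -f /dev/null" is a common alternative.
      command: ["sleep", "infinity"]
  imagePullSecrets:
    - name: regsecret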

Answered Sep 22 '22 by haim ari