 

What is CrashLoopBackOff status for openshift pods?

I have seen this status more than once from pods running in OpenShift Origin. In this case it was the quickstart for the CDI Camel example. I was able to build and run it successfully locally (non-OpenShift), but when I try to deploy it to my local OpenShift (using mvn -Pf8-local-deploy), I get this output for that particular example (snipped for relevance):

    [vagrant@vagrant camel]$ oc get pods
    NAME              READY     STATUS             RESTARTS   AGE
    cdi-camel-z4czs   0/1       CrashLoopBackOff   4          2m

The tail of the logs is as follows:

  Error occurred during initialization of VM
  Error opening zip file or JAR manifest missing : agents/jolokia.jar
  agent library failed to init: instrument
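
For reference, this looks like the generic JVM startup failure you get when a -javaagent jar cannot be opened. A minimal local reproduction (a sketch, assuming any JDK is on the PATH) is:

    # Point the JVM at a -javaagent jar that does not exist at that path;
    # the startup error matches the one in the pod log above.
    java -javaagent:agents/jolokia.jar -version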

Can someone help me solve this?

asked Feb 29 '16 by ZeroGraviti

People also ask

Why is pod status CrashLoopBackOff?

CrashLoopBackOff is a status message indicating that one of your pods is in a constant state of flux: one or more of its containers are failing and restarting repeatedly. Kubernetes keeps restarting them because each pod inherits a default restartPolicy of Always upon creation.
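
As a quick sanity check (a sketch, reusing the pod name from the question above), you can confirm the policy with a jsonpath query:

    # Print the pod's restart policy; for most workloads this is "Always".
    oc get pod cdi-camel-z4czs -o jsonpath='{.spec.restartPolicy}'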

What is meant by CrashLoopBackOff?

CrashLoopBackOff means the pod has failed or exited unexpectedly, or has a non-zero exit code. There are a couple of ways to check this. I would recommend going through the links below and getting the logs for the pod using kubectl logs: Debug Pods and ReplicationControllers, and Determine the Reason for Pod Failure.
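
One concrete way to see that exit code (a sketch, again assuming the pod name from the question) is to query the last terminated state of the first container:

    # Show the exit code of the container's previous (crashed) run.
    oc get pod cdi-camel-z4czs \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'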

How do I remove CrashLoopBackOff?

In case of node failure, the pod will be recreated on a new node after some time, and the old pod will be removed after full recovery of the broken node. It is worth noting this will not happen if your pod was created by a DaemonSet or StatefulSet.
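
In practice, the usual first step is simply to delete the crashing pod and let its controller recreate it (a sketch; fix the underlying failure first, or the loop will return):

    # Delete the crashing pod; its controller (ReplicationController,
    # Deployment, etc.) will create a fresh replacement.
    oc delete pod cdi-camel-z4czs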


2 Answers

If the state of the pod goes into CrashLoopBackOff, it usually indicates that the application within the container is failing to start up properly and that the container is exiting straight away as a result.

If you use oc logs on the pod name, you may not see anything useful, though, as it would capture what the latest attempt to start it up is doing and may miss earlier messages.

What you should do instead is provide the --previous or -p option to oc logs along with the pod name. That will show you the complete logs from the previous attempt to start up the container.
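
For example (using the pod name from the question):

    # Show logs from the previous, crashed container instance.
    oc logs --previous cdi-camel-z4czs

    # Equivalent short form:
    oc logs -p cdi-camel-z4czs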

If this is an arbitrary Docker image you are using, a common problem that can occur, and that would cause the container not to start, is an application image that requires running as the root user. Because running an application inside a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run the container as an arbitrarily assigned user ID. The application image may not be designed with this possibility in mind and so fails.
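
If the logs do show a root/permissions failure, one common (and security-relaxing) workaround is to grant the anyuid SCC to the project's service account. A sketch, assuming the pods run under the default service account:

    # Allow pods using the "default" service account in the current project
    # to run with the user ID requested by the image (including root).
    oc adm policy add-scc-to-user anyuid -z default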

So try to get those log messages and see what the problem is.

answered Sep 22 '22 by Graham Dumpleton


Temporary workaround -> https://github.com/fabric8io/ipaas-quickstarts/issues/1157

Basically, the src/main/hawt-app directory needs to be deleted.
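
In shell terms (a sketch; this assumes you are in the quickstart's module directory and uses the same Maven profile as in the question):

    # Remove the stale hawt-app assembly, then rebuild and redeploy.
    rm -rf src/main/hawt-app
    mvn -Pf8-local-deploy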

answered Sep 21 '22 by ZeroGraviti