When a Kubernetes pod goes into the CrashLoopBackOff state, you fix the underlying issue. How do you then force the pod to be rescheduled?
What does CrashLoopBackOff mean? CrashLoopBackOff is a status message indicating that one or more containers in a pod are failing and being restarted repeatedly. This loop happens because each pod gets a default restartPolicy of Always upon creation, so the kubelet keeps restarting the failed container.
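For reference, you can confirm the status and the restart policy with kubectl (the pod name is a placeholder):
kubectl get pod <pod_name>                                      # STATUS column shows CrashLoopBackOff
kubectl get pod <pod_name> -o jsonpath='{.spec.restartPolicy}'  # prints Always by default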
That explains the CrashLoop part, but what about the BackOff time? It is an exponential delay between restarts (10s, 20s, 40s, ...), capped at five minutes. When a pod shows CrashLoopBackOff, it means Kubernetes is waiting out that delay before restarting the container again.
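To see why the container keeps crashing, a couple of standard kubectl commands are usually enough (the pod name is a placeholder):
kubectl describe pod <pod_name>     # Events section typically shows the back-off message and restart count
kubectl logs <pod_name> --previous  # logs from the last crashed container instance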
To apply a new configuration, a new pod has to be created (the old one will be removed).
If your pod was created automatically by a Deployment or DaemonSet resource, new pods are rolled out automatically each time you update the resource's YAML. This does not happen if your resource has spec.updateStrategy.type=OnDelete.
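For example, assuming the manifest lives in a file named deployment.yaml (the file and resource names are placeholders):
kubectl apply -f deployment.yaml                 # updates the resource and triggers a rollout
kubectl rollout status deployment/<deployment_name>  # waits until the new pods are ready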
If the problem was an error inside the Docker image and you have fixed it, you should update the pods manually; you can use the rolling-update feature for this purpose. If the new image has the same tag, you can simply remove the broken pod (see below).
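A minimal sketch of both cases, assuming the pod is managed by a Deployment (all names and tags are placeholders; kubectl rollout restart requires kubectl 1.15 or newer):
kubectl set image deployment/<deployment_name> <container_name>=<image>:<new_tag>  # new tag: triggers a rolling update
kubectl rollout restart deployment/<deployment_name>                               # same tag: recreates the pods; the image is re-pulled if imagePullPolicy is Always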
In case of node failure, the pod will be recreated on a new node after some time, and the old pod will be removed once the broken node fully recovers. It is worth noting that this does not happen if your pod was created by a DaemonSet or StatefulSet.
In any case, you can manually remove the crashed pod:
kubectl delete pod <pod_name>
Or all pods in the CrashLoopBackOff state:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
If the node is completely dead, you can add the --grace-period=0 --force options to remove only the pod's record from Kubernetes, without waiting for confirmation from the node.
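For example (the pod name is a placeholder):
kubectl delete pod <pod_name> --grace-period=0 --force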
Generally, a fix requires you to change something about the pod's configuration (the Docker image, an environment variable, a command-line flag, etc.), in which case you should remove the old pod and start a new one. If your pod is running under a replication controller (which it should be), you can do a rolling update to the new version.
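As a rough sketch of that rolling update for a bare replication controller (names and tag are placeholders): the legacy command was kubectl rolling-update, which has been removed from recent kubectl releases; if your workload is a Deployment instead, use the kubectl set image / kubectl rollout commands shown earlier.
kubectl rolling-update <rc_name> --image=<image>:<new_tag>  # legacy; only available in older kubectl versions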