I am currently working on a monitoring service that will monitor Kubernetes deployments and their pods. I want to notify users when a deployment is not running the expected number of replicas, and also when a pod's containers restart unexpectedly. These may not be the right things to monitor, and I would greatly appreciate some feedback on what I should be monitoring.
Anyway, the main question is about the differences between all of the statuses of pods. By statuses I mean the Status column shown when running kubectl get pods. The statuses in question are:
- ContainerCreating
- ImagePullBackOff
- Pending
- CrashLoopBackOff
- Error
- Running
What causes pods/containers to go into these states?
For the first four statuses, are these states recoverable without user interaction?
What is the threshold for a CrashLoopBackOff?
Is Running the only status that has a Ready condition of True?
Any feedback would be greatly appreciated!
Also, would it be bad practice to use kubectl in an automated script for monitoring purposes? For example, logging the results of kubectl get pods to Elasticsearch every minute, along the lines of the sketch below?
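(The Elasticsearch URL and the pod-status index name in this sketch are placeholders for my own setup.)

```sh
#!/bin/sh
# Hypothetical sketch: ship the output of `kubectl get pods` to Elasticsearch
# once a minute. ES_URL and the "pod-status" index are placeholders.
ES_URL="http://localhost:9200"
while true; do
  kubectl get pods --all-namespaces -o json |
    curl -s -X POST "$ES_URL/pod-status/_doc" \
         -H 'Content-Type: application/json' --data-binary @-
  sleep 60
done
```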
- PodScheduled: the Pod has been scheduled to a node.
- ContainersReady: all containers in the Pod are ready.
- Initialized: all init containers have completed successfully.
- Ready: the Pod is able to serve requests and should be added to the load-balancing pools of all matching Services.
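If it helps for monitoring, these conditions can be read directly from the Pod status; a small sketch (my-pod is a placeholder name):

```sh
# Print each condition type with its status (True/False/Unknown)
kubectl get pod my-pod \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```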
To see the Deployment rollout status, run kubectl rollout status deployment/nginx-deployment. Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
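For scripted checks, roughly the same information can be extracted directly; a sketch using the deployment name from above:

```sh
# Blocks until the rollout finishes; exits non-zero if it fails or times out
kubectl rollout status deployment/nginx-deployment

# Compare available vs. desired replicas (e.g. prints "3/3")
kubectl get deployment nginx-deployment \
  -o jsonpath='{.status.availableReplicas}/{.spec.replicas}{"\n"}'
```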
A status of Completed means that the process inside the pod's container has finished successfully.
If a Pod is stuck in Pending, it means that it cannot be scheduled onto a node. Generally this is because there are insufficient resources of one type or another that prevent scheduling.
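The quickest way to see why a Pod is stuck in Pending is to look at its events:

```sh
# The Events section explains the scheduling failure,
# e.g. "0/3 nodes are available: 3 Insufficient cpu."
kubectl describe pod my-pod
```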
In the Kubernetes API, Pods have both a specification and an actual status. The status for a Pod object consists of a set of Pod conditions. You can also inject custom readiness information into the condition data for a Pod, if that is useful to your application. Pods are only scheduled once in their lifetime.
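Injecting custom readiness information is done with readiness gates; a minimal sketch, where the condition type example.com/feature-ready is a made-up name and a controller of your own would have to patch that condition into the Pod status:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  readinessGates:
    # The Pod only becomes Ready once this custom condition is True.
    - conditionType: "example.com/feature-ready"
  containers:
    - name: app
      image: nginx
```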
They are managed by the Kubernetes deployment controllers, which:
- wait for a specific amount of time between upgrading each pod;
- health-check each pod to ensure that the new version of the application is working correctly;
- stop the deployment if too many failures occur.
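Those behaviours correspond to fields on the Deployment itself; a sketch with illustrative values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  minReadySeconds: 10            # a new pod must be ready this long to count as available
  progressDeadlineSeconds: 600   # mark the rollout as failed if it stalls this long
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most one pod down during the upgrade
      maxSurge: 1                # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          readinessProbe:        # the health check used during the rollout
            httpGet:
              path: /
              port: 80
```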
With a Deployment, you can be sure your application will continue handling traffic, even if the Deployment hasn’t yet completed. Today, Kubernetes advises using Deployments to represent your workloads. Your Deployments will run and scale ReplicaSets automatically; ReplicaSets will in turn manage your Pods.
Once the scheduler assigns a Pod to a Node, the kubelet starts creating containers for that Pod using a container runtime. There are three possible container states: Waiting, Running, and Terminated.
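Those states are also visible per container; a sketch:

```sh
# Print each container's name and current state object
# (one of waiting/running/terminated)
kubectl get pod my-pod \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'
```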
You can see the Pod lifecycle details in the Kubernetes documentation. The recommended way to monitor a Kubernetes cluster and its applications is with Prometheus.
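As a sketch of that approach, alerting rules like the following cover exactly the two things the question asks about; they assume kube-state-metrics is installed, and the alert names and thresholds are placeholders:

```yaml
groups:
  - name: workload-health
    rules:
      # Fire when a Deployment has fewer available replicas than desired
      - alert: DeploymentReplicasMismatch
        expr: kube_deployment_status_replicas_available < kube_deployment_spec_replicas
        for: 5m
      # Fire when a container has restarted within the last 15 minutes
      - alert: ContainerRestarting
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 0
```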
I will try to explain what I see hidden behind these terms:
- ContainerCreating: shown while we wait for the image to be downloaded and the container to be created by Docker or another container runtime.
- ImagePullBackOff: shown when there is a problem downloading the image from a registry, for example wrong credentials for logging in to Docker Hub.
- Pending: the container is starting (if startup takes time), or it started but the readinessProbe failed.
- CrashLoopBackOff: shown when container restarts occur too often. For example, we have a process that tries to read a file that does not exist and crashes; Kubernetes then recreates the container, and the cycle repeats. There is no fixed restart-count threshold: the kubelet restarts a crashing container after an exponentially increasing back-off delay (10s, 20s, 40s, ..., capped at five minutes), and the back-off resets once the container has run for ten minutes without crashing.
- Error: this is pretty clear; some error prevented the container from running.
- Running: all is good, the container is running and the livenessProbe is OK.
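To check these states from a script, the waiting reason and the restart count are both in the Pod status; a sketch (my-pod is a placeholder):

```sh
# Waiting reason of the first container
# (e.g. ContainerCreating, ImagePullBackOff, CrashLoopBackOff)
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'

# How many times the first container has restarted
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
```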