 

Kubernetes Liveness Probe Logging

We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging.

We also use LivenessProbes on our containers to make sure they operate as expected.

Our problem is that lines the LivenessProbe command writes to STDOUT do not appear to reach Elasticsearch.

Is there a way to make fluentd ship LivenessProbe output the way it does for the regular containers in a pod?
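
For reference, a minimal sketch of the kind of setup we mean; the Pod name, image, and probe command below are illustrative, not our real manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-logging-demo          # illustrative name
spec:
  containers:
  - name: app
    image: busybox                  # placeholder image
    args: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        # Anything echoed here is captured by the probe machinery,
        # not written to the container's log stream that fluentd tails.
        command: ["sh", "-c", "echo 'liveness check ran'; exit 0"]
      initialDelaySeconds: 5
      periodSeconds: 10
```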

Asked Dec 24 '15 by Erez Rabih

People also ask

Why did liveness probe fail in Kubernetes?

The liveness probe will be marked as failed when the container issues an unhealthy response. A gRPC probe is also considered failed if the service doesn't implement the gRPC health checking protocol.
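
Note that native gRPC probes only exist in Kubernetes releases far newer than the 1.1.3 used in the question. For illustration, a gRPC liveness probe (the port value is an assumption) looks roughly like this:

```yaml
livenessProbe:
  grpc:
    # The kubelet calls the standard grpc.health.v1.Health/Check service
    # on this port; if the service doesn't implement it, the probe fails.
    port: 2379
  initialDelaySeconds: 10
```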

How do I fix liveness probe failure?

Increase the timeout of the liveness probe. To increase the liveness probe timeout, configure the Managed controller item and update the value of "Health Check Timeout". By default it is set to 10 (10 seconds). You may increase it to, for example, 30 (30 seconds).
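
The "Managed controller item" wording above comes from a hosted controller product rather than from core Kubernetes; in plain Kubernetes the equivalent knob is timeoutSeconds on the probe itself. A hedged sketch, with an assumed /healthz endpoint and example values:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 30        # raised from the 1-second default for slow checks
  periodSeconds: 20
```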

Does liveness probe restart pod or container?

Liveness probes take action at the compute level, restarting containers if they become unhealthy. A restart means Kubernetes replaces the Pod's container with a new one; the Pod itself isn't replaced and continues to run on the same node, but with a new container.

What are Kubernetes liveness probes?

Kubernetes provides liveness probes to detect and remedy such situations. In this exercise, you create a Pod that runs a container based on the k8s.gcr.io/busybox image. The configuration file for the Pod, sketched below, defines a single Container.
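
A sketch of that configuration file, reconstructed along the lines of the standard Kubernetes docs exec-probe example (exact values in the exercise may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        # Succeeds while /tmp/healthy exists; after ~30s the file is removed,
        # the probe starts failing, and the kubelet restarts the container.
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```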

What happens when a Kubernetes probe fails?

failureThreshold: When a probe fails, Kubernetes will retry it failureThreshold times before giving up. Giving up in the case of a liveness probe means restarting the container; in the case of a readiness probe, the Pod is marked Unready. The default is 3.
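
As a small illustration (the endpoint and values are assumptions, not from the original answer):

```yaml
livenessProbe:
  httpGet:
    path: /healthz        # assumed endpoint
    port: 8080
  periodSeconds: 10
  # With these settings the container is only restarted after three
  # consecutive failed checks, i.e. roughly 30 seconds of unhealthiness.
  failureThreshold: 3
```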

Do I need to configure liveness probes when exiting a pod?

If your application gracefully exits when encountering such an issue, you won't necessarily need to configure liveness probes, but there can still be bugs you don't know about. The pod will be restarted as per the configured/default restart policy.

What are liveness probes and readiness probes?

Liveness probes check whether the pod is healthy, and if the pod is deemed unhealthy a restart is triggered; this action is different from that of readiness probes, which I discussed in my previous post. Let's look at the components of the probes and dive into how to configure and troubleshoot liveness probes.
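
A hedged sketch of the two probe types side by side on one container; the image, paths, and ports are assumptions chosen for illustration:

```yaml
containers:
- name: web
  image: nginx                # illustrative image
  readinessProbe:
    httpGet:
      path: /ready            # failing here removes the Pod from Service endpoints
      port: 80
    periodSeconds: 5
  livenessProbe:
    httpGet:
      path: /healthz          # failing here restarts the container
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
```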


1 Answer

The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you're interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API.
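
For example, a failed probe typically surfaces as a Warning event on the pod, visible with kubectl describe pod <pod-name> or through the events API. An abridged, illustrative shape of such an event (the field values are assumptions):

```yaml
apiVersion: v1
kind: Event
type: Warning
reason: Unhealthy
# The probe command's captured output is appended to the event message.
message: "Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory"
involvedObject:
  kind: Pod
  name: liveness-exec         # illustrative pod name
source:
  component: kubelet
```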

The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs.

Feel free to file a feature request as a GitHub issue if you have ideas about what you'd like to be done with the output :)

Answered Oct 17 '22 by Alex Robinson