We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging.
We also use LivenessProbes on our containers to make sure they operate as expected.
Our problem is that lines the LivenessProbe writes to STDOUT don't appear to reach Elasticsearch.
Is there a way to make fluentd ship LivenessProbe output the way it does for the regular containers in a pod?
A liveness probe is marked as failed when the container returns an unhealthy response. A gRPC liveness probe is also considered failed if the service doesn't implement the gRPC health checking protocol.
To increase the liveness probe timeout, set the `timeoutSeconds` field in the probe spec. By default it is 1 (1 second); you can increase it to, for example, 30 (30 seconds).
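A minimal sketch of how this looks in a probe spec, assuming an HTTP probe (the `/healthz` path and port are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz     # illustrative health endpoint
    port: 8080         # illustrative container port
  timeoutSeconds: 30   # raised from the 1-second default
```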
Liveness probes take action at the compute level, restarting containers that become unhealthy. A restart means Kubernetes replaces the container with a new one; the Pod itself isn't replaced and continues to run on the same node, just with a fresh container.
Kubernetes provides liveness probes to detect and remedy such situations. As an exercise, you can create a Pod that runs a single container based on the k8s.gcr.io/busybox image; the configuration file for the Pod is sketched below.
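A sketch of that configuration, modeled on the liveness-exec example in the Kubernetes docs (the pod name and the `/tmp/healthy` trick come from that example; treat the exact values as illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    test: liveness
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    # Healthy for the first 30 seconds, then the probe file disappears
    # and the liveness probe starts failing.
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```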
failureThreshold: when a probe fails, Kubernetes retries it failureThreshold times before giving up. For a liveness probe, giving up means restarting the container; for a readiness probe, the Pod is marked Unready. Defaults to 3.
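For example, making a liveness probe more tolerant of transient failures (values illustrative):

```yaml
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  periodSeconds: 5      # probe every 5 seconds
  failureThreshold: 5   # restart only after 5 consecutive failures (default: 3)
```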
If your application gracefully exits when it encounters such an issue, you won't necessarily need a liveness probe, but there can still be bugs you don't know about. Either way, the container will be restarted according to the pod's configured (or default) restart policy.
Liveness probes check whether the container is healthy; if it is deemed unhealthy, the kubelet triggers a restart. This action is different from that of readiness probes, which I discussed in my previous post. Let's look at the components of the probes and dive into how to configure and troubleshoot liveness probes.
The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you're interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API.
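In practice, that means a failed probe's output shows up as a `Liveness probe failed: ...` event, which you can see with `kubectl describe pod <pod-name>` (or by querying events through the API).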
The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs.
Feel free to file a feature request in a GitHub issue if you have ideas of what you'd like to be done with the output :)