I'm starting to introduce liveness and readiness probes in my services, and I'm not sure whether I've succeeded in getting them working, because I can't confidently interpret the status reported by kubectl.
kubectl describe pod mypod
gives me something like this:
Name:           myapp-5798dd798c-t7dqs
Namespace:      dev
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 24 Oct 2018 13:22:54 +0200
Labels:         app=myapp
                pod-template-hash=1354883547
Annotations:    version: v2
Status:         Running
IP:             10.1.0.103
Controlled By:  ReplicaSet/myapp-5798dd798c
Containers:
  myapp:
    Container ID:   docker://5d39cb47d2278eccd6d28c1eb35f93112e3ad103485c1c825de634a490d5b736
    Image:          myapp:latest
    Image ID:       docker://sha256:61dafd0c208e2519d0165bf663e4b387ce4c2effd9237fb29fb48d316eda07ff
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 24 Oct 2018 13:23:06 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/healthz/live delay=0s timeout=10s period=60s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz/ready delay=3s timeout=3s period=5s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvnc2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-gvnc2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gvnc2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age  From                         Message
  ----     ------                 ---- ----                         -------
  Normal   Scheduled              84s  default-scheduler            Successfully assigned myapp-5798dd798c-t7dqs to docker-for-desktop
  Normal   SuccessfulMountVolume  84s  kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-gvnc2"
  Normal   Pulled                 75s  kubelet, docker-for-desktop  Container image "myapp:latest" already present on machine
  Normal   Created                74s  kubelet, docker-for-desktop  Created container
  Normal   Started                72s  kubelet, docker-for-desktop  Started container
  Warning  Unhealthy              65s  kubelet, docker-for-desktop  Readiness probe failed: Get http://10.1.0.103:80/healthz/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Now, I note that the container has Ready: True, but the last event in the events list is a Warning with reason Unhealthy, caused by a failed readiness probe. (Looking in the application logs, I can see that there have been many more incoming requests to the readiness endpoint since then, and that they all succeeded.)
How should I interpret this information? Does Kubernetes consider my pod to be ready, or not ready?
Readiness probes: this probe tells you when your app is ready to serve traffic. Kubernetes ensures the readiness probe passes before allowing a Service to send traffic to the pod. If the readiness probe fails, Kubernetes stops sending traffic to the pod until it passes again.

Readiness probes indicate whether your container is ready to serve requests. If the check fails, the container is not restarted, but the Pod's IP address is removed from the Service's endpoints, so it no longer receives requests.

failureThreshold: when a probe fails, Kubernetes retries failureThreshold times before giving up. Giving up on a liveness probe means restarting the container; giving up on a readiness probe means marking the Pod Unready.

A liveness probe is used to indicate whether a container is still running. The probe settings from the describe output in the question map onto pod spec fields as sketched below.
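For reference, the Liveness: and Readiness: lines kubectl prints are summaries of the probe fields in the pod spec. The settings from the question's describe output correspond to a container spec fragment roughly like this (a sketch; the delay/timeout/period/#success/#failure figures map one-to-one onto the fields below):

# Sketch of the container spec implied by the describe output in the
# question; image, port, and paths are taken from that output.
containers:
  - name: myapp
    image: myapp:latest
    ports:
      - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz/live
        port: 80
      initialDelaySeconds: 0   # delay=0s
      timeoutSeconds: 10       # timeout=10s
      periodSeconds: 60        # period=60s
      successThreshold: 1      # #success=1
      failureThreshold: 3      # #failure=3
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: 80
      initialDelaySeconds: 3   # delay=3s
      timeoutSeconds: 3        # timeout=3s
      periodSeconds: 5         # period=5s
      successThreshold: 1      # #success=1
      failureThreshold: 3      # #failure=3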
A pod is ready when the readiness probes of all its containers succeed. In your case the readiness probe failed on its first attempt, but a subsequent probe succeeded and the container went into the ready state. Contrast that with the example of a persistently failing readiness probe below, where the probe was attempted 58 times over 11 minutes and failed every time.
Events:
  Type     Reason     Age                  From                      Message
  ----     ------     ----                 ----                      -------
  Normal   Scheduled  11m                  default-scheduler         Successfully assigned default/upnready to mylabserver.com
  Normal   Pulling    11m                  kubelet, mylabserver.com  pulling image "luksa/kubia:v3"
  Normal   Pulled     11m                  kubelet, mylabserver.com  Successfully pulled image "luksa/kubia:v3"
  Normal   Created    11m                  kubelet, mylabserver.com  Created container
  Normal   Started    11m                  kubelet, mylabserver.com  Started container
  Warning  Unhealthy  103s (x58 over 11m)  kubelet, mylabserver.com  Readiness probe failed: Get http://10.44.0.123:80/: dial tcp 10.44.0.123:80: connect:
The container status is also not ready, as can be seen below:
kubectl get pods -l run=upnready
NAME       READY   STATUS    RESTARTS   AGE
upnready   0/1     Running   0          17m
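To see exactly which condition the kubelet recorded, kubectl get pod upnready -o yaml shows it under status.conditions. For a pod whose readiness probe keeps failing, the relevant fragment looks roughly like this (illustrative; the exact reason and message text can vary by cluster version):

# Illustrative fragment of the pod's status; the Ready condition is
# what Services/Endpoints consult before routing traffic to the pod.
status:
  conditions:
    - type: Ready
      status: "False"
      reason: ContainersNotReady
      message: 'containers with unready status: [upnready]'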
In your case, by contrast, the readiness probe is passing its health check, and your pod is in the Ready state.
You can make use of initialDelaySeconds, periodSeconds, and timeoutSeconds to tune probe behaviour and get better results, for example along the lines of the sketch below.
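The single Unhealthy event in the question is most likely just startup noise: with delay=3s and timeout=3s, the first readiness probe fired three seconds after the container started, before the app could answer within the timeout. Raising initialDelaySeconds avoids that. A tuned readiness probe might look like this (values are illustrative and should be based on how long your app actually takes to start and respond):

readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 10  # give the app time to start before the first probe
  periodSeconds: 5         # probe every 5 seconds
  timeoutSeconds: 2        # each probe must answer within 2 seconds
  failureThreshold: 3      # three consecutive failures mark the Pod Unready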