When I describe my pod, I can see the following conditions:
$ kubectl describe pod blah-84c6554d77-6wn42
...
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   True
  PodScheduled      True
...
$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE                                        NOMINATED NODE   READINESS GATES
blah-84c6554d77-6wn42   1/1     Running   46         23h   10.247.76.179   xxx-x-x-xx-123.nodes.xxx.d.ocp.xxx.xxx.br   <none>           <none>
...
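For reference, the same conditions can also be printed directly with jsonpath (pod name copied from the output above), which is a quick way to compare what kubectl get summarizes against the individual conditions:

$ kubectl get pod blah-84c6554d77-6wn42 -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'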
I wonder how this can be possible: all the containers in the pod are showing ready=true, but the pod is ready=false.
Has anyone experienced this before? Do you know what else could be causing the pod to not be ready?
I'm running Kubernetes version 1.15.4.
I can see in the code that
// The status of "Ready" condition is "True", if all containers in a pod are ready
// AND all matching conditions specified in the ReadinessGates have status equal to "True".
but I haven't defined any custom readiness gates. How can I check the reason for the check failure? I couldn't find this in the docs for pod-readiness-gate.
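The Ready condition does carry reason and message fields (standard PodCondition fields), so something like the following should show whatever the kubelet last recorded for it:

$ kubectl get pod blah-84c6554d77-6wn42 -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'

Note that the spec below has no spec.readinessGates at all, so only container readiness (and the kubelet's own bookkeeping) should be in play.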
Here is the full pod YAML:
$ kubectl get pod blah-84c6554d77-6wn42 -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-10-17T04:05:30Z"
  generateName: blah-84c6554d77-
  labels:
    app: blah
    commit: cb511424a5ec43f8dghdfdwervxz8a19edbb
    pod-template-hash: 84c6554d77
  name: blah-84c6554d77-6wn42
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: blah-84c6554d77
    uid: 514da64b-c242-11e9-9c5b-0050123234523
  resourceVersion: "19780031"
  selfLink: /api/v1/namespaces/blah/pods/blah-84c6554d77-6wn42
  uid: 62be74a1-541a-4fdf-987d-39c97644b0c8
spec:
  containers:
  - env:
    - name: URL
      valueFrom:
        configMapKeyRef:
          key: url
          name: external-mk9249b92m
    image: myregistry/blah:3.0.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthcheck
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 3
    name: blah
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 10
      httpGet:
        path: /healthcheck
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 3
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4tp6z
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: xxxxxxxxxxx
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-4tp6z
    secret:
      defaultMode: 420
      secretName: default-token-4tp6z
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-10-17T04:14:22Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-10-17T09:47:15Z"
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-10-17T07:54:55Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-10-17T04:14:18Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://33820f432a5a372d028c18f1b0376e2526ef65871f4f5c021e2cbea5dcdbe3ea
    image: myregistry/blah:3.0.0
    imageID: docker-pullable://myregistry/blah:@sha256:5c0634f03627bsdasdasdasdasdasdc91ce2147993a0181f53a
    lastState:
      terminated:
        containerID: docker://5c8d284f79aaeaasdasdaqweqweqrwt9811e34da48f355081
        exitCode: 1
        finishedAt: "2019-10-17T07:49:36Z"
        reason: Error
        startedAt: "2019-10-17T07:49:35Z"
    name: blah
    ready: true
    restartCount: 46
    state:
      running:
        startedAt: "2019-10-17T07:54:39Z"
  hostIP: 10.247.64.115
  phase: Running
  podIP: 10.247.76.179
  qosClass: BestEffort
  startTime: "2019-10-17T04:14:22Z"
Thanks
You have a readinessProbe configured:
readinessProbe:
  failureThreshold: 10
  httpGet:
    path: /healthcheck
    port: 8080
    scheme: HTTP
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.
tl;dr: check whether /healthcheck on port 8080 is returning a successful HTTP status code and, if the probe is not used or not necessary, drop it.
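If you want to dig further before dropping it, something along these lines should work (the pod name matches the one in the question; the exec assumes curl is available inside the image, which may not be the case):

# Events recorded for this pod, including readiness/liveness probe failures
$ kubectl get events --field-selector involvedObject.name=blah-84c6554d77-6wn42

# Hit the probed endpoint from inside the container
# (assumes curl is installed in the image -- swap in wget or a debug sidecar if not)
$ kubectl exec blah-84c6554d77-6wn42 -- curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:8080/healthcheck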