Can somebody explain why the following command shows that there have been no restarts, but the age is 2 hours when it was started 17 days ago?
kubectl get pod -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP                NODE
api-depl-nm-xxx       1/1     Running   0          17d   xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal
ei-depl-nm-xxx        1/1     Running   0          2h    xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal
jenkins-depl-nm-xxx   1/1     Running   0          2h    xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal
The deployments have been running for 17 days:
kubectl get deploy -o wide
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINER(S)      IMAGE(S)   SELECTOR
api-depl-nm       1         1         1            1           17d   api-depl-nm       xxx        name=api-depl-nm
ei-depl-nm        1         1         1            1           17d   ei-depl-nm        xxx        name=ei-depl-nm
jenkins-depl-nm   1         1         1            1           17d   jenkins-depl-nm   xxx        name=jenkins-depl-nm
The start time was 2 hours ago:
kubectl describe po ei-depl-nm-xxx | grep Start
Start Time: Tue, 24 Jul 2018 09:07:05 +0100
Started: Tue, 24 Jul 2018 09:10:33 +0100
The application logs show that it restarted, so why is the restart count 0?
Updated with more information in response to the answer.
I may be wrong, but I don't think the deployment was updated or scaled. It certainly was not done by me, and no one else has access to the system.
kubectl describe deployment ei-depl-nm
...
CreationTimestamp: Fri, 06 Jul 2018 17:06:24 +0100
Labels: name=ei-depl-nm
...
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
...
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: ei-depl-nm-xxx (1/1 replicas created)
Events: <none>
I may be wrong, but I don't think the worker node was restarted or shut down.
kubectl describe nodes ip-xxx.eu-west-1.compute.internal
Taints: <none>
CreationTimestamp: Fri, 06 Jul 2018 16:39:40 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----                 ------  -----------------                ------------------               ------                      -------
  NetworkUnavailable   False   Fri, 06 Jul 2018 16:39:45 +0100  Fri, 06 Jul 2018 16:39:45 +0100  RouteCreated                RouteController created a route
  OutOfDisk            False   Wed, 25 Jul 2018 16:30:36 +0100  Fri, 06 Jul 2018 16:39:40 +0100  KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure       False   Wed, 25 Jul 2018 16:30:36 +0100  Wed, 25 Jul 2018 02:23:01 +0100  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure         False   Wed, 25 Jul 2018 16:30:36 +0100  Wed, 25 Jul 2018 02:23:01 +0100  KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready                True    Wed, 25 Jul 2018 16:30:36 +0100  Wed, 25 Jul 2018 02:23:11 +0100  KubeletReady                kubelet is posting ready status
......
Non-terminated Pods: (4 in total)
  Namespace    Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                          ------------  ----------  ---------------  -------------
  default      ei-depl-nm-xxx                                100m (5%)     0 (0%)      0 (0%)           0 (0%)
  default      jenkins-depl-nm-xxx                           100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-dns-xxx                                  260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-ip-xxx.eu-west-1.compute.internal  100m (5%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  560m (28%)    0 (0%)      110Mi (1%)       170Mi (2%)
Events: <none>
Container restarts: a restarting container can indicate problems with memory (see the Out of Memory section), CPU usage, or just an application exiting prematurely. If a container is being restarted because of CPU usage, try increasing the requested and limit amounts for CPU in the pod spec, for example as shown below.
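For example (the deployment and container names are the ones from this question, and the values are only illustrative), the CPU request and limit can be raised in place with something like:
kubectl set resources deployment ei-depl-nm -c=ei-depl-nm --requests=cpu=200m --limits=cpu=500m
Note that changing the resources this way modifies the pod template, so it triggers a rolling update itself, i.e. a new ReplicaSet and fresh Pods with a restart count of 0.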
You will need to change the restart policy of the pod: A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. restartPolicy applies to all Containers in the Pod.
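To confirm what a given Pod is using, something like the following should work (the field paths are standard; the pod name is the placeholder from the question):
kubectl explain pod.spec.restartPolicy
kubectl get pod ei-depl-nm-xxx -o jsonpath='{.spec.restartPolicy}'
Note that Pods managed by a Deployment only support restartPolicy: Always.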
Kubectl doesn't have a direct way of restarting individual Pods. Pods are meant to stay running until they're replaced as part of your deployment routine. This is usually when you release a new version of your container image.
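For example (the image reference is a placeholder; the deployment and container names are from the question), a rolling replacement is typically triggered and then watched with something like:
kubectl set image deployment/ei-depl-nm ei-depl-nm=<registry>/<image>:<new-tag>
kubectl rollout status deployment/ei-depl-nm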
For example, Kubernetes may run 10 containers with a memory request value of 1 GB on a node with 10 GB of memory. However, if these containers have a memory limit of 1.5 GB, some of the pods may use more memory than they requested, and the node will then run out of memory and need to kill some of the pods.
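If you suspect this happened, the previous container's termination reason is usually recorded in the Pod status; a rough check (the jsonpath below is illustrative, the pod name is the placeholder from the question) is:
kubectl get pod ei-depl-nm-xxx -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
A value such as OOMKilled points at memory, whereas a Pod removed by node-pressure eviction shows up as an Evicted Pod rather than a restart.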
There are two things that might have happened:
The deployment was updated or scaled:
A new ReplicaSet is created and the old ReplicaSet is scaled down (its Pods are deleted). You can check this by running:
$ kubectl describe deployment <deployment_name>
...
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  1m   deployment-controller  Scaled up replica set testdep1-75488876f6 to 1
  Normal  ScalingReplicaSet  1m   deployment-controller  Scaled down replica set testdep1-d4884df5f to 0
The Pods created by the old ReplicaSet are terminated, and the new ReplicaSet creates brand-new Pods with a restart count of 0 and an age starting from 0 seconds (some additional checks are shown below).
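To double-check this scenario even when the deployment's events have expired, you can look at the ReplicaSets and the rollout history (the commands are standard; the name is a placeholder):
kubectl get rs -o wide
kubectl rollout history deployment <deployment_name>
An old ReplicaSet scaled down to 0, or more than one revision in the history, would indicate that the pod template was changed at some point.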
The worker node was restarted or shut down.
You can check the node start events by running the command below (if they have already expired, see the note after the sample output):
kubectl describe nodes <node_name>
...
  Type    Reason                   Age                From                     Message
  ----    ------                   ----               ----                     -------
  Normal  Starting                 32s                kubelet, <node-name>     Starting kubelet.
  Normal  NodeHasSufficientPID     31s (x5 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  31s                kubelet, <node-name>     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasNoDiskPressure
  Normal  Starting                 10s                kube-proxy, <node-name>  Starting kube-proxy.
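Node events are only retained for a limited time (one hour by default), which is likely why the node in the question shows Events: <none>. A rough fallback (the jsonpath below is illustrative) is to look at when the node's Ready condition last changed:
kubectl get node <node_name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'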