
History of Kubernetes Pod Termination Events?

Tags:

kubernetes

Is there a way to view the history of pod termination statuses? E.g., if I look at the kubectl describe pod output, I see something similar to this:

State:      Running
  Started:      Mon, 10 Jul 2017 13:09:20 +0300
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  Started:      Thu, 06 Jul 2017 11:01:21 +0300
  Finished:     Mon, 10 Jul 2017 13:09:18 +0300
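
The same lastState block can also be read programmatically via JSONPath (the index 0 assumes a single-container pod):

    $ kubectl get pod <podname> -o jsonpath='{.status.containerStatuses[0].lastState}'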

The same kubectl describe pod output shows nothing about the termination in the pod events:

   Events:
  FirstSeen LastSeen    Count   From                    SubObjectPath       Type        Reason  Message
  --------- --------    -----   ----                    -------------       --------    ------  -------
  10m       10m     1   kubelet, gke-dev-default-d8f2dbc5-mbkb  spec.containers{demo}   Normal      Pulled  Container image "eu.gcr.io/project/image:v1" already present on machine
  10m       10m     1   kubelet, gke-dev-default-d8f2dbc5-mbkb  spec.containers{demo}   Normal      Created Created container with id 1d857caae77bdc43f0bc90fe045ed5050f85436479073b0e6b46454500f4eb5a
  10m       10m     1   kubelet, gke-dev-default-d8f2dbc5-mbkb  spec.containers{demo}   Normal      Started Started container with id 1d857caae77bdc43f0bc90fe045ed5050f85436479073b0e6b46454500f4eb5a

If I look at kubectl get events --all-namespaces, I see this event, but there is no obvious way to relate it to the particular pod:

  default   12m       12m       1         gke-dev-default-d8f2dbc5-mbkb   Node                Warning   OOMKilling   kernel-monitor, gke-dev-default-d8f2dbc5-mbkb   Memory cgroup out of memory: Kill process 1639 (java) score 2014 or sacrifice child
Killed process 1639 (java) total-vm:10828960kB, anon-rss:1013756kB, file-rss:22308kB
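
Filtering events by the involved object at least narrows things down to the node, though still not to the pod:

    $ kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=gke-dev-default-d8f2dbc5-mbkb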

And even the event details reported through the API have misleading info (like the default namespace, though the pod is actually in the demo namespace):

    "metadata": {
        "name": "gke-dev-default-d8f2dbc5-mbkb.14cff03fe771b053",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/events/gke-dev-default-d8f2dbc5-mbkb.14cff03fe771b053",
        "uid": "d5d3230e-6557-11e7-a486-42010a8401d3",
        "resourceVersion": "5278875",
        "creationTimestamp": "2017-07-10T10:09:18Z"
    },
    "involvedObject": {
        "kind": "Node",
        "name": "gke-dev-default-d8f2dbc5-mbkb",
        "uid": "gke-dev-default-d8f2dbc5-mbkb"
    },
    "reason": "OOMKilling",
    "message": "Memory cgroup out of memory: Kill process 1639 (java) score 2014 or sacrifice child\nKilled process 1639 (java) total-vm:10828960kB, anon-rss:1013756kB, file-rss:22308kB",
    "source": {
        "component": "kernel-monitor",
        "host": "gke-dev-default-d8f2dbc5-mbkb"
    },
    "firstTimestamp": "2017-07-10T10:09:18Z",
    "lastTimestamp": "2017-07-10T10:09:18Z",
    "count": 1,
    "type": "Warning"

So while I can see the last termination status via kubectl describe pod, what about the previous ones?

asked Jul 10 '17 by gerasalus

People also ask

How do you get logs of terminated pods in Kubernetes?

Running kubectl logs -p will fetch logs from resources that still exist at the API level. This means that once a pod has been deleted, its logs are unavailable through this command. As mentioned in other answers, the best way is to have your logs centralized via logging agents, or to push them directly to an external service.
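
For a pod that still exists but whose container has crashed and restarted, a quick check looks like this (pod and container names are placeholders; -c is only needed for multi-container pods):

    $ kubectl logs -p my-pod -c my-container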

How do I check my Kubernetes pod history?

To see what happened in the pod before it restarted, use kubectl logs your-pod-name --previous. You can redirect this to a file for later inspection, as shown below.
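
For example (the file name is arbitrary):

    $ kubectl logs your-pod-name --previous > pod-crash.log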

How long are Kubernetes events stored?

The default retention period of Kubernetes events is 1 hour. The retention period can be increased using the --event-ttl flag of kube-apiserver.
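
For example, on a self-managed control plane (this flag is not adjustable on most hosted offerings such as GKE), the API server can be started with a longer TTL:

    # keep events for 24 hours instead of the default 1 hour
    kube-apiserver --event-ttl=24h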

Where Kubernetes events are stored?

In a default Kubernetes setup, events are persisted into etcd, a key-value store. etcd is optimized for quick, strongly consistent lookups, but falls short at providing analytical capabilities over the data.
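
A common workaround (a sketch, not the only option) is to periodically snapshot events to external storage for later analysis:

    # dump all current events as JSON for archiving/analysis elsewhere
    $ kubectl get events --all-namespaces -o json > events-$(date +%F).json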


2 Answers

Eviction events are Node events; that is why you don't see them among the Pod's events. If you run kubectl describe node <node_name> for the node the pod was running on, you can see the eviction events.

To test this, run a deployment that will constantly get OOMKilled:

kubectl run memory-hog --image=gisleburt/my-memory-hog --replicas=2 --limits=memory=128Mi
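
On recent kubectl versions, kubectl run no longer accepts --replicas or --limits; a rough equivalent (same example image) is:

    # create the deployment (kubectl >= 1.19 accepts --replicas here)
    $ kubectl create deployment memory-hog --image=gisleburt/my-memory-hog --replicas=2
    # add the low memory limit so the containers get OOMKilled
    $ kubectl set resources deployment memory-hog --limits=memory=128Mi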

Once the pods start running and dying, you can run kubectl get events or kubectl describe node <node_name>, and you'll see events like:

Events:
  FirstSeen LastSeen    Count   From                            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----                            -------------   --------    ------      -------
  2m        2m      1   kernel-monitor, gke-test-default-pool-649c88dd-818j         Warning     OOMKilling  Memory cgroup out of memory: Kill process 7345 (exe) score 50000 or sacrifice child
Killed process 7345 (exe) total-vm:6092kB, anon-rss:64kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7409 (exe) score 51000 or sacrifice child
Killed process 7409 (exe) total-vm:6092kB, anon-rss:68kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7495 (exe) score 50000 or sacrifice child
Killed process 7495 (exe) total-vm:6092kB, anon-rss:64kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7561 (exe) score 49000 or sacrifice child
Killed process 7561 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7638 (exe) score 494000 or sacrifice child
Killed process 7638 (exe) total-vm:7536kB, anon-rss:148kB, file-rss:1832kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7728 (exe) score 49000 or sacrifice child
Killed process 7728 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 7876 (exe) score 48000 or sacrifice child
Killed process 7876 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 8013 (exe) score 480000 or sacrifice child
Killed process 8013 (exe) total-vm:15732kB, anon-rss:152kB, file-rss:1768kB
  2m    2m  1   kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  Memory cgroup out of memory: Kill process 8140 (exe) score 1023000 or sacrifice child
Killed process 8140 (exe) total-vm:24184kB, anon-rss:448kB, file-rss:3704kB
  2m    25s 50  kernel-monitor, gke-test-default-pool-649c88dd-818j     Warning OOMKilling  (events with common reason combined)
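
To pull out just these OOM events without scanning the whole describe output, events support field selectors (the reason value matches the output above):

    $ kubectl get events --field-selector reason=OOMKilling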
answered Sep 20 '22 by ahmet alp balkan


Alternatively, you could see the logs of the previous, terminated container with:

$ kubectl logs podname --previous
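
If the pod runs multiple containers, name the container explicitly (the container name here is a placeholder):

    $ kubectl logs podname --previous -c containername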
answered Sep 20 '22 by rrh