Difference between kubernetes metrics "/metrics/resource/v1alpha1" and "/metrics/cadvisor" endpoints

I'm working on memory monitoring using Prometheus (the prometheus-operator Helm chart). While investigating the values, I've noticed that memory usage (container_memory_working_set_bytes) is being scraped from two endpoints:

  • /metrics/cadvisor
  • /metrics/resource/v1alpha1 (/metrics/resource since Kubernetes 1.18)

I've figured out how to disable one of the endpoints in the chart but I'd like to understand the purpose of both.
I understand that /metrics/cadvisor returns three kinds of series: the pod's container (or more, if a pod has multiple containers), a special container named POD (is that some internal memory overhead of running the pod itself?), and a sum over all of the pod's containers (reported with an empty label, container="").
On the other hand, /metrics/resource/v1alpha1 returns only the memory usage of a pod's containers, with no container="POD" series and no container="" sum.

Is /metrics/resource/v1alpha1 then planned to replace /metrics/cadvisor as a single source of metrics? Since both endpoints are enabled by default in prometheus-operator and return the same metric, any sum() query over it can report values twice as large as the real memory usage.
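
For example, a naive aggregation like the one below (a sketch; the namespace filter and exact label set are assumptions that depend on your ServiceMonitor configuration) counts every container once per scraping endpoint:

    # PromQL: sums the working set over every scraped series;
    # with both kubelet endpoints scraped, each container is counted twice.
    sum(container_memory_working_set_bytes{namespace="default"}) by (pod)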

I'd appreciate any clarification on this subject!

asked Jul 21 '20 by Nav

1 Answer

This answer is partial.

I understand that /metrics/cadvisor returns three kinds of series: the pod's container (or more, if a pod has multiple containers), a special container named POD (is that some internal memory overhead of running the pod itself?), and a sum over all of the pod's containers (reported with an empty label, container="").

container_name=="POD" is the "pause" container for the pods. The pause container is a container which holds the network namespace for the pod. Kubernetes creates pause containers to acquire the respective pod’s IP address and set up the network namespace for all other containers that join that pod. This container is a part of whole ecosystem and it starts first in pods to configure PODs network in the first place prior to scheduling another pods. After pod has been started - there is nothing to do for pause container.

Pause container code for your reference: https://github.com/kubernetes/kubernetes/tree/master/build/pause

Example of pause containers:

docker ps | grep pause
k8s_POD_etcd-master-1_kube-system_ea5105896423fc919bf9bfc0ab339888_0
k8s_POD_kube-scheduler-master-1_kube-system_155707e0c19147c8dc5e997f089c0ad1_0
k8s_POD_kube-apiserver-master-1_kube-system_fe660a7e8840003352195a8c40a01ef8_0
k8s_POD_kube-controller-manager-master-1_kube-system_807045fe48b23a157f7fe1ef20001ba0_0
k8s_POD_kube-proxy-76g9l_kube-system_e2348a94-eb96-4630-86b2-1912a9ce3a0f_0
k8s_POD_kube-flannel-ds-amd64-76749_kube-system_bf441436-bca3-4b49-b6fb-9e031ef7513d_0

container_name!=="POD" It filters out metric streams for the pause container, not metadata generally. Most people, if they want to graph the containers in their pod, don't want to see resource usage for the pause container, as it doesn't do much. The name of the pause container is an implementation detail of some container runtimes, but doesn't apply to all, and isn't guaranteed to stick around.
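
In practice, queries usually exclude both the pause container and the per-pod aggregate. A sketch (on cAdvisor from Kubernetes 1.16 on, the label is container; older versions use container_name):

    # PromQL: per-container working set, dropping the pause container ("POD")
    # and the pod-level aggregate (empty container label).
    sum(container_memory_working_set_bytes{container!="POD", container!=""}) by (pod, container)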

The official (now obsolete, v1.14) documentation page describes the differences between cAdvisor and resource metrics monitoring:

Kubelet

The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from the container runtime, through the container runtime interface. For the legacy docker integration, it fetches this information from cAdvisor. It then exposes the aggregated pod resource usage statistics through the kubelet resource metrics api. This api is served at /metrics/resource/v1alpha1 on the kubelet’s authenticated and read-only ports.

cAdvisor

cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers in the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the ‘root’ container on the machine.

Also, you should know that the kubelet exposes metrics on the /metrics/cadvisor, /metrics/resource and /metrics/probes endpoints. These three endpoints do not have the same lifecycle.
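
If you want to compare what each endpoint actually serves, you can fetch them through the API server's node proxy (a sketch; substitute one of your own node names, and note that on Kubernetes 1.18+ the resource path drops the /v1alpha1 suffix):

    # Pick a node, then pull each kubelet metrics endpoint via the apiserver proxy.
    NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
    kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/cadvisor" | head
    kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/resource/v1alpha1" | head
    kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/probes" | head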

As per the prometheus-operator Helm chart's values.yaml, there are three options, and you can disable the ones you don't need:

    ## Enable scraping /metrics/cadvisor from kubelet's service
    ##
    cAdvisor: true

    ## Enable scraping /metrics/probes from kubelet's service
    ##
    probes: true

    ## Enable scraping /metrics/resource from kubelet's service
    ##
    resource: true
    # From kubernetes 1.18, /metrics/resource/v1alpha1 renamed to /metrics/resource
    resourcePath: "/metrics/resource/v1alpha1" 
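
For example, to keep cAdvisor as the single memory source and avoid the double counting described in the question, an override could look like this (a sketch; the kubelet.serviceMonitor key path is an assumption based on recent chart versions, so verify it against your chart's values.yaml):

    # values.yaml override (hypothetical layout; check your chart version)
    kubelet:
      serviceMonitor:
        cAdvisor: true    # keep scraping /metrics/cadvisor
        probes: true
        resource: false   # stop scraping /metrics/resource to avoid double counting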

In my opinion, /metrics/resource won't replace Google's cAdvisor; just disable whatever you don't need in your case. It depends on your needs. For example, I found the article Kubernetes: monitoring with Prometheus – exporters, a Service Discovery, and its roles, where four different tools are used to monitor everything:

  1. metrics-server – CPU, memory, file descriptors, disks, etc. of the cluster

  2. cAdvisor – Docker daemon metrics – container monitoring

  3. kube-state-metrics – deployments, pods, nodes

  4. node-exporter – EC2 instance metrics – CPU, memory, network

In your case, to monitor memory, I believe tool 1 (metrics-server) will be enough :)
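
As a quick sanity check, the Metrics API served by metrics-server is directly queryable with kubectl (assuming metrics-server is installed in your cluster):

    # Per-pod CPU/memory from the Metrics API, sorted by memory usage.
    kubectl top pod --all-namespaces --sort-by=memory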

answered Sep 28 '22 by Vit