I want to monitor disk usage of the persistent volumes in my cluster. I am using CoreOS Kube Prometheus. A dashboard is trying to query a metric called kubelet_volume_stats_capacity_bytes, which is no longer available in Kubernetes versions starting from v1.12.
I am using Kubernetes v1.13.4 and hostpath-provisioner to provision volumes based on persistent volume claims. I want to access the current disk usage metrics for each persistent volume.
kube_persistentvolumeclaim_resource_requests_storage_bytes is available, but it only shows the size requested by the persistent volume claim in bytes.
container_fs_usage_bytes does not fully cover my problem.
To get the usage, create a debugging pod that mounts your PVC and check the usage from inside it. Depending on your storage provider, this should work. Apply a manifest like the one sketched below with kubectl apply -f volume-size-debugger.yaml, and run a shell inside it with kubectl exec -it volume-size-debugger sh.
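A minimal sketch of such a manifest, assuming a claim named my-pvc (the claim name, image, and mount path are placeholders; substitute your own):

apiVersion: v1
kind: Pod
metadata:
  name: volume-size-debugger
spec:
  containers:
  - name: debugger
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc   # placeholder: the PVC you want to inspect

Inside the shell, df -h /data prints the size and current usage of the mounted volume.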
A persistent volume is a piece of storage in a cluster that an administrator has provisioned. It is a resource in the cluster, just as a node is a cluster resource.
The following metrics should be used for monitoring persistent volume stats in Kubernetes (the PVC name is exported in the persistentvolumeclaim label):
kubelet_volume_stats_capacity_bytes - the per-PVC capacity in bytes.
kubelet_volume_stats_used_bytes - the per-PVC space usage in bytes.
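With the prometheus-operator used by CoreOS Kube Prometheus, these metrics can also drive an alert when a volume fills up. The following PrometheusRule is only a sketch; the rule name, labels, and the 80% threshold are placeholder assumptions:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pvc-usage
  labels:
    prometheus: k8s
    role: alert-rules
spec:
  groups:
  - name: pvc-usage
    rules:
    - alert: PersistentVolumeClaimAlmostFull
      expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.8
      for: 10m
      labels:
        severity: warning
      annotations:
        message: PVC {{ $labels.persistentvolumeclaim }} is more than 80% full.

The same ratio, without the comparison, can be charted in a Grafana panel to show per-PVC usage over time.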
Kubernetes supports two volumeModes of PersistentVolumes: Filesystem and Block. volumeMode is an optional API parameter; Filesystem is the default mode used when the volumeMode parameter is omitted.
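For illustration, a claim that sets the parameter explicitly; the claim name, storage class, and size below are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem   # the default; use Block for a raw block device
  storageClassName: hostpath
  resources:
    requests:
      storage: 1Gi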
Yes, in the newest versions of Kubernetes you cannot monitor metrics such as kubelet_volume_stats_capacity_bytes, but there are some workarounds. Unfortunately this is a bit fragmented in Kubernetes today: PVCs may have capacity and usage metrics, depending on the volume provider, and it seems that CSI-based volumes don't have these at all. We can do this on a best-effort basis, but it is easy to quickly hit cases where these metrics are not available.
First, you can write your own script that, every time values of a metric such as container_fs_usage_bytes are gathered, computes the difference between the capacity measured beforehand and the container's usage in bytes (the container_fs_usage_bytes metric will be helpful here).
Second, Prometheus is quite a popular solution, but to monitor capacity, and especially disk usage, you can use Heapster. It is about to "retire", but for this special case you can still use it, though you will have to implement a script as well. Take a look at the repository: heapster-memory
"res.Containers = append(res.Containers, metrics.ContainerMetrics{Name: c.Name, Usage: usage})"
I hope it helps.
Per-PVC disk space usage in percentage can be determined with the following query:
100 * sum(kubelet_volume_stats_used_bytes) by (persistentvolumeclaim)
/
sum(kubelet_volume_stats_capacity_bytes) by (persistentvolumeclaim)
The kubelet_volume_stats_used_bytes metric shows per-PVC disk space usage in bytes.
The kubelet_volume_stats_capacity_bytes metric shows per-PVC disk size in bytes.
I have a job like the following in my prom config:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics
With this job in place I see the following metrics available in Prometheus:
kubelet_volume_stats_available_bytes
kubelet_volume_stats_capacity_bytes
kubelet_volume_stats_inodes
kubelet_volume_stats_inodes_free
kubelet_volume_stats_inodes_used
kubelet_volume_stats_used_bytes
More here: https://github.com/google/cadvisor/issues/1702