I am implementing Prometheus to monitor the health of my Kubernetes system, which has multiple clusters and namespaces.
My goal is to monitor only one specific namespace, called default,
and only my own pods, excluding the Prometheus pods and other monitoring details.
I tried to restrict the namespace in kubernetes_sd_configs
like this:
kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
        - 'default'
But I am still getting metrics that I don't need.
Here is my configMap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: default
data:
  prometheus.rules: |-
    groups:
      - name: devopscube demo alert
        rules:
          - alert: High Pod Memory
            expr: sum(container_memory_usage_bytes) > 1
            for: 1m
            labels:
              severity: slack
            annotations:
              summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
        - scheme: http
          static_configs:
            - targets:
                - "alertmanager.monitoring.svc:9093"
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - 'default'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
            namespaces:
              names:
                - 'default'
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - 'default'
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
      - job_name: 'kube-state-metrics'
        static_configs:
          - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
            namespaces:
              names:
                - 'default'
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - 'default'
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name
For example, I don't want details like the ones below to be monitored:
container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/system.slice/kubelet.service",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux"}
container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/system.slice/docker.service",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux"}
container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/kubepods/podda7b74d8-b611-4dff-885c-70ea40091b7d",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux",namespace="kube-system",pod="default-http-backend-59f7ff8999-ktqnl",pod_name="default-http-backend-59f7ff8999-ktqnl"}
Create a Namespace & ClusterRole
Prometheus uses the Kubernetes APIs to read all the available metrics from Nodes, Pods, Deployments, etc. For this reason, we need to create an RBAC policy with read access to the required API groups and bind the policy to the monitoring namespace.
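As a rough sketch, a minimal ClusterRole and ClusterRoleBinding for this might look like the following; the monitoring namespace and the prometheus service account name are assumptions, so adjust them to your setup:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # Read access to the resources Prometheus discovers and scrapes
  - apiGroups: [""]
    resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  # Allow scraping the API server's /metrics endpoint
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus        # assumed service account name
    namespace: monitoring   # assumed monitoring namespace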
Yes, with the Prometheus/Grafana combination, you can monitor multiple Kubernetes clusters and multiple providers.
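One common pattern, sketched below under the assumption that each cluster runs its own Prometheus, is to have a central Prometheus federate a subset of series from the per-cluster instances; the prometheus-cluster-a.example.com target is a placeholder:
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{job=~"kubernetes-.*"}'   # pull only series from the Kubernetes jobs
    static_configs:
      - targets:
          - 'prometheus-cluster-a.example.com:9090'  # placeholder per-cluster Prometheus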
The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the HPA and VPA. You can also view these metrics using the kubectl top command. The metrics-server uses the Kubernetes API to track nodes and pods in your cluster.
Prometheus pulls (scrapes) real-time metrics from application services and hosts by sending HTTP requests to Prometheus metrics exporters. It then compresses and stores them in a time-series database on a regular cadence. The new Dynatrace Kubernetes operator can collect metrics exposed by your exporters.
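Note that in your config the kubernetes-pods job only keeps pods whose prometheus.io/scrape annotation is true, so one way to scrape just your own pods is to annotate them and leave everything else unannotated. A minimal sketch, where the my-app names and port 8080 are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # placeholder pod name
  namespace: default
  annotations:
    prometheus.io/scrape: "true"    # matched by the keep rule in the kubernetes-pods job
    prometheus.io/path: "/metrics"  # optional; overrides __metrics_path__
    prometheus.io/port: "8080"      # optional; rewrites __address__ to this port
spec:
  containers:
    - name: my-app
      image: my-app:latest  # placeholder image
      ports:
        - containerPort: 8080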
If you just want to prevent certain metrics from being ingested (i.e. prevent them from being saved in the Prometheus database), you can use metric relabelling to drop them:
- job_name: kubernetes-cadvisor
  metric_relabel_configs:
    - source_labels: [__name__]
      regex: container_memory_rss
      action: drop
Note that the kubernetes-cadvisor job uses the node service discovery role. This discovers Kubernetes nodes, which are non-namespaced resources, so your namespace restriction to default might not have any effect in this case.
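If the goal is instead to keep cAdvisor series only for containers in the default namespace, you could filter on the namespace label visible in your sample series above; node-level cgroup series such as /system.slice/... carry no namespace label at all, so a keep rule drops them as well. A sketch along these lines, to be verified against your actual label set:
- job_name: kubernetes-cadvisor
  metric_relabel_configs:
    - source_labels: [namespace]
      regex: default   # keep only series labelled with the default namespace
      action: keep     # series without a namespace label (system cgroups) are dropped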
Hey, just found this in the docs:
# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]
right under https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ingress