
prometheus node-exporter on kubernetes

I have deployed Prometheus on a Kubernetes cluster (EKS). I was able to successfully scrape Prometheus and Traefik with the following:

scrape_configs:
  # A scrape configuration containing exactly one endpoint to scrape:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus.kube-monitoring.svc.cluster.local:9090']

  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik.kube-system.svc.cluster.local:8080']

But node-exporter, deployed as a DaemonSet with the following definition, is not exposing the node metrics.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      name: node-exporter
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.18.1
        args:
        - "--path.procfs=/host/proc"
        - "--path.sysfs=/host/sys"
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
        resources:
          requests:
            memory: 30Mi
            cpu: 100m
          limits:
            memory: 50Mi
            cpu: 200m
        volumeMounts:
        - name: proc
          readOnly:  true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys

and with the following scrape_configs in Prometheus:

scrape_configs:
  - job_name: 'kubernetes-nodes'
    scheme: http
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.kube-monitoring.svc.cluster.local:9100
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics 

I also tried to curl http://localhost:9100/metrics from one of the containers, but got curl: (7) Failed to connect to localhost port 9100: Connection refused.
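Since the DaemonSet uses hostNetwork: true, the exporter binds in the node's network namespace, so localhost from an ordinary pod refers to that pod's own loopback rather than the node. A sketch of how to check the exporter on the node address instead (the node IP is a placeholder):

# With hostNetwork: true, the pod IP reported by Kubernetes is the node IP.
kubectl get pods -n kube-monitoring -l app=node-exporter -o wide

# Curl the exporter on a node IP from inside the cluster;
# <node-ip> is a placeholder taken from the output above.
curl http://<node-ip>:9100/metrics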

What am I missing here in the configuration?

After the suggestion to install Prometheus with Helm, I installed it on a test cluster and compared my original configuration with the Helm-installed Prometheus.

The following pods were running:

NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          4m33s
prometheus-grafana-66c7bcbf4b-mh42x                      2/2     Running   0          4m38s
prometheus-kube-state-metrics-7fbb4697c-kcskq            1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-6bf9f                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-gbrzr                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-j6l9h                1/1     Running   0          4m38s
prometheus-prometheus-oper-operator-648f9ddc47-rxszj     1/1     Running   0          4m38s
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   0          4m23s

I didn't find any configuration for node-exporter in the pod prometheus-prometheus-prometheus-oper-prometheus-0 at /etc/prometheus/prometheus.yml.
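This appears to be expected with the Prometheus Operator: it renders the scrape configuration into a Secret mounted into the Prometheus pod rather than shipping a static prometheus.yml. A sketch of how to inspect it, assuming the Secret name follows the Prometheus resource name from the pod list above (the key is prometheus.yaml.gz in recent operator versions, plain prometheus.yaml in older ones):

# Dump the generated config from the operator-managed Secret;
# the Secret name here is an assumption based on the pod names above.
kubectl get secret prometheus-prometheus-prometheus-oper-prometheus \
  -o jsonpath='{.data.prometheus\.yaml\.gz}' | base64 -d | gunzip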

asked Jul 09 '19 by roy
2 Answers

The previous advice to use Helm is sound; I would also recommend that.

Regarding your issue: the thing is that you are not scraping the nodes directly; you are using node-exporter for that. So role: node is incorrect; you should use role: endpoints instead. For that you also need to create a Service covering all pods of your DaemonSet, as sketched below.
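A minimal sketch of such a Service, assuming the app: node-exporter label and port 9100 from your DaemonSet (the metadata names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: kube-monitoring
  labels:
    app: node-exporter
spec:
  clusterIP: None          # headless: endpoints resolve straight to the pod (node) IPs
  selector:
    app: node-exporter     # must match the DaemonSet's pod labels
  ports:
  - name: metrics          # endpoint port name the scrape config can match on
    port: 9100
    targetPort: 9100

With that in place, the keep rules in the config below select only endpoints whose Service carries the label app=exporter-node and whose port is named metrics; adjust the label regex to match whatever your own Service uses.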

Here is a working example from my environment (installed by Helm):

- job_name: monitoring/kube-prometheus-exporter-node/0
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - monitoring
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: exporter-node
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: metrics
    action: replace
answered Oct 17 '22 by Vasili Angapov

How did you deploy Prometheus? Whenever I used the Helm chart (https://github.com/helm/charts/tree/master/stable/prometheus), the node-exporter was deployed as well. Maybe this is a simpler solution; a sketch follows below.
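A sketch of such an install, assuming Helm 3 and the (now archived) stable chart repository; the release name and namespace are illustrative:

# Add the archived stable repo that hosted the chart linked above
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Install the chart; "prometheus" is an illustrative release name
helm install prometheus stable/prometheus \
  --namespace kube-monitoring --create-namespace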

answered Oct 17 '22 by alexzimmer96