
Prometheus: cannot export metrics from connected Kubernetes cluster

The issue: I have a Prometheus server running outside of a Kubernetes cluster, and I want to scrape metrics from that remote cluster.

I took the sample config from the Prometheus GitHub repo and modified it a little. Here is my scrape jobs config:

  - job_name: 'kubernetes-apiservers'

    scheme: http

    kubernetes_sd_configs:
    - role: endpoints
      api_server: http://cluster-manager.dev.example.net:8080

    bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
    tls_config:
      insecure_skip_verify: true

    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;http

  - job_name: 'kubernetes-nodes'

    scheme: http

    kubernetes_sd_configs:
    - role: node
      api_server: http://cluster-manager.dev.example.net:8080

    bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
    tls_config:
      insecure_skip_verify: true

    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)

  - job_name: 'kubernetes-service-endpoints'

    scheme: http

    kubernetes_sd_configs:
    - role: endpoints
      api_server: http://cluster-manager.dev.example.net:8080

    bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
    tls_config:
      insecure_skip_verify: true

    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: (.+)(?::\d+);(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name

  - job_name: 'kubernetes-services'

    scheme: http

    metrics_path: /probe
    params:
      module: [http_2xx]

    kubernetes_sd_configs:
    - role: service
      api_server: http://cluster-manager.dev.example.net:8080

    bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
    tls_config:
      insecure_skip_verify: true

    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
      action: keep
      regex: true
    - source_labels: [__address__]
      target_label: __param_target
    - target_label: __address__
      replacement: blackbox
    - source_labels: [__param_target]
      target_label: instance
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_service_namespace]
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      target_label: kubernetes_name

  - job_name: 'kubernetes-pods'

    scheme: http

    kubernetes_sd_configs:
    - role: pod
      api_server: http://cluster-manager.dev.example.net:8080

    bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
    tls_config:
      insecure_skip_verify: true

    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: (.+):(?:\d+);(\d+)
      replacement: ${1}:${2}
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name

I don't use a TLS connection to the API server, so I want to disable certificate verification.

When I curl the /metrics URL from the Prometheus host, it prints the metrics.
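
The check looked roughly like this, using the same API server endpoint as in the config above (/api/v1/pods is one of the standard discovery routes Prometheus queries):

    # the API server's own metrics
    curl -s http://cluster-manager.dev.example.net:8080/metrics | head
    # the discovery API answers as well
    curl -s http://cluster-manager.dev.example.net:8080/api/v1/pods | head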

Finally I connected to the cluster, but the jobs never come up, so Prometheus doesn't expose the relabeled metrics.

What I see in the console:

[screenshots omitted]

Targets state:

[screenshot omitted]

I also checked the Prometheus debug log. It looks like Prometheus receives all the necessary information and the requests complete successfully.

time="2017-01-25T06:58:04Z" level=debug msg="pod update" kubernetes_sd=pod source="pod.go:66" tg="&config.TargetGroup{Targets:[]model.LabelSet{model.LabelSet{\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\", \"__address__\":\"10.32.0.2:10053\", \"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10053\", \"__meta_kubernetes_pod_container_port_name\":\"dns-local\"}, model.LabelSet{\"__address__\":\"10.32.0.2:10053\", \"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10053\", \"__meta_kubernetes_pod_container_port_name\":\"dns-tcp-local\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\"}, model.LabelSet{\"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10055\", \"__meta_kubernetes_pod_container_port_name\":\"metrics\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:10055\"}, model.LabelSet{\"__address__\":\"10.32.0.2:53\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq\", \"__meta_kubernetes_pod_container_port_number\":\"53\", \"__meta_kubernetes_pod_container_port_name\":\"dns\", \"__meta_kubernetes_pod_container_port_protocol\":\"UDP\"}, model.LabelSet{\"__address__\":\"10.32.0.2:53\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq\", \"__meta_kubernetes_pod_container_port_number\":\"53\", \"__meta_kubernetes_pod_container_port_name\":\"dns-tcp\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\"}, model.LabelSet{\"__meta_kubernetes_pod_container_port_number\":\"10054\", \"__meta_kubernetes_pod_container_port_name\":\"metrics\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:10054\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq-metrics\"}, model.LabelSet{\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:8080\", \"__meta_kubernetes_pod_container_name\":\"healthz\", \"__meta_kubernetes_pod_container_port_number\":\"8080\", \"__meta_kubernetes_pod_container_port_name\":\"\"}}, Labels:model.LabelSet{\"__meta_kubernetes_pod_ready\":\"true\", \"__meta_kubernetes_pod_annotation_kubernetes_io_created_by\":\"{\\\"kind\\\":\\\"SerializedReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"ReplicaSet\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"name\\\":\\\"kube-dns-2924299975\\\",\\\"uid\\\":\\\"fa808d95-d7d9-11e6-9ac9-02dfdae1a1e9\\\",\\\"apiVersion\\\":\\\"extensions\\\",\\\"resourceVersion\\\":\\\"89\\\"}}\\n\", \"__meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_affinity\":\"{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"beta.kubernetes.io/arch\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"amd64\\\"]}]}]}}}\", \"__meta_kubernetes_pod_name\":\"kube-dns-2924299975-dksg5\", \"__meta_kubernetes_pod_ip\":\"10.32.0.2\", \"__meta_kubernetes_pod_label_k8s_app\":\"kube-dns\", \"__meta_kubernetes_pod_label_pod_template_hash\":\"2924299975\", \"__meta_kubernetes_pod_label_tier\":\"node\", \"__meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations\":\"[{\\\"key\\\":\\\"dedicated\\\",\\\"value\\\":\\\"master\\\",\\\"effect\\\":\\\"NoSchedule\\\"}]\", \"__meta_kubernetes_namespace\":\"kube-system\", \"__meta_kubernetes_pod_node_name\":\"cluster-manager.dev.example.net\", \"__meta_kubernetes_pod_label_component\":\"kube-dns\", 
\"__meta_kubernetes_pod_label_kubernetes_io_cluster_service\":\"true\", \"__meta_kubernetes_pod_host_ip\":\"54.194.166.39\", \"__meta_kubernetes_pod_label_name\":\"kube-dns\"}, Source:\"pod/kube-system/kube-dns-2924299975-dksg5\"}" 
time="2017-01-25T06:58:04Z" level=debug msg="pod update" kubernetes_sd=pod source="pod.go:66" tg="&config.TargetGroup{Targets:[]model.LabelSet{model.LabelSet{\"__address__\":\"10.43.0.0\", \"__meta_kubernetes_pod_container_name\":\"bot\"}}, Labels:model.LabelSet{\"__meta_kubernetes_pod_host_ip\":\"172.17.101.25\", \"__meta_kubernetes_pod_label_app\":\"bot\", \"__meta_kubernetes_namespace\":\"default\", \"__meta_kubernetes_pod_name\":\"bot-272181271-pnzsz\", \"__meta_kubernetes_pod_ip\":\"10.43.0.0\", \"__meta_kubernetes_pod_node_name\":\"ip-172-17-101-25\", \"__meta_kubernetes_pod_annotation_kubernetes_io_created_by\":\"{\\\"kind\\\":\\\"SerializedReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"ReplicaSet\\\",\\\"namespace\\\":\\\"default\\\",\\\"name\\\":\\\"bot-272181271\\\",\\\"uid\\\":\\\"c297b3c2-e15d-11e6-a28a-02dfdae1a1e9\\\",\\\"apiVersion\\\":\\\"extensions\\\",\\\"resourceVersion\\\":\\\"1465127\\\"}}\\n\", \"__meta_kubernetes_pod_ready\":\"true\", \"__meta_kubernetes_pod_label_pod_template_hash\":\"272181271\", \"__meta_kubernetes_pod_label_version\":\"v0.1\"}, Source:\"pod/default/bot-272181271-pnzsz\"}" 

Prometheus fetches the updates, but doesn't convert them into targets and metrics. I've racked my brain trying to figure out why it behaves this way, so please help if you can spot where the mistake might be.

asked Jan 25 '17 by cazorla19


People also ask

How does Prometheus scrape Kubernetes metrics?

Prometheus uses service discovery to discover targets to scrape. Kubernetes clusters are equipped with labels, annotations, and a mechanism for tracking status and changes for different elements. To discover targets, Prometheus needs to use the Kubernetes API.

Can Prometheus monitor multiple Kubernetes clusters?

Yes, with the Prometheus/Grafana combination, you can monitor multiple Kubernetes clusters and multiple providers.


2 Answers

If you want to monitor a Kubernetes cluster from an external Prometheus server, I would suggest setting up a Prometheus federation topology:

  • Inside the K8s cluster, install node-exporter pods and a Prometheus instance with short-term storage.
  • Expose the Prometheus service outside of the K8s cluster, either via an ingress controller (LB) or a node port; a sketch follows this list. You can protect this endpoint with HTTPS + basic authentication.
  • Configure the central Prometheus to scrape metrics from the above endpoint with proper authentication and labels.
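
For example, a NodePort service in front of the in-cluster Prometheus could look roughly like this (namespace, labels and port numbers are illustrative, adjust them to your deployment):

    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus-external
      namespace: monitoring        # assumes Prometheus runs in a "monitoring" namespace
    spec:
      type: NodePort
      selector:
        app: prometheus            # must match the labels on your Prometheus pods
      ports:
      - port: 9090                 # Prometheus' default port
        targetPort: 9090
        nodePort: 30990            # exposed on every node's IP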

This is a scalable solution. You can monitor as many K8s clusters as you want until you reach the capacity of the central Prometheus; then you can add another central Prometheus instance to monitor the others.
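
On the central Prometheus, the scrape job is then a plain pull from the /federate endpoint. A minimal sketch, assuming the in-cluster instance is exposed at prometheus.k8s.example.net behind HTTPS + basic auth (hostname and credentials are placeholders):

  - job_name: 'k8s-dev-federation'
    honor_labels: true          # keep the labels that were attached inside the cluster
    metrics_path: /federate
    params:
      'match[]':
      - '{job=~".+"}'           # pull everything; narrow the selector in production
    scheme: https
    basic_auth:
      username: prom
      password: changeme
    static_configs:
    - targets: ['prometheus.k8s.example.net:443']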

answered Oct 01 '22 by canhnt


In the end I came to the conclusion that it's not trivial to set up Kubernetes cluster monitoring from outside the cluster, because the Kubernetes architecture assumes that all infrastructure stays within one local network, so every workaround is going to be messy. I also hit a wall trying to debug why the configured jobs for the Kubernetes roles (nodes, pods, services and endpoints) don't even show up on the targets status page. I may be wrong, but I didn't find a way to debug this issue in Prometheus.

My solution for monitoring a Kubernetes cluster from outside was kube-api-exporter: a fairly simple Python script that collects metrics about DaemonSets, deployments and pods and exposes them at a URL for Prometheus to fetch. I'd recommend this solution to anyone who gets stuck with this sort of integration.
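
The idea, reduced to a very small sketch (metric name and port are illustrative rather than the actual kube-api-exporter code; it assumes the kubernetes and prometheus_client Python packages):

    # Minimal exporter sketch: read deployment state from the Kubernetes API
    # and expose it on a /metrics endpoint for Prometheus to scrape.
    import time
    from kubernetes import client, config
    from prometheus_client import Gauge, start_http_server

    DEPLOY_REPLICAS = Gauge(
        'kube_deployment_replicas',            # illustrative metric name
        'Desired replicas per deployment',
        ['namespace', 'deployment'])

    def collect(apps_api):
        # one gauge sample per deployment in the cluster
        for d in apps_api.list_deployment_for_all_namespaces().items:
            DEPLOY_REPLICAS.labels(
                namespace=d.metadata.namespace,
                deployment=d.metadata.name).set(d.spec.replicas or 0)

    if __name__ == '__main__':
        config.load_kube_config()              # or load_incluster_config() inside a pod
        apps_api = client.AppsV1Api()
        start_http_server(9300)                # serves /metrics on port 9300
        while True:
            collect(apps_api)
            time.sleep(30)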

I also started fetching metrics from etcd. It's nice that etcd provides Prometheus-style metrics out of the box.
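
The scrape job for etcd is then an ordinary static config (host is a placeholder; etcd serves Prometheus metrics at /metrics on its client port):

  - job_name: 'etcd'
    static_configs:
    - targets: ['etcd-1.dev.example.net:2379']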

P.S.: thanks to FuzzyAmi for the help.

answered Oct 01 '22 by cazorla19