Background
I have installed Prometheus on my Kubernetes cluster (hosted on Google Container Engine) using the Helm chart for Prometheus.
The Problem
I cannot figure out how to add scrape targets to the Prometheus server. The prometheus.io site describes how I can mount a prometheus.yml file (which contains a list of scrape targets) into a Prometheus Docker container -- I have done this locally and it works. However, I don't know how to specify scrape targets for a Prometheus setup installed via Kubernetes-Helm. Do I need to add a volume to the Prometheus server pod that contains the scrape targets, and therefore update the YAML files generated by Helm?
I am also not clear on how to expose metrics in a Kubernetes Pod -- do I need to forward a particular port?
Prometheus uses service discovery to discover targets to scrape. Kubernetes clusters are equipped with labels, annotations, and a mechanism for tracking status and changes for different elements. To discover targets, Prometheus needs to use the Kubernetes API.
First of all, you need to create a ServiceMonitor, which is a custom Kubernetes resource. Just create a servicemonitor.yaml in the manifests folder. Since we are deploying on Kubernetes, we don't have direct access to the prometheus.yml file to list the targets, so we create the ServiceMonitor, which in turn adds the target to the scrape_configs in prometheus.yml. You can read more about ServiceMonitors in the Prometheus Operator documentation.
This is a sample servicemonitor.yaml file for exposing Flask app metrics in Prometheus.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-metrics
  namespace: prometheus      # namespace where Prometheus is running
  labels:
    app: flask-app
    release: prom            # name of the release
    # (VERY IMPORTANT: You need to know the correct release name by viewing
    # the ServiceMonitor of Prometheus itself. Without the correct name,
    # Prometheus cannot identify the metrics of the Flask app as a target.)
spec:
  selector:
    matchLabels:
      # Target app Service labels
      app: flask-app         # same as above
      release: prom          # same as above
  endpoints:
    - interval: 15s          # scrape interval
      path: /metrics         # path to scrape
      port: http             # named port in target app Service
  namespaceSelector:
    matchNames:
      - flask                # namespace where the app is running
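For context on why that release label matters: this setup assumes the Prometheus Operator (for example via the kube-prometheus-stack chart), and the Prometheus custom resource created by the chart only picks up ServiceMonitors whose labels match its serviceMonitorSelector. A minimal sketch of the relevant part, assuming the release is named prom (the resource name here is a placeholder; check yours with kubectl get prometheus -o yaml):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prom-prometheus        # actual name depends on your release/chart
spec:
  serviceMonitorSelector:
    matchLabels:
      release: prom            # only ServiceMonitors carrying this label are discovered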
Also add this release label (along with the app label) to the target app's Service and Deployment manifests, in both the metadata and the spec sections.
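As an illustration, the Service in front of the Flask app could look like the sketch below; the name, namespace, labels, and port number are assumptions carried over from the example above, and only the labels and the named port have to line up with the ServiceMonitor.

apiVersion: v1
kind: Service
metadata:
  name: flask-app
  namespace: flask             # must match namespaceSelector in the ServiceMonitor
  labels:
    app: flask-app             # matched by the ServiceMonitor's selector.matchLabels
    release: prom
spec:
  selector:
    app: flask-app             # pod labels set in the Deployment's pod template
  ports:
    - name: http               # must match the endpoint port name in the ServiceMonitor
      port: 5000
      targetPort: 5000         # port the Flask container actually listens on

The same app and release labels go on the Deployment as well, as mentioned above; the Service selector is what actually ties the Service to the pods.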
If you encounter a situation where Prometheus is showing the Target but not the endpoints, take a look at this: https://github.com/prometheus-operator/prometheus-operator/issues/3053
You need to add annotations to the service you want to monitor.
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
From the prometheus.yml in the chart:

prometheus.io/scrape: Only scrape services that have a value of true
prometheus.io/scheme: http or https
prometheus.io/path: override if the metrics path is not /metrics
prometheus.io/port: If the metrics are exposed on a different port

And yes, you need to expose the port with the metrics on the Service so Prometheus can access it.
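Putting that together, a minimal sketch of an annotated Service (the my-app name, labels, and port numbers are placeholders, not values from the chart):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'   # only needed if the path is not /metrics
    prometheus.io/port: '8080'       # only needed if metrics are on a non-default port
spec:
  selector:
    app: my-app
  ports:
    - name: metrics
      port: 8080
      targetPort: 8080               # container port where /metrics is served

With annotations like these, the chart's default kubernetes-service-endpoints scrape job should discover the Service automatically, with no edits to prometheus.yml.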