I'm using this Prometheus Helm chart.
I was wondering if it is possible to set up the Prometheus Operator to automatically monitor every service in the cluster or namespace without having to create a ServiceMonitor for every service.
With the current setup, when I want to monitor a service, I have to create a ServiceMonitor with the label release: prometheus.
Edit:
Service with the monitoring: "true" label:
apiVersion: v1
kind: Service
metadata:
  name: issue-manager-service
  labels:
    app: issue-manager-app
    monitoring: "true"
spec:
  selector:
    app: issue-manager-app
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 7200
"Catch-All" Servicemonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: service-monitor-scraper
  labels:
    release: prometheus
spec:
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
  jobLabel: monitoring
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      monitoring: "true"
ServiceMonitors and PodMonitors are both pseudo-CRDs that map to the scrape configuration of the Prometheus custom resource. These configuration objects declaratively specify the endpoints that Prometheus will scrape metrics from.
A ServiceMonitor describes an application you wish to scrape metrics from within Kubernetes; the operator acts on the ServiceMonitors you define and automatically builds the required Prometheus configuration.
The Prometheus custom resource definition (CRD) declaratively defines a desired Prometheus setup to run in a Kubernetes cluster. It provides options to configure the number of replicas, persistent storage, and the Alertmanagers to which the deployed Prometheus instances send alerts.
In short, if you are running the Prometheus Operator as part of your monitoring stack (e.g. kube-prometheus-stack), you can have a custom Service monitored by defining a ServiceMonitor: an object that specifies which service endpoints Prometheus should scrape and at what interval.
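For context, a minimal Prometheus custom resource might look roughly like the sketch below (the names, replica count, and storage size are illustrative, not taken from any particular chart):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 2                          # number of Prometheus pods
  serviceAccountName: prometheus
  serviceMonitorSelector:              # which ServiceMonitors this instance picks up
    matchLabels:
      release: prometheus
  serviceMonitorNamespaceSelector: {}  # {} = look in all namespaces
  alerting:
    alertmanagers:
      - namespace: monitoring
        name: alertmanager-operated
        port: web
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 10Gi              # persistent storage for the TSDB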
Only if you have a common label on all services, for example:

org: "my-company"
# or
monitoring: "true"
# or
app.kubernetes.io/managed-by: "Helm"   # <- in most cases this covers all services

Then you define a single, cross-namespace ServiceMonitor that covers all the labeled services:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: common-monitor
  namespace: monitoring
spec:
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
  jobLabel: monitoring
  namespaceSelector:
    any: true              # <- important
  selector:
    matchLabels:
      monitoring: "true"   # <- should match the common label you've chosen
Then, to make sure this ServiceMonitor is discovered by the Prometheus Operator, you either (see the values sketch after this list):
- create the ServiceMonitor via the chart's built-in operator template: https://github.com/prometheus-community/helm-charts/blob/4164ad5fdb6a977f1aba7b65f4e65582d3081528/charts/kube-prometheus-stack/values.yaml#L2008
- or define a serviceMonitorSelector that points to your ServiceMonitor: https://github.com/prometheus-community/helm-charts/blob/4164ad5fdb6a977f1aba7b65f4e65582d3081528/charts/kube-prometheus-stack/values.yaml#L1760
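With kube-prometheus-stack, those two options correspond roughly to the following Helm values (a sketch only; double-check the field names against the values.yaml of the chart version you are running):

prometheus:
  # Option 1: let the chart render the catch-all ServiceMonitor for you
  additionalServiceMonitors:
    - name: common-monitor
      jobLabel: monitoring
      namespaceSelector:
        any: true
      selector:
        matchLabels:
          monitoring: "true"
      endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
  # Option 2: control which ServiceMonitors this Prometheus instance discovers
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false   # pick up ServiceMonitors regardless of the release label
    # or keep label-based selection and label your ServiceMonitor accordingly:
    # serviceMonitorSelector:
    #   matchLabels:
    #     release: prometheus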
This additional explicit linkage between the Prometheus Operator and the ServiceMonitor is intentional: if you run two Prometheus instances in your cluster (say Infra and Product), it lets you control which ServiceMonitors each instance pulls into its scrape configuration.
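For illustration, two hypothetical Prometheus custom resources (the names and team labels here are made up) could each select a disjoint set of ServiceMonitors:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: infra
  namespace: monitoring
spec:
  serviceMonitorSelector:
    matchLabels:
      team: infra        # only ServiceMonitors labelled team: infra
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: product
  namespace: monitoring
spec:
  serviceMonitorSelector:
    matchLabels:
      team: product      # only ServiceMonitors labelled team: product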
From your question, it sounds like you already have a serviceMonitorSelector based on the release: prometheus label - try adding that label to your catch-all ServiceMonitor as well.
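If you go the selector route, the cross-namespace ServiceMonitor from above would simply gain that label (assuming your Helm release is actually named prometheus):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: common-monitor
  namespace: monitoring
  labels:
    release: prometheus    # <- matches the operator's serviceMonitorSelector
spec:
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
  jobLabel: monitoring
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      monitoring: "true"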