Can someone guide me through the configuration of auto-discovery for K8s? The Prometheus server is outside of the cluster. I tried Service Discovery with Kubernetes, and someone mentioned in this discussion:

> I'm not yet enough of a K8s expert to explain all the details here, but fundamentally it's perfectly possible to run Prometheus outside of the cluster (and required for things like redundant cross-cluster meta-monitoring). Cf. the `in_cluster` config option in http://prometheus.io/docs/operating/configuration/#kubernetes-sd-configurations-kubernetes_sd_config. You need to jump through certificate hoops if you run it outside.
So, I made a simple configuration:

```yaml
- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - # The API server addresses. In a cluster this will normally be
    # `https://kubernetes.default.svc`. Supports multiple HA API servers.
    api_servers:
    - https://xxx.xx.xx.xx
    # Run in cluster. This will use the automounted CA certificate and bearer
    # token file at /var/run/secrets/kubernetes.io/serviceaccount/ in the pod.
    in_cluster: false
    # Optional HTTP basic authentication information.
    basic_auth:
      username: prometheus
      password: secret
    # Retry interval between watches if they disconnect.
    retry_interval: 5s
```
This gets me `unknown fields in kubernetes_sd_config: api_servers, in_cluster, retry_interval` or some other indentation errors.
In the sample configuration they mention `ca_file:`. How do I get that certificate file from K8s, or is there any way to specify the K8s config file (`~/.kube/config`) instead?
By digging through the source code I figured out that Prometheus always uses the in-cluster config if no `api_server` is provided in the config (discovery/kubernetes/kubernetes.go#L90-L96). Somehow the docs don't say anything about the Kubernetes configuration parameters, but the source code does (config/config.go#L1026-L1037). Therefore there is no list named `api_servers`, but a single parameter named `api_server`.
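So when Prometheus runs inside the cluster, you can simply omit `api_server` and it will fall back to the automounted service account token and CA. A minimal sketch of that case (note that newer versions also require a `role` field; `node` is assumed here):

```yaml
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  # No api_server set, so Prometheus uses the in-cluster config
  # (token and CA from /var/run/secrets/kubernetes.io/serviceaccount/).
  - role: node
```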
So your config should look like this (untested):

```yaml
- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - # The API server address (a single value, not a list).
    api_server: https://xxx.xx.xx.xx
    # Optional HTTP basic authentication information.
    basic_auth:
      username: prometheus
      password: secret
    # Specify the CA.
    tls_config:
      ca_file: /path/to/ca.crt
      ## If the actual CA file isn't available you need to disable verification:
      # insecure_skip_verify: true
```
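Regarding the `ca_file` question: at least in the Prometheus version discussed here there is no option to point it at a kubeconfig, but the cluster CA is usually already recorded in that file, so you can lift it out. A sketch of the relevant excerpt of `~/.kube/config` (cluster name and paths are placeholders):

```yaml
clusters:
- name: my-cluster                      # placeholder name
  cluster:
    server: https://xxx.xx.xx.xx
    # Either a plain file path, which you can reuse directly as ca_file ...
    certificate-authority: /path/to/ca.crt
    # ... or (more commonly) inline base64 data; decode it into a file
    # and point ca_file at the result.
    certificate-authority-data: LS0tLS1CRUdJTi...
```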
I don't know where the `retry_interval` parameter comes from, but AFAIK this isn't a Kubernetes config parameter and it's also not part of the Prometheus config.
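For completeness: basic auth is not the only way through the certificate hoops. `kubernetes_sd_config` also accepts a bearer token, which is what service accounts use. A sketch, under the assumption that you have exported a service account token from the cluster (the file paths are placeholders):

```yaml
- job_name: 'kubernetes'
  kubernetes_sd_configs:
  - api_server: https://xxx.xx.xx.xx
    role: node
    # Token of a service account with read access to the relevant
    # API groups, copied out of the cluster.
    bearer_token_file: /etc/prometheus/k8s-token
    tls_config:
      ca_file: /path/to/ca.crt
```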
With the help of @svenwltr's answer I have created a Docker image which we can launch in a K8s cluster. Check my repo.
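For reference, launching such an image inside the cluster only needs a standard Deployment. A minimal sketch with placeholder names (the actual image and config are in the linked repo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: example/prometheus-k8s:latest   # placeholder image name
        ports:
        - containerPort: 9090                  # Prometheus web UI/API
```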