Prometheus and Node Exporter architecture

I have spent 3 days reading about this, even configuring a set of containers to test things, but I still have some doubts.

I understand that the architecture of Prometheus + Node exporter is based on:

  • Node exporter knows how to extract metrics. Those are exposed over HTTP, e.g. :9201/metrics (a sample of that output is shown after this list)
  • Prometheus polls those HTTP endpoints (the node exporter ones) every X seconds and stores the metrics. It also provides another HTTP endpoint for graph/console visualization and querying.
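
For illustration, this is roughly what a node exporter answers on its /metrics endpoint, in the Prometheus text exposition format (the metric name is real; the values are made up):

    # HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
    # TYPE node_cpu_seconds_total counter
    node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67
    node_cpu_seconds_total{cpu="0",mode="user"} 890.12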

Question 1:

Assume you want CPU metrics every 15s, HDD metrics every 5m, network metrics every 1m, and process metrics every 30s.

Since it is Prometheus that decides the scraping interval, how can it be configured to scrape just those values?

Question 2:

Assume you want 1 Prometheus instance and 3 node exporters on different public servers. I don't see anything regarding the node exporter and its security: the HTTP endpoint is public.

How can I securely query the metrics from my 3 servers?

Question 3:

I don't know if I am missing something, but compare this to Telegraf, for example: the latter sends the metrics to a database, so Telegraf acts as the "node exporter" and I only need to secure the database connection (the only exposed port).

Can node exporter be configured to send a set of metrics every X time to the Prometheus server (so I don't have to expose a public port on every public server, just on the Prometheus server)? I understand "pushgateway" is for that? How do I change the node exporter behavior?

Do you recommend any other architecture that could suit my needs? (1 master, many slaves to query metrics)

asked Nov 22 '19 by user3819881

1 Answer

Question 1

Since it is Prometheus that decides the scraping interval, how can it be configured to scrape just those values?

You can have a different job configured for each interval, each with its own scrape_interval and its own HTTP URL parameters (params). Beyond that, it depends on the features offered by the exporter.

In the case of node_exporter, you can pass a list of collectors, as sketched in the config after this list:

  • cpu every 15s (job: node_cpu)
  • process every 30s (job: node_process)
  • (well you get the idea) ...
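
A sketch of the corresponding Prometheus configuration (the target host is a placeholder; :9100 is node_exporter's default port). Each job uses node_exporter's collect[] URL parameter to restrict the scrape to specific collectors:

    # prometheus.yml (sketch)
    scrape_configs:
      - job_name: node_cpu
        scrape_interval: 15s
        params:
          collect[]:            # node_exporter only runs the listed collectors
            - cpu
        static_configs:
          - targets: ['myserver:9100']   # placeholder target

      - job_name: node_process
        scrape_interval: 30s
        params:
          collect[]:
            - processes         # this collector must be enabled on node_exporter
        static_configs:
          - targets: ['myserver:9100']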

Note that a scrape interval of 5m is likely too big because of data staleness: by default, an instant vector only looks back 5 minutes for samples, so you run the risk of getting no data at all for these series. A scrape interval of 1m is already big and has no real impact on performance.

Question 2

How can I securely query the metrics from my 3 servers?

The original assumption of Prometheus is that you would use a private network. In the case of a public network, you'll need some kind of proxy.

Personally, I have used exporter_exporter on a classical architecture.
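
On the Prometheus side, scraping through such a proxy could look like the sketch below (this assumes a TLS-terminating proxy with basic auth in front of each node exporter; hostnames, credentials and file paths are placeholders):

    scrape_configs:
      - job_name: node
        scheme: https                             # go through the TLS proxy
        tls_config:
          ca_file: /etc/prometheus/proxy-ca.crt   # CA that signed the proxy certificates
        basic_auth:
          username: prometheus
          password_file: /etc/prometheus/scrape-password
        static_configs:
          - targets:
              - 'server1.example.com:9100'
              - 'server2.example.com:9100'
              - 'server3.example.com:9100'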

Question 3

Can node exporter be configured to send a set of metrics every X time to the Prometheus server (so I don't have to expose a public port on every public server, just on the Prometheus server)? I understand "pushgateway" is for that? How do I change the node exporter behavior?

No, Prometheus is a pull-based architecture: you will need a URI accessible by Prometheus on each service you want to monitor. I imagine you could reuse components from another monitoring solution and use an ad hoc exporter like the collectd exporter.

The Pushgateway is intended for short-lived jobs that cannot wait to be scraped by Prometheus. This is a specific use case and the general consensus is not to abuse it.
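
For completeness, a short-lived job pushes to the Pushgateway over plain HTTP; a minimal sketch (the Pushgateway listens on :9091 by default; host and metric name are made up):

    echo "backup_last_run_timestamp $(date +%s)" | \
      curl --data-binary @- http://pushgateway.example.com:9091/metrics/job/nightly_backup

Prometheus then scrapes the Pushgateway itself, like any other target.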

answered Sep 27 '22 by Michael Doubez