I used Prometheus to measure business metrics like:
# HELP items_waiting_total Total number of items in a queue
# TYPE items_waiting_total gauge
items_waiting_total 149
I would like to keep this data for the very long term (5 years of retention), and I don't need a high scrape frequency, so I set scrape_interval: "900s".
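For reference, this is roughly what that looks like in prometheus.yml (the job name and target are placeholders):
global:
  scrape_interval: "900s"
scrape_configs:
  - job_name: "business-metrics"
    static_configs:
      - targets: ["localhost:9100"]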
When I check the graph in Prometheus at 60s resolution, the series appears to flap, with gaps between the points, but that is not what is actually happening.
The question is, what is the maximum (recommended) scrape_interval in Prometheus?
Scraping, Evaluation and Alerting: Prometheus scrapes metrics from monitored targets at regular intervals, defined by scrape_interval (defaults to 1m). The scrape interval can be configured globally and then overridden per job. Scraped metrics are then stored persistently in its local storage.
In this case the global setting is to scrape every 15 seconds. The evaluation_interval option controls how often Prometheus will evaluate rules. Prometheus uses rules to create new time series and to generate alerts. The rule_files block specifies the location of any rules we want the Prometheus server to load.
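As a sketch, the kind of config block being described here looks like this (the rule file path is illustrative):
global:
  scrape_interval: 15s
  evaluation_interval: 15s
rule_files:
  - "rules/*.yml"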
Open up your Prometheus config and check the scrape_interval setting. We recommend sticking with the Prometheus default of 60s (one data point per minute, i.e. a DPM of 1) and adjusting per-job scrape intervals as needed.
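For example, a per-job override on top of the 60s global default might look like this (the job name is hypothetical):
global:
  scrape_interval: 60s
scrape_configs:
  - job_name: "slow-changing-metrics"   # hypothetical job
    scrape_interval: 2m                 # per-job override of the global default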
Every 5 minutes (scrape_interval), Prometheus will fetch the metrics from the given URL. It will try for 30 seconds (scrape_timeout) to get them; if it can't complete the scrape within that time, the scrape times out.
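In prometheus.yml, that behaviour corresponds to something like the following (job name and target are placeholders; note that scrape_timeout must not exceed scrape_interval):
scrape_configs:
  - job_name: "example"                # placeholder name
    scrape_interval: 5m                # fetch metrics every 5 minutes
    scrape_timeout: 30s                # give up on a scrape after 30 seconds
    static_configs:
      - targets: ["example-host:8080"] # placeholder target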
It's not advisable to go above about 2 minutes. This is because staleness is 5 minutes by default (which is what is causing the gaps), and you want to allow for a failed scrape.
If you want to ignore the gaps, you can use one of the <aggregation>_over_time functions to get your data from Prometheus:
max_over_time(items_waiting_total[900s])
This is useful in situations where frequent gathering of data is expensive for the collector.
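If you want this smoothing applied automatically rather than typing the range query each time, one option is a recording rule that materialises the gap-free series; this is only a sketch, and the group and record names are made up (recording rules are evaluated every evaluation_interval):
groups:
  - name: items-waiting-smoothing                       # made-up group name
    rules:
      - record: items_waiting_total:max_over_time_15m   # made-up rule name
        expr: max_over_time(items_waiting_total[900s])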