 

elasticsearch / kibana errors "Data too large, data for [@timestamp] would be larger than limit"

On my test ELK cluster, I'm encountering the following error when trying to see data from the last week.

Data too large, data for [@timestamp] would be larger than limit 

The warning about shards failing appears to be misleading because the elasticsearch monitoring tools kopf and head show that all shards are working properly, and the elastic cluster is green.


One user in the Google group for Elasticsearch suggested increasing RAM. I've increased my 3 nodes to 8GB each with a 4.7GB heap, but the issue continues. I'm generating about 5GB to 25GB of data per day, with a 30-day retention.
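This error message comes from the fielddata circuit breaker tripping (sorting and aggregating on @timestamp loads fielddata onto the heap), so it can help to see how much heap fielddata is actually consuming on each node. A minimal diagnostic sketch for a 1.x cluster like this one (host and port are assumptions):

curl 'http://localhost:9200/_cat/fielddata?v' 

curl 'http://localhost:9200/_nodes/stats/indices/fielddata?pretty' 

The first command lists per-node fielddata usage by field; the second returns the same numbers as JSON.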

asked Apr 22 '15 by spuder

People also ask

What is ELK Stack used for?

Often referred to as Elasticsearch, the ELK stack (Elasticsearch, Logstash, and Kibana) gives you the ability to aggregate logs from all your systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

How much data can Elasticsearch hold?

Though there is technically no limit to how much data you can store on a single shard, Elasticsearch recommends a soft upper limit of 50 GB per shard, which you can use as a general guideline that signals when it's time to start a new index.

How big can an Elasticsearch index be?

There are no hard limits on shard size, but experience shows that shards between 10GB and 50GB typically work well for logs and time series data. You may be able to use larger shards depending on your network and use case.
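To see how close existing indices are to those guidelines, the _cat APIs report store sizes per index and per shard. A minimal sketch (host and port are assumptions):

curl 'http://localhost:9200/_cat/indices?v&h=index,pri,rep,store.size' 

curl 'http://localhost:9200/_cat/shards?v' 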


1 Answer

Clearing the cache alleviates the symptoms for now.

http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-clearcache.html

Clear a single index:

curl -XPOST 'http://localhost:9200/twitter/_cache/clear' 

Clear multiple indices:

curl -XPOST 'http://localhost:9200/kimchy,elasticsearch/_cache/clear' 

Clear all indices:

curl -XPOST 'http://localhost:9200/_cache/clear' 

Or, as suggested by a user in IRC (this one seems to work best):

curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true' 
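Clearing the cache is only a stopgap: in Elasticsearch 1.x the fielddata cache is unbounded by default, so it will fill the heap again until the breaker trips. A hedged sketch of capping it in elasticsearch.yml (the percentages are illustrative, not tuned values; the cache size should stay below the breaker limit):

# elasticsearch.yml -- illustrative values, adjust for your heap
indices.fielddata.cache.size: 40%
# Breaker limit; 60% is the 1.x default, shown here for clarity
indices.breaker.fielddata.limit: 60%

The cache size setting is static and requires a node restart; the breaker limit can also be changed dynamically via the cluster settings API.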

Update: these errors went away as soon as the cluster was moved to a faster hypervisor.

answered Oct 09 '22 by spuder