 

Elasticsearch on Kubernetes: sudden rise in data disk usage

We deployed Elasticsearch on Kubernetes in GKE with 2 GB of memory and a 1 GB persistent disk.

We got an out-of-storage exception. We then increased the persistent disk to 2 GB, but by the very next day it had filled up again, even though we had not run any big queries. We increased the persistent disk to 10 GB, and since then disk usage has not grown any further.

On further analysis, we found that all the indices together take only about 20 MB, so we are unable to tell what data is actually occupying the disk.
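
For reference, one way to compare the per-index store size with the per-node disk usage is through the _cat APIs. The sketch below is illustrative only and assumes an unauthenticated cluster reachable at http://localhost:9200 (adjust the host, port, and credentials for your deployment):

    import requests

    ES = "http://localhost:9200"  # assumed endpoint; change for your cluster

    # Store size of each index, largest first.
    print(requests.get(f"{ES}/_cat/indices?v&bytes=mb&s=store.size:desc").text)

    # Disk space used per data node, as seen by Elasticsearch.
    print(requests.get(f"{ES}/_cat/allocation?v&bytes=mb").text)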

We used the Elasticsearch nodes stats API to get disk and node statistics.
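
A minimal sketch of querying the nodes stats API for filesystem statistics, again assuming an endpoint at http://localhost:9200:

    import requests

    # Filesystem section of the nodes stats API.
    resp = requests.get("http://localhost:9200/_nodes/stats/fs")
    for node_id, node in resp.json()["nodes"].items():
        fs_total = node["fs"]["total"]
        print(node["name"],
              "total:", fs_total["total_in_bytes"],
              "free:", fs_total["free_in_bytes"],
              "available:", fs_total["available_in_bytes"])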

I am unable to find the exact reason why the storage was exceeded or what data is actually on the disk. Please also suggest ways to prevent this in the future.

asked Oct 05 '20 by Raj Kumar


People also ask

Is Elasticsearch in memory or on disk?

Elasticsearch is effectively an on-disk service: it writes indices directly to disk and removes them only when asked.

Should you run Elasticsearch on Kubernetes?

Elasticsearch is a distributed database using a clustered architecture. It can be complex to deploy and manage Elasticsearch directly on hardware resources. Kubernetes, the world's most popular container orchestrator, makes it easier to deploy, scale, and manage Elasticsearch clusters at a large scale.


1 Answer

Elasticsearch is continuously receiving data, and depending on your configuration it keeps multiple replica copies of each index and may create a new index every day. Check the configuration.
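
For example, a quick way to see how many indices exist, when each was created, and how many primary and replica shards each one keeps is the _cat/indices API. This sketch assumes an unauthenticated cluster at http://localhost:9200:

    import requests

    ES = "http://localhost:9200"  # assumed endpoint; adjust for your cluster

    # Index name, creation date, primary/replica shard counts and store size,
    # sorted by creation date so daily index creation is easy to spot.
    print(requests.get(
        f"{ES}/_cat/indices?v&h=index,creation.date.string,pri,rep,store.size"
        "&s=creation.date"
    ).text)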

Also, if the Elasticsearch cluster fails, it creates a backup (snapshot) of the data each time, so you may need to delete old snapshots before restarting the cluster.
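
If snapshots are indeed piling up, they can be listed and removed through the snapshot API. A minimal sketch, assuming an endpoint at http://localhost:9200 and a snapshot repository named my_backup (a hypothetical name; use the name of your own repository):

    import requests

    ES = "http://localhost:9200"   # assumed endpoint
    REPO = "my_backup"             # hypothetical repository name

    # List every snapshot in the repository with its start time.
    snapshots = requests.get(f"{ES}/_snapshot/{REPO}/_all").json()["snapshots"]
    for snap in snapshots:
        print(snap["snapshot"], snap["start_time"])

    # Delete an old snapshot by name to reclaim space.
    requests.delete(f"{ES}/_snapshot/{REPO}/old_snapshot_name")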

answered Sep 27 '22 by avadhut007