I deployed Elasticsearch on Kubernetes in GKE, with 2 GB of memory and a 1 GB persistent disk.
We got an out-of-storage exception, so we increased the disk to 2 GB; by the next day it had already reached 2 GB, even though we had not run any big queries. We then increased the persistent disk to 10 GB, and since then the disk usage has not grown further.
On further analysis, we found that the indices in total take only 20 MB, so we are unable to tell what data is actually on the disk.
We used the Elasticsearch nodes stats API to get details on disk and node statistics.
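For reference, here is a minimal sketch of the kind of calls we ran, assuming the cluster is reachable on localhost:9200 (e.g. via kubectl port-forward) and using the requests library:

```python
import requests

ES = "http://localhost:9200"  # assumption: port-forwarded cluster endpoint

# Nodes stats, filtered to filesystem metrics: total/available bytes per node.
fs = requests.get(f"{ES}/_nodes/stats/fs").json()
for node in fs["nodes"].values():
    totals = node["fs"]["total"]
    print(node["name"],
          "total:", totals["total_in_bytes"],
          "available:", totals["available_in_bytes"])

# Per-index store sizes, to compare against the disk-level numbers above.
for idx in requests.get(f"{ES}/_cat/indices?format=json&bytes=b").json():
    print(idx["index"], idx["store.size"], "bytes")
```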
I am unable to find the exact reason the storage usage grew so much, or what data is on the disk. Please also suggest ways to prevent this in the future.
However, Elasticsearch is effectively an on-disk service: it writes indices directly to disk and removes them when asked.
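For example, the disk space held by an index is released once that index is deleted. A one-line sketch, where the index name is hypothetical:

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster endpoint

# Deleting an index frees its on-disk storage; "old-logs-2021.01.01" is a
# hypothetical index name used only for illustration.
requests.delete(f"{ES}/old-logs-2021.01.01")
```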
Elasticsearch is a distributed database using a clustered architecture. It can be complex to deploy and manage Elasticsearch directly on hardware resources. Kubernetes, the world's most popular container orchestrator, makes it easier to deploy, scale, and manage Elasticsearch clusters at a large scale.
Elasticsearch is continuously receiving data, and depending on your configuration it creates multiple copies (replicas) of each index and may create a new index every day. Check the config file.
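A minimal sketch of how you could check for a daily-index pattern and for replica copies, and reduce the replica count if it is inflating disk usage. The endpoint and the index pattern "my-index-*" are assumptions; substitute your own:

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster endpoint

# List indices sorted by creation date to spot a new-index-per-day pattern
# (e.g. logstash-2021.01.15, logstash-2021.01.16, ...).
for idx in requests.get(f"{ES}/_cat/indices?format=json&s=creation.date").json():
    print(idx["index"], idx["pri"], "primaries x", idx["rep"], "replicas,",
          idx["store.size"])

# Replicas multiply the on-disk footprint; on a small cluster they can be reduced.
# "my-index-*" is a hypothetical pattern, not from the original question.
requests.put(f"{ES}/my-index-*/_settings",
             json={"index": {"number_of_replicas": 0}})
```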
If the Elasticsearch cluster fails, it may create a backup (snapshot) of the data each time, so you may need to delete old backups before restarting the cluster.
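A sketch of listing and pruning old snapshots through the snapshot API, assuming a snapshot repository is registered; the retention policy here (keep only the newest snapshot) is just an example:

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster endpoint

# Walk every registered snapshot repository.
for repo in requests.get(f"{ES}/_snapshot").json():
    snaps = requests.get(f"{ES}/_snapshot/{repo}/_all").json()["snapshots"]

    # Keep only the most recent snapshot and delete the rest (example policy).
    for snap in sorted(snaps, key=lambda s: s["start_time_in_millis"])[:-1]:
        requests.delete(f"{ES}/_snapshot/{repo}/{snap['snapshot']}")
```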