
Cluster has already maximum shards open

I'm using Windows 10 and I'm getting

Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;]

How can I resolve this? I don't mind if I lose data since it only runs locally.

asked Jun 09 '20 by nicholas

People also ask

How many shards are in a cluster?

A good rule of thumb is to keep the number of shards per node below 20 per GB of heap it has configured. A node with a 30 GB heap should therefore have a maximum of 600 shards, and the further below this limit you stay, the better. This generally helps the cluster stay healthy.
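As a quick illustration of that rule of thumb (the heap size below is an assumed example, not a value read from a real node):

```shell
# ~20 shards per GB of configured heap is the suggested ceiling.
heap_gb=30                 # assumed heap size for illustration
echo $(( heap_gb * 20 ))   # recommended upper bound on shards for this node
```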

What is shards in Elasticsearch?

One index kept on one node would not take advantage of the distributed cluster configuration on which Elasticsearch works, so Elasticsearch splits the documents in an index across multiple nodes in the cluster. Each split of an index is called a shard.

What are cluster shards?

CLUSTER SHARDS returns details about the shards of the cluster. A shard is defined as a collection of nodes that serve the same set of slots and that replicate from each other. A shard may only have a single master at a given time, but may have multiple or no replicas.


4 Answers

If you don't mind the data loss, delete old indices. The easiest way is to do it from the Kibana GUI (Kibana > Management > Dev Tools). To list all indices:

GET /_cat/indices/

You can also delete indices matching a pattern, like below:

DELETE /<index-name>

e.g.:

DELETE /logstash-2020-10*
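If Kibana is not available, the same calls can be made with curl against the REST API. This sketch assumes Elasticsearch runs on localhost:9200 without authentication, and uses logstash-2020-10* purely as an example pattern; note that on recent Elasticsearch versions wildcard deletes may also require `action.destructive_requires_name` to be disabled:

```shell
# List all indices with their sizes and shard counts
curl -X GET "localhost:9200/_cat/indices?v"

# Delete every index matching the pattern (destructive!)
curl -X DELETE "localhost:9200/logstash-2020-10*"
```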
answered Oct 22 '22 by shock_in_sneakers


Aside from the answers above, you can also raise the shard limit as a stopgap until you rearchitect the cluster:

curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{
  "persistent": {
    "cluster.max_shards_per_node": "3000"
  }
}'

Besides, the following commands can be useful, but should be used with CAUTION, of course:

  • Get the total number of unassigned shards in the cluster:

curl -XGET -u elasticuser:yourpassword "http://localhost:9200/_cluster/health?pretty" | grep unassigned_shards

  • To DELETE every index that has an unassigned shard (USE WITH CAUTION — this removes the whole index, not just the unassigned shard):

curl -XGET -u elasticuser:yourpassword "http://localhost:9200/_cat/shards" | grep UNASSIGNED | awk '{print $1}' | xargs -I{} curl -XDELETE -u elasticuser:yourpassword "http://localhost:9200/{}"
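Before deleting anything, it can help to ask the cluster why a shard is unassigned. With no request body, the allocation explain API reports on the first unassigned shard it finds (host and credentials here follow the examples above and are assumptions):

```shell
# Explain why the first unassigned shard cannot be allocated
curl -XGET -u elasticuser:yourpassword "http://localhost:9200/_cluster/allocation/explain?pretty"
```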
answered Oct 22 '22 by codeaprendiz


You are hitting the cluster.max_shards_per_node limit. Add more data nodes, or reduce the number of shards in the cluster.
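One way to reduce the shard count without deleting data is the _shrink API, sketched below. The index name `my-index` and the target shard count are illustrative; the source index must first be made read-only, and all its shard copies must reside on a single node before the shrink can proceed:

```shell
# Block writes on the source index (a prerequisite for shrinking)
curl -X PUT "localhost:9200/my-index/_settings" -H "Content-Type: application/json" -d '{
  "index.blocks.write": true
}'

# Shrink it into a new index with a single primary shard
curl -X POST "localhost:9200/my-index/_shrink/my-index-shrunk" -H "Content-Type: application/json" -d '{
  "settings": {
    "index.number_of_shards": 1
  }
}'
```

Once the shrunk index is green, the original can be deleted to reclaim its shards.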

answered Oct 22 '22 by Amit kumar


You probably have too many shards per node.

May I suggest you look at the following resource about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

answered Oct 22 '22 by GAURAV MOKASHI