
How to increase _cluster/settings/cluster.max_shards_per_node for AWS Elasticsearch Service

I use AWS Elasticsearch Service version 7.1 and its built-in Kibana to manage application logs. New indexes are created daily by Logstash. From time to time Logstash fails with an error about the maximum shards limit being reached, and I have to delete old indexes before it starts working again.
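For reference, the limit defaults to 1000 shards per data node in Elasticsearch 7.x, and current usage can be checked from Kibana Dev Tools with the standard cluster APIs (the active_shards value in the health response is what counts toward the limit):

# "active_shards" in the response is the cluster-wide total, including replicas
GET /_cluster/health

# lists every shard and which index it belongs to
GET /_cat/shards?v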

I found in this document (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html) that I have the option to increase _cluster/settings/cluster.max_shards_per_node.

So I tried that by putting the following command in Kibana Dev Tools:

PUT /_cluster/settings
{
  "defaults" : {
      "cluster.max_shards_per_node": "2000"
  }
}

But I got this error:

{
  "Message": "Your request: '/_cluster/settings' payload is not allowed."
}

Someone suggested that this error occurs when trying to update settings that AWS does not allow, but this document (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_7_1) shows that cluster.max_shards_per_node is on the allowed list.

Please suggest how to update this setting.

asked Dec 05 '22 by asinkxcoswt

1 Answer

You're almost there; you just need to rename defaults to persistent:

PUT /_cluster/settings
{
  "persistent" : {
      "cluster.max_shards_per_node": "2000"
  }
}
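To confirm the change was applied, you can read the settings back (flat_settings just makes the keys easier to scan):

# the response should now show "cluster.max_shards_per_node": "2000" under "persistent"
GET /_cluster/settings?flat_settings=true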

Beware, though, that the more shards you allow per node, the more resources each node will need and the worse performance can get.
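Since your indexes are created daily by Logstash, a complement to raising the limit is to give each new index fewer shards. A minimal sketch using a legacy index template, which ES 7.1 supports (the template name and the logstash-* index pattern are assumptions; adjust them to your naming):

PUT /_template/logstash-shards
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}

One primary shard is usually enough for a small daily log index; it only affects indexes created after the template is in place.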

answered Dec 09 '22 by Val