To understand why an unassigned shard is not being assigned, and what action you must take to allow Elasticsearch to assign it, use the cluster allocation explain API. The request body specifies the index you want to diagnose, the ID of the unassigned shard, and whether you are diagnosing a primary shard.
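For example, a request along these lines (a sketch: "my-index" and shard 0 are placeholders, and the Content-Type header is only needed on newer versions):
curl -XGET 'localhost:9200/_cluster/allocation/explain' \
  -H 'Content-Type: application/json' -d '{
    "index": "my-index",
    "shard": 0,
    "primary": true
}'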
When we create an index, or when one of our nodes crashes, shards may go into an UNASSIGNED state, meaning the data is there but it is not assigned or replicated to a node that can serve it.
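To see which shards are affected, you can list them with the cat shards API and filter for UNASSIGNED (the same approach the script further down uses):
curl -s 'localhost:9200/_cat/shards?v' | grep UNASSIGNED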
By default, Elasticsearch will re-assign shards to nodes dynamically. However, if you've disabled shard allocation (perhaps you did a rolling restart and forgot to re-enable it), you can re-enable shard allocation.
# v0.90.x and earlier
curl -XPUT 'localhost:9200/_settings' -d '{
    "index.routing.allocation.disable_allocation": false
}'

# v1.0+
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
        "cluster.routing.allocation.enable": "all"
    }
}'
Elasticsearch will then reassign shards as normal. This can be slow; consider raising indices.recovery.max_bytes_per_sec and cluster.routing.allocation.node_concurrent_recoveries to speed it up.
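Both are dynamic cluster settings, so they can be raised on the fly; the values below are only illustrative, not recommendations:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
        "indices.recovery.max_bytes_per_sec": "100mb",
        "cluster.routing.allocation.node_concurrent_recoveries": 4
    }
}'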
If you're still seeing issues, something else is probably wrong, so look in your Elasticsearch logs for errors. If you see EsRejectedExecutionException, your thread pools may be too small.
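You can check for rejections with the cat thread pool API, which reports a rejected count per pool on each node (just a quick way to confirm the symptom):
curl -XGET 'localhost:9200/_cat/thread_pool?v'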
Finally, you can explicitly reassign a shard to a node with the reroute API.
# Suppose shard 4 of index "my-index" is unassigned, so you want to
# assign it to node search03:
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands": [{
        "allocate": {
            "index": "my-index",
            "shard": 4,
            "node": "search03",
            "allow_primary": 1
        }
    }]
}'
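Once the reroute has been issued, you can verify that the shard was actually assigned by checking cluster health (not part of the original command, just a sanity check):
curl -XGET 'localhost:9200/_cluster/health?pretty'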
OK, I've solved this with some help from ES support. Issue the following command to the API on all nodes (or the nodes you believe to be the cause of the problem):
curl -XPUT 'localhost:9200/<index>/_settings' \
-d '{"index.routing.allocation.disable_allocation": false}'
where <index> is the index you believe to be the culprit. If you have no idea, just run this against all indices:
curl -XPUT 'localhost:9200/_settings' \
-d '{"index.routing.allocation.disable_allocation": false}'
I also added the same setting to my elasticsearch.yml config, and since then any restarts of the server/service have been problem-free. The shards re-allocated back immediately.
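In elasticsearch.yml that setting would presumably look like this (an assumption based on the answer above, not quoted from it):
# elasticsearch.yml -- assumed yml form of the API setting above
index.routing.allocation.disable_allocation: false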
FWIW, to answer a frequently asked question: set MAX_HEAP_SIZE to 30G unless your machine has less than 60G of RAM, in which case set it to half the available memory.
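Depending on your version, the heap is set either through the ES_HEAP_SIZE environment variable (older releases) or in config/jvm.options (5.x and later); a 30G heap would look roughly like this:
# config/jvm.options (ES 5.x+); on older releases: export ES_HEAP_SIZE=30g
-Xms30g
-Xmx30g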
This little bash script will brute-force reassign every unassigned shard; be aware that you may lose data.
NODE="YOUR NODE NAME"
IFS=$'\n'
# For every UNASSIGNED shard reported by the cat shards API...
for line in $(curl -s 'localhost:9200/_cat/shards' | fgrep UNASSIGNED); do
    INDEX=$(echo "$line" | awk '{print $1}')
    SHARD=$(echo "$line" | awk '{print $2}')
    # ...force-allocate it to $NODE, allowing it to become a primary (risk of data loss)
    curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
        "commands": [
            {
                "allocate": {
                    "index": "'$INDEX'",
                    "shard": '$SHARD',
                    "node": "'$NODE'",
                    "allow_primary": true
                }
            }
        ]
    }'
done
I also encountered a similar error. It happened to me because one of my data nodes was full, which caused shard allocation to fail. If you have unassigned shards and your cluster is RED (along with a few indices), the steps below worked like a charm for me.
In the Kibana Dev Tools console:
GET _cluster/allocation/explain
If there are any unassigned shards, this returns the details; otherwise it returns an error. Then simply running the command below retries the failed allocations, which resolved the issue for me:
POST _cluster/reroute?retry_failed
Thanks to https://github.com/elastic/elasticsearch/issues/23199#issuecomment-280272888