I made space, restarted the service (no good), and then rebooted, but I still get this in elasticsearch.log:
[2015-02-16 13:35:19,625][WARN ][cluster.action.shard ] [Server] [logstash-2015.02.16][1] sending failed shard for [logstash-2015.02.16][1], node[PFamB-ZJS7CwSdyyAcP_8A], [P], s[INITIALIZING], indexUUID [tZ3I9HZ6TDaZSicIuGWRWQ],
reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[logstash-2015.02.16][1]
failed to recover shard]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: ElasticsearchIllegalArgumentException[No version type match [83]]; ]]
[2015-02-16 13:35:19,625][WARN ][cluster.action.shard ] [Server] [logstash-2015.02.16][1] received shard failed for [logstash-2015.02.16][1], node[PFamB-ZJS7CwSdyyAcP_8A], [P], s[INITIALIZING], indexUUID [tZ3I9HZ6TDaZSicIuGWRWQ],
reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[logstash-2015.02.16][1]
failed to recover shard]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: ElasticsearchIllegalArgumentException[No version type match [83]]; ]]
[2015-02-16 13:35:43,570][DEBUG][action.index ] [Server] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-02-16 13:36:10,757][DEBUG][action.index ] [Server] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
What should I do?
As a Java application, Elasticsearch requires some logical memory (heap) allocation from the system's physical memory. This should be up to half of the physical RAM, capped at 32GB. A larger heap is usually needed to handle expensive queries and larger data volumes.
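A minimal sketch, assuming a 1.x package install: the heap is typically set via the ES_HEAP_SIZE environment variable, for example in /etc/default/elasticsearch (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (RHEL/CentOS). The 16g value below is illustrative; use at most half your RAM and no more than 32GB.
# half of physical RAM, never more than 32GB (16g is illustrative)
ES_HEAP_SIZE=16g
Then restart the service so the new heap size takes effect:
sudo service elasticsearch restart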
Remove any unwanted files on your host to reduce disk usage. If this does not resolve the issue (the fix can take a few minutes to take effect), adjust the configuration to manage shard allocation based on disk usage, as sketched below.
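For example, the disk-based shard allocation watermarks can be adjusted through the cluster settings API (the percentages below are illustrative; the same keys can also be placed in elasticsearch.yml):
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'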
Yes. You can go to the node where the indices reside, check the indices subdirectory under your data path, and run df to confirm the space is freed after deletion. The filesystem's queue of pending I/O operations may delay the final removal slightly, but you should see the indices disappear almost immediately.
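For example (the index name here is illustrative), delete an old index over the HTTP API and then confirm the space has been freed:
curl -XDELETE 'http://localhost:9200/logstash-2015.01.01'
df -h /var/lib/elasticsearch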
Bigfoot's solution is the only one that seems to work.
The observed stack trace looks similar to this issue: https://github.com/elastic/elasticsearch/issues/12055
This pull request is supposed to fix it: https://github.com/elastic/elasticsearch/pull/9797
However, upgrading to v1.5.0 does not do the trick either.
Thus the only thing that works:
find /var/lib/elasticsearch/elasticsearch/nodes/ -name "*.recovering"
And delete all of the .recovering files it finds. Of course this has side effects: any operations that existed only in the corrupted translog and were never flushed to the index are lost.
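For reference, a sketch of that cleanup, assuming the default data path shown above and GNU find; stop Elasticsearch and back the files up before removing anything:
find /var/lib/elasticsearch/elasticsearch/nodes/ -name "*.recovering" -delete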