If you use query_string in Elasticsearch with an unlucky query over a large dataset, you can almost bring the cluster down, and it can take quite a long time until the query finishes. This can be a serious problem if you allow users to specify arbitrary query strings.
So, is there a way to limit the time and/or resources a query takes? If you could tell Elasticsearch to abort the query automatically after, say, 10 seconds, that would help quite a lot.
Answer #1: By default, the client's timeout value is set to 10 seconds. If you want to change this global timeout, you can do so by passing timeout=your-time when creating the client object.
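With the official Python client that might look like the following; this is a minimal sketch, assuming elasticsearch-py 7.x (in the 8.x client the parameter is spelled request_timeout) and a hypothetical index name "my-index":

from elasticsearch import Elasticsearch

# Raise the global client-side timeout from the 10-second default to 30 seconds.
es = Elasticsearch("http://localhost:9200", timeout=30)

# A per-request override is also possible.
resp = es.search(index="my-index", body={"query": {"match_all": {}}}, request_timeout=5)
print(resp["timed_out"])

Keep in mind that this is a client-side timeout: the client stops waiting, but the cluster may keep running the query. The server-side search timeout described further down is what actually cuts result collection short.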
Give memory to the filesystem cache: Elasticsearch heavily relies on the filesystem cache in order to make search fast. In general, you should make sure that at least half the available memory goes to the filesystem cache so that Elasticsearch can keep hot regions of the index in physical memory.
By default, you cannot use from and size to page through more than 10,000 hits. This limit is a safeguard set by the index.max_result_window index setting. If you need to page through more than 10,000 hits, use the search_after parameter instead.
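A minimal sketch of search_after paging with the Python client (again assuming elasticsearch-py 7.x, a hypothetical index "my-index", and a sortable "timestamp" field plus a unique "id" field as a tiebreaker, none of which come from the original answer):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

body = {
    "size": 1000,
    "query": {"match_all": {}},
    # search_after requires a deterministic sort; a unique field breaks ties.
    "sort": [{"timestamp": "asc"}, {"id": "asc"}],
}

search_after = None
while True:
    if search_after:
        body["search_after"] = search_after
    resp = es.search(index="my-index", body=body)
    hits = resp["hits"]["hits"]
    if not hits:
        break
    for hit in hits:
        # Replace this with your own processing of hit["_source"].
        print(hit["_id"])
    # The sort values of the last hit become the cursor for the next page.
    search_after = hits[-1]["sort"]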
You can pass a search timeout in the request body or as a URI parameter of the search request. If the search times out before all results have been collected, the partial results gathered so far are returned.
Here's a URI request example for 10 seconds (units here are milliseconds; an explicit time unit such as timeout=10s also works):
http://localhost:9200/_search?timeout=10000
If you set the timeout low enough, the returned JSON will show "timed_out":true.
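The same timeout can be set in the request body. A minimal sketch with the Python client (assuming elasticsearch-py 7.x and the hypothetical index "my-index"):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="my-index",
    body={
        "timeout": "10s",  # stop collecting results after 10 seconds
        "query": {"query_string": {"query": "an arbitrary user-supplied query"}},
    },
)

# When the timeout is hit, Elasticsearch returns whatever it has collected so far.
if resp["timed_out"]:
    print("partial results:", len(resp["hits"]["hits"]))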