I'm trying to implement autocomplete using Elasticsearch, thinking that I understand how to do it...
I'm trying to build multi-word (phrase) suggestions by using ES's edge_n_grams while indexing crawled data.
What is the difference between a tokenizer and a token_filter? I've read the docs on both, but I still need a better understanding of them.
For instance, is a token_filter what ES uses to search against user input? Is a tokenizer what ES uses to make tokens? And what is a token?
Is it possible for ES to create multi-word suggestions using any of these things?
A tokenizer converts text into a stream of tokens. A token filter works with each token of the stream and can modify the stream by adding, updating, or deleting tokens.
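For instance, a custom analyzer in the index settings is exactly that composition: one tokenizer followed by a chain of token filters. A minimal sketch (the index and analyzer names below are made up):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "stop"]
        }
      }
    }
  }
}

Here the standard tokenizer produces the tokens, and the lowercase and stop filters then transform or drop them, one token at a time.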
The key difference is that normalizers can only emit a single token, while analyzers can emit many. Since they emit only one token, normalizers do not use a tokenizer. They do use character filters and token filters, but are limited to those that operate on one character at a time.
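As an illustration (the index, field, and normalizer names are made up, and the typeless mapping syntax shown is the ES 7+ form), a normalizer is declared like an analyzer but without a tokenizer, and is typically attached to a keyword field:

PUT my_index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "char_filter": [],
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "code": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}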
A tokenizer will split the whole input into tokens and a token filter will apply some transformation on each token.
For instance, let's say the input is The quick brown fox. If you use an edgeNGram tokenizer, you'll get the following tokens:
T
Th
The
The  (last character is a space)
The q
The qu
The qui
The quic
The quick
The quick  (last character is a space)
The quick b
The quick br
The quick bro
The quick brow
The quick brown
The quick brown  (last character is a space)
The quick brown f
The quick brown fo
The quick brown fox
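On recent ES versions you can reproduce roughly this output with the _analyze API, defining the tokenizer inline (the gram sizes below are only illustrative; the defaults are much smaller):

POST _analyze
{
  "tokenizer": {
    "type": "edge_ngram",
    "min_gram": 1,
    "max_gram": 20
  },
  "text": "The quick brown fox"
}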
However, if you use a standard tokenizer, which splits the input into words/tokens, and then an edgeNGram token filter, you'll get the following tokens:
T
Th
The
q
qu
qui
quic
quick
b
br
bro
brow
brown
f
fo
fox
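The corresponding sketch for this second variant keeps the standard tokenizer and moves the edge n-gramming into a token filter (again, the gram sizes are only illustrative):

POST _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "edge_ngram",
      "min_gram": 1,
      "max_gram": 10
    }
  ],
  "text": "The quick brown fox"
}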
As you can see, choosing between an edgeNGram tokenizer and an edgeNGram token filter depends on how you want to slice and dice your text and how you want to search it.
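For the autocomplete use case in the question, one common pattern is to apply the edge n-gram analysis at index time only and to analyze the user's query with a plain analyzer via search_analyzer. A rough sketch, with made-up names and the ES 7+ typeless mapping syntax:

PUT my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete_filter"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "standard"
      }
    }
  }
}

That way a query like "The quick br" is matched against the indexed edge n-grams without being n-grammed itself.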
I suggest having a look at the excellent elyzer tool which provides a way to visualize the analysis process and see what is being produced during each step (tokenizing and token filtering).
As of ES 2.2, the _analyze endpoint also supports an explain feature, which shows the details produced during each step of the analysis process.
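For instance, something along these lines (using the default standard analyzer just as an example) returns the tokens produced at each step of the chain:

GET _analyze
{
  "analyzer": "standard",
  "text": "The quick brown fox",
  "explain": true
}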