 

Elasticsearch mapping: keyword tokenizer to avoid splitting tokens and enable wildcard use

I'm trying to build an autocomplete function with AngularJS and Elasticsearch on a given field, for example countryname. It can contain simple names like "France" or "Spain", or "composed names" like "Sierra Leone".

In the mapping this field is not_analyzed to prevent Elasticsearch from tokenizing composed names:

"COUNTRYNAME" : {"type" : "string", "store" : "yes","index": "not_analyzed" }

I need to query Elasticsearch:

  • to filter the documents with something like "countryname:value", where value can contain a wildcard
  • to make an aggregation on the countryname values returned by the filter (I use the aggregation only to get distinct values; the count is useless to me here, maybe there is a better solution)

I can't use wildcards with the not_analyzed field.

This is my query, but a wildcard inside the value doesn't work, and matching is case sensitive.

The wildcard alone does work here:

curl -XGET 'localhost:9200/botanic/specimens/_search?size=0' -d '{
  "fields": [
    "COUNTRYNAME"
  ],
  "query": {
    "query_string": {
      "query": "COUNTRYNAME:*"
    }
  },
  "aggs": {
    "general": {
      "terms": {
        "field": "COUNTRYNAME",
        "size": 0
      }
    }
  }
}'

But this doesn't work (Franc*):

curl -XGET 'localhost:9200/botanic/specimens/_search?size=0' -d '{
  "fields": [
    "COUNTRYNAME"
  ],
  "query": {
    "query_string": {
      "query": "COUNTRYNAME:Franc*"
    }
  },
  "aggs": {
    "general": {
      "terms": {
        "field": "COUNTRYNAME",
        "size": 0
      }
    }
  }
}'

I also tried a bool must query, but that doesn't work with this not_analyzed field and a wildcard either:

curl -XGET 'localhost:9200/botanic/specimens/_search?size=0' -d '{
  "fields": [
    "COUNTRYNAME"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "COUNTRYNAME": "Franc*"
          }
        }
      ]
    }
  },
  "aggs": {
    "general": {
      "terms": {
        "field": "COUNTRYNAME",
        "size": 0
      }
    }
  }
}'

What am I missing or doing wrong? Should I leave the field analyzed in the mapping and use another analyzer that doesn't split composed names into tokens?

AlainIb asked Oct 21 '14


People also ask

What is keyword tokenizer?

The keyword tokenizer is a “noop” tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined with token filters to normalise output, e.g. lower-casing email addresses.
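As a quick illustration (the example text and call below are my own, not from the page): in Elasticsearch 1.x the standalone _analyze API can show this behaviour, though the exact parameter name for token filters has varied between versions:

curl -XGET 'localhost:9200/_analyze?tokenizer=keyword&filters=lowercase' -d 'John.SMITH@example.COM'

This should return a single token, "john.smith@example.com", rather than splitting the address into pieces.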

What is the use of tokenizer in Elasticsearch?

A tokenizer receives a stream of characters, breaks it up into individual tokens (usually individual words), and outputs a stream of tokens. For instance, a whitespace tokenizer breaks text into tokens whenever it sees any whitespace. It would convert the text "Quick brown fox!" into the terms [Quick, brown, fox!] .

What is the default tokenizer in Elasticsearch?

The standard tokenizer is used by Elasticsearch by default; it breaks words based on grammar and punctuation. Besides the standard tokenizer, there are several other off-the-shelf tokenizers: keyword, N-gram, pattern, whitespace, lowercase, and more.

What is Analyzer and tokenizer in Elasticsearch?

Elasticsearch analyzers and normalizers are used to convert text into tokens that can be searched. Analyzers use a tokenizer to produce one or more tokens per text field. Normalizers use only character filters and token filters to produce a single token.
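For reference (this goes beyond the original question, which targets Elasticsearch 1.x): in Elasticsearch 5.2 and later, the usual way to get "whole value, but lowercased" terms is a keyword field with a lowercase normalizer instead of a custom keyword-tokenizer analyzer. A minimal sketch using Elasticsearch 7+ mapping syntax, with illustrative names:

curl -XPUT 'localhost:9200/botanic' -H 'Content-Type: application/json' -d '{
  "settings": {
    "analysis": {
      "normalizer": {
        "keylower": { "type": "custom", "filter": ["lowercase"] }
      }
    }
  },
  "mappings": {
    "properties": {
      "COUNTRYNAME": { "type": "keyword", "normalizer": "keylower" }
    }
  }
}'

Such a field is not tokenized at all, so wildcard queries and terms aggregations both see the full (lowercased) country name.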


1 Answer

I found a working solution: the "keyword" tokenizer. Create a custom analyzer and use it in the mapping for the fields I want to keep unsplit by spaces:

curl -XPUT 'localhost:9200/botanic/' -d '{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "keylower": {
            "tokenizer": "keyword",
            "filter": "lowercase"
          }
        }
      }
    }
  },
  "mappings": {
    "specimens": {
      "_all": { "enabled": true },
      "_index": { "enabled": true },
      "_id": { "index": "not_analyzed", "store": false },
      "properties": {
        "_id": { "type": "string", "store": "no", "index": "not_analyzed" },
        ...
        "LOCATIONID": { "type": "string", "store": "yes", "index": "not_analyzed" },
        "AVERAGEALTITUDEROUNDED": { "type": "string", "store": "yes", "index": "analyzed" },
        "CONTINENT": { "type": "string", "analyzer": "keylower" },
        "COUNTRYNAME": { "type": "string", "analyzer": "keylower" },
        "COUNTRYCODE": { "type": "string", "store": "yes", "index": "analyzed" },
        "COUNTY": { "type": "string", "analyzer": "keylower" },
        "LOCALITY": { "type": "string", "analyzer": "keylower" }
      }
    }
  }
}'
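To check that the custom analyzer behaves as intended (this verification step is my addition, not part of the original answer), the index's _analyze API can be called once the mapping exists:

curl -XGET 'localhost:9200/botanic/_analyze?analyzer=keylower' -d 'Sierra Leone'

It should return a single lowercased token, "sierra leone", rather than two separate tokens.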

Now I can use a wildcard in a query on the field COUNTRYNAME, which is not split:

curl -XGET 'localhost:9200/botanic/specimens/_search?size=10' -d '{
  "fields": ["COUNTRYNAME"],
  "query": {
    "query_string": {
      "query": "COUNTRYNAME:bol*"
    }
  },
  "aggs": {
    "general": {
      "terms": {
        "field": "COUNTRYNAME",
        "size": 0
      }
    }
  }
}'
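A note on case sensitivity (my observation, not part of the original answer): in Elasticsearch 1.x, query_string lowercases wildcard and prefix terms by default through its lowercase_expanded_terms option, which is why bol* (or even BOL*) matches the lowercased tokens produced by keylower. The behaviour can be made explicit like this:

curl -XGET 'localhost:9200/botanic/specimens/_search?size=10' -d '{
  "fields": ["COUNTRYNAME"],
  "query": {
    "query_string": {
      "query": "COUNTRYNAME:BOL*",
      "lowercase_expanded_terms": true
    }
  }
}'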

The result:

{
    "took" : 14,
    "timed_out" : false,
    "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
    },
    "hits" : {
        "total" : 45,
        "max_score" : 1.0,
        "hits" : [{
                "_index" : "botanic",
                "_type" : "specimens",
                "_id" : "91E7B53B61DF4E76BF70C780315A5DFD",
                "_score" : 1.0,
                "fields" : {
                    "COUNTRYNAME" : ["Bolivia, Plurinational State of"]
                }
            }, {
                "_index" : "botanic",
                "_type" : "specimens",
                "_id" : "7D811B5D08FF4F17BA174A3D294B5986",
                "_score" : 1.0,
                "fields" : {
                    "COUNTRYNAME" : ["Bolivia, Plurinational State of"]
                }
            } ...
        ]
    },
    "aggregations" : {
        "general" : {
            "buckets" : [{
                    "key" : "bolivia, plurinational state of",
                    "doc_count" : 45
                }
            ]
        }
    }
}
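One possible refinement (an assumption on my part, not something the original answer does): the aggregation keys come back lowercased because keylower lowercases values at index time. If the buckets should show the original casing, the field could be mapped as a multi-field with a not_analyzed sub-field, for example:

"COUNTRYNAME" : {
  "type" : "string",
  "analyzer" : "keylower",
  "fields" : {
    "raw" : { "type" : "string", "index" : "not_analyzed" }
  }
}

The wildcard query would still target COUNTRYNAME, while the terms aggregation would use "field": "COUNTRYNAME.raw" to return keys such as "Bolivia, Plurinational State of".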
AlainIb answered Oct 06 '22