
ElasticSearch returning only documents with distinct value

Let's say I have the following data:

{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
},
{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
},
{
  "name" : "GEORGE",
  "favorite_cars" : [ "honda", "Hyundae" ]
}

Whenever I query this data for people whose favorite car is toyota, it returns this data:

{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
},
{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
}

The result is two records with the name ABC. How do I select only distinct documents? The result I want is just this:

{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
}

Here's my query:

{
    "fuzzy_like_this_field" : {
        "favorite_cars" : {
            "like_text" : "toyota",
            "max_query_terms" : 12
        }
    }
}

I am using ElasticSearch 1.0.0 with the Java API client.

asked Jul 01 '14 by user962206


People also ask

How do I get distinct values in Elasticsearch?

Use a terms aggregation on the color field. Pay attention to how the field you want distinct values for is analyzed: make sure it is not tokenized at index time, otherwise every token that is part of the field content will show up as a separate entry in the aggregation.
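For illustration only, here is a minimal sketch against a hypothetical cars index with a not_analyzed color field (the index, type, field, and aggregation names below are made up, not taken from the question):

PUT /cars
{
  "mappings": {
    "car": {
      "properties": {
        "color": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

POST /cars/_search?search_type=count
{
  "aggs": {
    "distinct_colors": {
      "terms": { "field": "color" }
    }
  }
}

Because color is not analyzed, a value like "dark blue" stays one bucket instead of splitting into "dark" and "blue".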

How do I capture a specific field in Elasticsearch?

There are two recommended methods to retrieve selected fields from a search query: Use the fields option to extract the values of fields present in the index mapping. Use the _source option if you need to access the original data that was passed at index time.
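As a rough sketch of the _source option (the people index, field names, and match query below are assumptions, not part of the original question):

POST /people/_search
{
  "_source": [ "name" ],
  "query": {
    "match": { "favorite_cars": "toyota" }
  }
}

Each hit's _source will then contain only the name field.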

How do I make a field unique in Elasticsearch?

One solution is to use the uniqueId field value as the document ID and use op_type=create when storing the documents in ES. With this you can make sure your uniqueId field has a unique value and will not be overridden by another document with the same value.
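A sketch of that approach (the people index, person type, and document id are hypothetical), using the unique value as the document id together with op_type=create:

PUT /people/person/ABC?op_type=create
{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
}

Repeating the same request fails with a DocumentAlreadyExistsException instead of silently overwriting the existing document.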

What is cardinality aggregation?

Cardinality aggregation: a single-value metrics aggregation that calculates an approximate count of distinct values. Assume you are indexing store sales and would like to count the unique number of sold products that match a query; a sketch of such a request follows.
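A minimal sketch of such a request (the sales index and product field are assumptions; also note the cardinality aggregation was, as far as I recall, added after 1.0, so it may not be available on the 1.0.0 version used in the question):

POST /sales/_search?search_type=count
{
  "aggs": {
    "distinct_products": {
      "cardinality": { "field": "product" }
    }
  }
}

The response contains a single approximate count of distinct product values rather than the values themselves.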


2 Answers

You can eliminate duplicates using aggregations. With a terms aggregation the results will be grouped by one field, e.g. name, also providing a count of the occurrences of each value of the field, and the results will be sorted by this count (descending).

{
  "query": {
    "fuzzy_like_this_field": {
      "favorite_cars": {
        "like_text": "toyota",
        "max_query_terms": 12
      }
    }
  },
  "aggs": {
    "grouped_by_name": {
      "terms": {
        "field": "name",
        "size": 0
      }
    }
  }
}

In addition to the hits, the result will also contain the buckets with the unique values in key and with the count in doc_count:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 0.19178301,
    "hits" : [ {
      "_index" : "pru",
      "_type" : "pru",
      "_id" : "vGkoVV5cR8SN3lvbWzLaFQ",
      "_score" : 0.19178301,
      "_source":{"name":"ABC","favorite_cars":["ferrari","toyota"]}
    }, {
      "_index" : "pru",
      "_type" : "pru",
      "_id" : "IdEbAcI6TM6oCVxCI_3fug",
      "_score" : 0.19178301,
      "_source":{"name":"ABC","favorite_cars":["ferrari","toyota"]}
    } ]
  },
  "aggregations" : {
    "grouped_by_name" : {
      "buckets" : [ {
        "key" : "abc",
        "doc_count" : 2
      } ]
    }
  }
}

Note that using aggregations will be costly because of duplicate elimination and result sorting.

answered Nov 02 '22 by JRL


ElasticSearch doesn't provide any query by which you can get distinct documents based on a field value.

Ideally you should have indexed the same document with the same type and id, since these two are what ElasticSearch uses to build the _uid that uniquely identifies a document. A unique id is important not only for detecting duplicate documents but also for updating the same document when it is modified, instead of inserting a new one. For more information about indexing documents you can read this.
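For instance (the people index and person type here are hypothetical), indexing with an explicit id means re-indexing the same person updates the existing document instead of adding a duplicate:

PUT /people/person/ABC
{
  "name" : "ABC",
  "favorite_cars" : [ "ferrari", "toyota" ]
}

Sending the same request again just bumps the document's _version, so a search for toyota then returns ABC only once.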

But there is definitely a workaround for your problem. Since you are using the Java API client, you can remove duplicate documents based on a field value yourself. In fact, this gives you more flexibility to perform custom operations on the responses you get from ES.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;

// Run the search (set your query on the request builder) and get the hits.
SearchResponse response = client.prepareSearch().execute().actionGet();
SearchHits hits = response.getHits();

// Keep only one hit per distinct value of the "name" field.
Map<String, SearchHit> distinctObjects = new HashMap<String, SearchHit>();
Iterator<SearchHit> iterator = hits.iterator();
while (iterator.hasNext()) {
    SearchHit searchHit = iterator.next();
    Map<String, Object> source = searchHit.getSource();
    if (source.get("name") != null) {
        // A later hit with the same name simply replaces the earlier one.
        distinctObjects.put(source.get("name").toString(), searchHit);
    }
}

So you will end up with a map of unique SearchHit objects, keyed by name.

You can also create an object mapping and use that in place of SearchHit.

I hope this solves your problem. Please forgive me if there are any errors in the code; it is just pseudo-ish code to show how you can solve your problem.

Thanks

answered Nov 02 '22 by dark_shadow