 

Aggregation query that returns all fields in Elasticsearch

I have a large (20 GB) CSV file in the following format.

date,ip,dev_type,env,time,cpu_usage 
2015-11-09,10.241.121.172,M2,production,11:01,8 
2015-11-09,10.241.121.172,M2,production,11:02,9 
2015-11-09,10.241.121.243,C1,preproduction,11:01,4 
2015-11-09,10.241.121.243,C1,preproduction,11:02,8
2015-11-10,10.241.121.172,M2,production,11:01,3 
2015-11-10,10.241.121.172,M2,production,11:02,9 
2015-11-10,10.241.121.243,C1,preproduction,11:01,4 
2015-11-10,10.241.121.243,C1,preproduction,11:02,8

and imported it into Elasticsearch in the following format:

{
  "_index": "cpuusage",
  "_type": "logs",
  "_id": "AVFOkMS7Q4jUWMFNfSrZ",
  "_score": 1,
  "_source": {
    "date": "2015-11-10",
    "ip": "10.241.121.172",
    "dev_type": "M2",
    "env": "production",
    "time": "11:02",
    "cpu_usage": "9"
  },
  "fields": {
    "date": [
      1447113600000
    ]
  }
}
...
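Each CSV row is one document; for reference, a single row can be indexed like this (a minimal sketch using the cpuusage index and logs type shown above, with the field values exactly as they appear in _source; the full 20 GB import would normally go through the bulk API or Logstash):

curl -XPOST 'localhost:9200/cpuusage/logs' -d '{
  "date": "2015-11-10",
  "ip": "10.241.121.172",
  "dev_type": "M2",
  "env": "production",
  "time": "11:02",
  "cpu_usage": "9"
}'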

So how can I output all the fields (date, ip, dev_type, env, cpu_usage) when I find the maximum value of cpu_usage for each ip on each day?

curl -XGET 'localhost:9200/cpuusage/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      },
      "aggs": {
        "genders": {
          "terms": {
            "field": "ip",
            "size": 100000,
            "order": { "_count": "asc" }
          },
          "aggs": {
            "cpu_usage": { "max": { "field": "cpu_usage" } }
          }
        }
      }
    }
  }
}'

Output (truncated):
 "aggregations" : {
        "events_by_date" : {
          "buckets" : [ {
            "key_as_string" : "2015-11-09T00:00:00.000Z",
            "key" : 1447027200000,
            "doc_count" : 4,
            "genders" : {
              "doc_count_error_upper_bound" : 0,
              "sum_other_doc_count" : 0,
              "buckets" : [ {
                "key" : "10.241.121.172",
                "doc_count" : 2,
                "cpu_usage" : {
                  "value" : 9.0
                }
              }, {
                "key" : "10.241.121.243",
                "doc_count" : 2,
                "cpu_usage" : {
                  "value" : 8.0
                }
              } ]
            }
          },
Asked Nov 28 '15 by mk_

1 Answer

You can do that with a top_hits aggregation.

Try this:

{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      },
      "aggs": {
        "genders": {
          "terms": {
            "field": "ip",
            "size": 100000,
            "order": {
              "_count": "asc"
            }
          },
          "aggs": {
            "cpu_usage": {
              "max": {
                "field": "cpu_usage"
              }
            },
            "include_source": {
              "top_hits": {
                "size": 1,
                "_source": {
                  "include": [
                    "date", "ip", "dev_type", "env", "cpu_usage"
                  ]
                }
              }
            }
          }
        }
      }
    }
  }
}
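Note that top_hits returns the best-scoring hit in each bucket, and with no query that is effectively an arbitrary document for that ip and day. If the returned _source has to be the exact row holding the maximum cpu_usage, one option (a sketch, assuming cpu_usage is mapped as a numeric field) is to sort the inner hits on it:

{
  "size": 0,
  "aggs": {
    "by_date": {
      "date_histogram": { "field": "date", "interval": "day" },
      "aggs": {
        "genders": {
          "terms": { "field": "ip", "size": 100000 },
          "aggs": {
            "cpu_usage": { "max": { "field": "cpu_usage" } },
            "include_source": {
              "top_hits": {
                "size": 1,
                "sort": [ { "cpu_usage": { "order": "desc" } } ],
                "_source": {
                  "include": [ "date", "ip", "dev_type", "env", "cpu_usage" ]
                }
              }
            }
          }
        }
      }
    }
  }
}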

Does this help?

Answered Nov 15 '22 by ChintanShah25