How to define separate indexes for different logs in Filebeat/ELK?

I am wondering how to create separate indexes for the different logs fetched into Logstash (and later passed on to Elasticsearch), so that in Kibana I can define index patterns for them and discover them.

In my case, I have a few client servers (each with Filebeat installed) and a centralized log server (ELK). Each client server produces different kinds of logs, e.g. redis.log, Python logs, and MongoDB logs, which I would like to sort into different indexes stored in Elasticsearch.

Each client server also serves a different purpose, e.g. databases, UIs, applications. Hence I would also like to give them different index names (by changing the output index in filebeat.yml?).

daiyue asked Aug 08 '16

People also ask

Can we join two indexes in Elasticsearch?

Instead, Elasticsearch offers two forms of join which are designed to scale horizontally. Documents may contain fields of type `nested`. These fields are used to index arrays of objects, where each object can be queried (with the `nested` query) as an independent document.
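As a sketch of what such a query looks like (the `comments` field and its `author` sub-field are hypothetical, assuming a mapping that declares `comments` as type `nested`):

```json
{
  "query": {
    "nested": {
      "path": "comments",
      "query": {
        "match": { "comments.author": "alice" }
      }
    }
  }
}
```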

Can Filebeat have multiple outputs?

You can have as many inputs as you want, but only one output; you will need to send your logs to a single Logstash instance and from there you can forward them to other places. Filebeat does not support sending the same data to multiple Logstash servers simultaneously.
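A minimal sketch of pointing Filebeat at a single Logstash instance (the hostname and port are placeholders, written in the same Filebeat 1.x config style as the answer below):

```yaml
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
```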

What is an elastic index?

In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index. Indices are used to store the documents in dedicated data structures corresponding to the data type of fields.
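For illustration, shard and replica counts can be set when an index is created (the index name is only an example, shown here in Kibana Dev Tools syntax):

```
PUT /redis-2016.08.08
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```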


1 Answer

In your Filebeat configuration you can use document_type to identify the different logs that you have. Then inside of Logstash you can set the value of the type field to control the destination index.

However, before you separate your logs into different indices, you should consider leaving them in a single index and using either type or some custom field to distinguish between log types. See index vs type.
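A minimal sketch of the custom-field approach (the field name `log_type` is made up for illustration; in Filebeat 1.x, `fields` entries are added to each event under the `fields` key):

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/redis/*.log
      fields:
        log_type: redis
```

In Logstash or Kibana you can then filter on `log_type` while keeping everything in one index.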

Example Filebeat prospector config:

filebeat:
  prospectors:
    - paths:
        - /var/log/redis/*.log
      document_type: redis

    - paths:
        - /var/log/python/*.log
      document_type: python

    - paths:
        - /var/log/mongodb/*.log
      document_type: mongodb

Example Logstash config:

input {
  beats {
    port => 5044
  }
}

output {
  # Customize elasticsearch output for Filebeat.
  if [@metadata][beat] == "filebeat" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      # Use the Filebeat document_type value for the Elasticsearch index name.
      index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
      document_type => "log"
    }
  }
}
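If you later need per-type processing before indexing, the same `type` value set by `document_type` can drive conditionals in a Logstash filter block (a sketch; the tag names are arbitrary):

```
filter {
  if [type] == "redis" {
    mutate { add_tag => ["redis"] }
  } else if [type] == "python" {
    mutate { add_tag => ["python"] }
  }
}
```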
A J answered Oct 27 '22