I use logstash-forwarder and logstash, and I create a dynamic index with tags using this configuration:
/etc/logstash/conf.d/10-output.conf
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "logstash-%{tags}-%{+YYYY.MM.dd}"
  }
}
/etc/logstash-forwarder.conf
"files": [
{
"paths": [
"/var/log/httpd/ssl_access_log",
"/var/log/httpd/ssl_error_log"
],
"fields": { "type": "apache", "tags": "mytag" }
},
The associated filebeat configuration is:
/etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/httpd/access_log
      input_type: log
      document_type: apache
      fields:
        tags: mytag
In Kibana, instead of mytag, I see beats_input_codec_plain_applied on all of my indices.
I can see two problems mentioned in this topic. Let me summarize them for my own benefit, and hopefully for other visitors struggling with this problem too.
bad (tags as a plain string):

fields:
  tags: mytag

good (tags as a list):

fields:
  tags: ["mytag"]
However, there is a more important issue.
If you are adding only one tag, the workaround (as per hellb0y77) would be to remove the automatic tag that filebeat adds, in logstash on the central server side:
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
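With the automatic tag removed, mytag should be left as the only entry in tags, so the %{tags} reference in the index name resolves to logstash-mytag-YYYY.MM.dd as intended.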
This would not work if one wanted to add multiple tags in filebeat. One would have to make logstash split a concatenated string and add each item to tags. Perhaps it would be better in this case to put the tags, on the filebeat end, into some custom field rather than the "tags" field, and extract them from that custom field on the logstash side.
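A rough sketch of that custom-field approach (the field name my_tags and the comma separator are my own choices for illustration, not anything filebeat prescribes): on the filebeat end, ship the tags as one concatenated string:

/etc/filebeat/filebeat.yml (excerpt)
      fields:
        my_tags: "tag1,tag2"

On the logstash side, split that string and merge the pieces into the real tags array. With filebeat 1.x, custom fields land under [fields] unless fields_under_root is set:

filter {
  if [fields][my_tags] {
    # Turn "tag1,tag2" into ["tag1", "tag2"]
    mutate {
      split => { "[fields][my_tags]" => "," }
    }
    # Append those entries to the event's tags array and drop the helper field
    mutate {
      merge => { "tags" => "[fields][my_tags]" }
      remove_field => ["[fields][my_tags]"]
    }
  }
}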
Anyway, there seems to be no way to make it work by changing the filebeat configuration alone. The only way is to do some parsing in the receiving logstash filter chain. See also https://github.com/elastic/filebeat/issues/220
If you can remove logstash, then this could also be a solution for you: when sending logs from filebeat directly to elasticsearch, the tags appear in ES as expected.
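For reference, a minimal filebeat output section for that direct route (assuming the same localhost:9200 Elasticsearch instance as in the question) would look like:

/etc/filebeat/filebeat.yml (excerpt)
output:
  elasticsearch:
    hosts: ["localhost:9200"]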