I want to have a centralized logging server.
I have created two Ubuntu 12.04 LTS servers based on this iso.
After successfully and strictly following the tutorial's steps, I have:
My question is:
Note that catalina.out does not have a single uniform format, and that multi-line Java exceptions will be logged as well.
I have read some tutorials on this, but I feel they weren't explained well enough for a beginner like me: they lack examples of full file paths, appropriate file names, the required Ubuntu commands, and so on.
Thanks in advance!
Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.
Simply put, Logstash allows you to put the data it gathers from your various systems into a single common format. This allows analytics engines like Elasticsearch and visualisation tools like Kibana to make the most of your data.
Logstash is one of many data frontends that can deliver data in an Elasticsearch-friendly way: Logstash's indexer indexes the data (extracting fields, deciding which index to store the data in, etc.), and its shipper ships the data to Elasticsearch... – zwer.
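In miniature, a Logstash config mirrors that pipeline with three sections — input, filter, output. (The path and host below are placeholders, not values from your setup:)

```
input  { file { path => "/var/log/tomcat6/catalina.out" } }
filter { grok { match => [ "message", "%{GREEDYDATA:msg}" ] } }
output { elasticsearch { host => "localhost" } }
```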
Here is what I have, note that the access logs use a custom log format (documented below) and I extract a bit more information out of the Tomcat logs (it is useful to have logLevel as a field, for example):
input {
  file {
    type => "access-log"
    path => [ "C:/apache-tomcat-6.0.18/logs/*.txt" ]
  }
  file {
    type => "tomcat"
    path => [ "C:/apache-tomcat-6.0.18/logs/*.log" ]
    codec => multiline {
      negate => true
      pattern => "(^%{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM))"
      what => "previous"
    }
  }
}
filter {
  if [type] == "access-log" {
    grok {
      # Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T "%{Referer}i" "%{User-Agent}i"
      match => [ "message", "%{IPV4:clientIP} %{NOTSPACE:user} \[%{DATA:timestamp}\] %{WORD:method} %{NOTSPACE:request} %{NUMBER:status} %{NUMBER:bytesSent} %{NUMBER:duration} \"%{NOTSPACE:referer}\" \"%{DATA:userAgent}\"" ]
      remove_field => [ "message" ]
    }
    grok {
      match => [ "request", "/%{USERNAME:app}/" ]
      tag_on_failure => [ ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    geoip {
      source => "clientIP"
    }
    dns {
      reverse => [ "clientIP" ]
    }
    mutate {
      lowercase => [ "user" ]
      convert => [ "bytesSent", "integer", "duration", "float" ]
    }
    if [referer] == "-" {
      mutate {
        remove_field => [ "referer" ]
      }
    }
    if [user] == "-" {
      mutate {
        remove_field => [ "user" ]
      }
    }
  }
  if [type] == "tomcat" {
    # Drop empty lines
    if [message] !~ /(.+)/ {
      drop { }
    }
    grok {
      patterns_dir => "./patterns"
      match => [ "message", "%{CATALINA_DATESTAMP:timestamp} %{NOTSPACE:className} %{WORD:methodName}\r\n%{LOGLEVEL:logLevel}: %{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
    }
    grok {
      match => [ "path", "/%{USERNAME:app}.20%{NOTSPACE}.log" ]
      tag_on_failure => [ ]
    }
    # e.g. Aug 25, 2014 11:23:31 AM
    date {
      match => [ "timestamp", "MMM dd, YYYY hh:mm:ss a" ]
      remove_field => [ "timestamp" ]
    }
  }
}
output {
  elasticsearch { host => "somehost" }
}
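If you want to sanity-check the access-log grok pattern outside Logstash, here is a rough Python approximation of it. The regex below is a hand-translated equivalent (not generated from the grok pattern), and the sample log line is made up for illustration:

```python
import re

# Hand-translated approximation of the grok pattern:
# %{IPV4:clientIP} %{NOTSPACE:user} \[%{DATA:timestamp}\] %{WORD:method}
# %{NOTSPACE:request} %{NUMBER:status} %{NUMBER:bytesSent} %{NUMBER:duration}
# "%{NOTSPACE:referer}" "%{DATA:userAgent}"
ACCESS_RE = re.compile(
    r'(?P<clientIP>\d{1,3}(?:\.\d{1,3}){3}) '
    r'(?P<user>\S+) '
    r'\[(?P<timestamp>.*?)\] '
    r'(?P<method>\w+) '
    r'(?P<request>\S+) '
    r'(?P<status>\d+) '
    r'(?P<bytesSent>\d+) '
    r'(?P<duration>\d+(?:\.\d+)?) '
    r'"(?P<referer>\S+)" '
    r'"(?P<userAgent>.*?)"'
)

# Made-up sample line matching the custom access log format.
line = ('10.0.0.1 jdoe [25/Aug/2014:11:23:31 +0200] GET /myapp/index.jsp '
        '200 5120 0.042 "-" "Mozilla/5.0"')
fields = ACCESS_RE.match(line).groupdict()
```

Feeding a real line from your own access log through this before restarting Logstash is a quick way to catch a mismatched pattern.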
You can also push the catalina logs directly to logstash using GELF: http://www.paluch.biz/blog/105-integrating-logstash-with-tomcat-7.html
logstash-gelf does not provide an access-logging valve for pushing access logs to logstash.
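For a feel of what a GELF message looks like on the wire, here is a minimal Python sketch that builds a gzip-compressed GELF 1.1 payload and sends it over UDP. The host name, port 12201, and the `_facility` field are assumptions for illustration; port 12201 is only correct if your logstash gelf input listens there:

```python
import gzip
import json
import socket
import time

def gelf_payload(short_message, host="tomcat-host", level=6, **extra):
    """Build a gzip-compressed GELF 1.1 payload.

    Extra keyword arguments become GELF additional fields, which the
    spec requires to be prefixed with an underscore.
    """
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,  # syslog severity: 6 = informational
    }
    msg.update({"_" + key: value for key, value in extra.items()})
    return gzip.compress(json.dumps(msg).encode("utf-8"))

# Send to a logstash gelf input (assumed here to listen on UDP 12201).
payload = gelf_payload("Server startup in 1234 ms", facility="catalina")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("localhost", 12201))
```

In practice you would let logstash-gelf (or a similar appender) do this from Tomcat's logging configuration rather than hand-rolling the payload.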