I've been asked to consolidate our log4j log files (NOT using Socket calls for now) into a Logstash JSON file that I'll then feed to Elasticsearch. Our code uses the RollingFileAppender. Here's an example log entry:
2016-04-22 16:43:25,172 ERROR :SomeUser : 2 [com.mycompany.SomeClass] AttributeSchema 'Customer |Customer |Individual|Individual|Quarter|Date' : 17.203 The Log Message.
Here's the ConversionPattern value in our log4j configuration file:
<param name="ConversionPattern" value="%d{ISO8601} %p %x %X{username}:%t [%c] %m %n" />
Can someone please help me write a Logstash grok filter that will parse this line? Here's what I have so far:
filter {
  if [type] == "log4j" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:msgbody}"]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}
But of course it takes everything after the priority as the message. I want to further segregate AT LEAST the following fields (defined in the Log4j PatternLayout).
I was able to make the following filter work:
filter {
  mutate {
    strip => "message"
  }
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} :%{DATA:thread} : %{NUMBER:thread_pool} \[(?<classname>[^\]]+)\] %{SPACE} %{GREEDYDATA:msgbody}"
    }
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
  }
}
However, this is specific to the above log.
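For anyone who wants to verify a pattern like this against sample lines before wiring it into a real pipeline, here is a minimal sketch of a standalone test config (the file name and the stdin/stdout plugins are my choices, not part of the original setup) that reads log lines from the console and prints the parsed event:

```conf
# test-grok.conf - feed sample log4j lines on stdin, inspect the parsed fields
input {
  stdin { type => "log4j" }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} :%{DATA:thread} : %{NUMBER:thread_pool} \[(?<classname>[^\]]+)\] %{GREEDYDATA:msgbody}"
    }
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
  }
}
output {
  # rubydebug pretty-prints every field of the event, including _grokparsefailure tags
  stdout { codec => rubydebug }
}
```

Run it with `bin/logstash -f test-grok.conf`, paste a log line, and check that each field comes out as expected (or that the event is tagged `_grokparsefailure` if the pattern missed).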
I have a follow-up question. How can I "pad" the patterns to manage the variable spacing after each field? For example, an ERROR log level is 5 characters wide, while INFO is only 4, so how do I manage this so the filter works for both ERROR and INFO lines?
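One common way to handle this (assuming the padding comes from a fixed-width conversion like %-5p in the layout, which is an assumption on my part): replace the literal space after the level with the grok %{SPACE} pattern, which matches any run of whitespace, including none. That way "ERROR :" (one space) and "INFO  :" (two spaces) both match. A sketch of the adjusted match line:

```conf
grok {
  match => {
    # %{SPACE} absorbs however much padding follows the level and the class name,
    # so the same pattern matches both 4- and 5-character log levels
    "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel}%{SPACE}:%{DATA:thread} : %{NUMBER:thread_pool}%{SPACE}\[(?<classname>[^\]]+)\]%{SPACE}%{GREEDYDATA:msgbody}"
  }
}
```

The same idea applies to any other column that log4j pads to a fixed width: put %{SPACE} (or \s+ if at least one space is guaranteed) wherever the amount of whitespace can vary.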