
Logging from Java app to ELK without need for parsing logs

I want to send logs from a Java app to Elasticsearch, and the conventional approach seems to be to set up Logstash on the server running the app, and have Logstash parse the log files (with regex...!) and load them into Elasticsearch.

Is there a reason it's done this way, rather than just setting up Log4j (or Logback) to log things in the desired format directly into a log collector that can then be shipped to Elasticsearch asynchronously? It seems crazy to me to have to fiddle with grok filters to deal with multiline stack traces (and burn CPU cycles on log parsing) when the app itself could just log in the desired format in the first place.
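Concretely, the kind of thing I have in mind is the sketch below, using Logback with the logstash-logback-encoder library so that every event, stack traces included, is written as one JSON object. The file path and the programmatic wiring are just for illustration; an XML config would do the same:

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import net.logstash.logback.encoder.LogstashEncoder;
import org.slf4j.LoggerFactory;

public class JsonLoggingSetup {
    public static void main(String[] args) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Encoder that renders each event (including full stack traces) as one JSON object.
        LogstashEncoder encoder = new LogstashEncoder();
        encoder.setContext(ctx);
        encoder.start();

        // Plain file appender; a collector could then tail this file as-is.
        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(ctx);
        appender.setFile("/var/log/myapp/app.json"); // illustrative path
        appender.setEncoder(encoder);
        appender.start();

        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);

        // A multiline stack trace ends up as a single JSON document -- nothing to grok.
        LoggerFactory.getLogger(JsonLoggingSetup.class)
                     .error("something failed", new RuntimeException("boom"));
    }
}
```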

On a tangentially related note, for apps running in a Docker container, is it best practice to log directly to Elasticsearch, given the need to run only one process?

Rob asked Aug 31 '15


People also ask

Can you send logs directly to Elasticsearch?

Java logs can be sent to Elasticsearch for later retrieval and analysis. They can be sent directly from the application, or written to files and later shipped by a data shipper such as Elastic's own Filebeat.

Which tools can be used for logging in a microservices architecture (ELK stack)?

One popular set of tools is Elasticsearch, Logstash, and Kibana, together referred to as the ELK stack. They are used for searching, analyzing, and visualizing log data in real time.


2 Answers

If you really want to go down that path, the idea would be to use something like an Elasticsearch appender (several third-party implementations exist) which would ship your logs directly to your ES cluster.
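To make the trade-off concrete, here's roughly what such an appender does behind the scenes: every log event becomes an HTTP request against the cluster. A minimal sketch using Elasticsearch's low-level REST client; the host, index name, and payload are illustrative assumptions, not any particular appender's API:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class DirectEsShipper {
    public static void main(String[] args) throws Exception {
        // One HTTP round trip per log event: this is essentially what an
        // Elasticsearch appender is doing for you under the hood.
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        try {
            Request request = new Request("POST", "/app-logs/_doc");
            request.setJsonEntity("{\"level\":\"ERROR\",\"message\":\"something failed\"}");
            client.performRequest(request); // throws as soon as the cluster is unreachable
        } finally {
            client.close();
        }
    }
}
```

Every one of those requests fails the moment the cluster is unreachable, which is exactly the failure mode to worry about.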

However, I'd advise against it for the same reasons mentioned by @Vineeth Mohan. You'd also need to ask yourself a couple of questions, chief among them: what would happen if your ES cluster went down for any reason (OOM, network outage, ES upgrade, etc.)?

There are many reasons why asynchronicity exists, one of which is the robustness of your architecture, and most of the time that's much more important than burning a few more CPU cycles on log parsing.
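As an illustration of that robustness argument, a buffered shipper decouples the application from the cluster with a bounded queue: when Elasticsearch is down, events are dropped or retried in the background instead of blocking application threads. A rough sketch (the class, queue size, and retry policy are made up for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncLogBuffer {
    // Bounded buffer between the app threads and the (slow, failable) shipper.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    /** Called on the application thread: never blocks, never throws. */
    public void offer(String jsonEvent) {
        // If ES is down and the buffer fills up, drop the event instead of
        // stalling the application -- losing a log line beats losing a request.
        queue.offer(jsonEvent);
    }

    /** Runs on a background thread; failures and retries are invisible to the app. */
    public void drainLoop() throws InterruptedException {
        while (true) {
            String event = queue.take();
            try {
                ship(event);            // e.g. bulk-index into Elasticsearch
            } catch (Exception e) {
                Thread.sleep(1_000);    // back off, then retry once
                queue.offer(event);     // re-queue; silently dropped if still full
            }
        }
    }

    private void ship(String event) { /* HTTP call elided */ }
}
```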

Also note that there is an ongoing discussion about this very subject in the official ES discussion forum.

Val answered Oct 05 '22


I think it's usually ill-advised to log directly to Elasticsearch from a Log4j/Logback/whatever appender, but I agree that writing Logstash filters to parse a "normal" human-readable Java log is a bad idea too. I use https://github.com/logstash/log4j-jsonevent-layout everywhere I can to have Log4j's regular file appenders produce JSON logs that don't require any further parsing by Logstash.
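For reference, wiring that layout up is just a matter of pointing a regular file appender at it. A minimal programmatic sketch, assuming log4j 1.x and the net.logstash.log4j.JSONEventLayoutV1 class from that project (the file path is illustrative; the same thing is usually done in log4j.properties):

```java
import net.logstash.log4j.JSONEventLayoutV1;
import org.apache.log4j.Logger;
import org.apache.log4j.RollingFileAppender;

public class Log4jJsonSetup {
    public static void main(String[] args) throws Exception {
        // A regular rolling file appender -- only the layout changes.
        RollingFileAppender appender =
                new RollingFileAppender(new JSONEventLayoutV1(), "/var/log/myapp/app.json");
        Logger.getRootLogger().addAppender(appender);

        Logger.getLogger(Log4jJsonSetup.class)
              .warn("this line is written to disk as a single JSON object");
    }
}
```

On the Logstash side you can then read the file with codec => json and skip grok entirely.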

Magnus Bäck answered Oct 05 '22