Best Docker logging architecture using ELK stack

Recently I have been trying to find the best Docker logging mechanism using the ELK stack. I have some questions regarding the best workflow that companies use in production. Our system has a typical software stack including Tomcat, PostgreSQL, MongoDB, Nginx, RabbitMQ, Couchbase, etc. As of now, our stack runs on a CoreOS cluster. Please find my questions below:

  1. With the ELK stack, what is the best methodology for log forwarding - should I use Lumberjack? I am asking this because I have seen workflows where people use Syslog/Rsyslog to forward the logs to Logstash.
  2. Since all of our software pieces are containerized, should I include a log forwarder in each of my containers? I am planning to do this because most of my containers switch nodes based on health, so I am not keen on mounting the container's file system onto the host.
  3. Should I use Redis as a broker for forwarding the logs? If yes, why?
  4. How difficult is it to write log-config files that define the log format to be forwarded to Logstash?

These are subjective questions, but I am sure this is a problem that people solved long ago, and I am not keen on reinventing the wheel.

asked Jul 09 '15 by cucucool

People also ask

Is the ELK stack only used for logging?

The Elastic Stack is used in infrastructure metrics and container monitoring, logging and log analytics, application performance monitoring, geospatial data analysis and visualization, security and business analytics, and scraping and combining public data.

What is ELK stack logging?

Often referred to as Elasticsearch, the ELK stack gives you the ability to aggregate logs from all your systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

How do you pull logs from Docker containers into the ELK stack, and what does the process look like?

A typical ELK pipeline in a Dockerized environment looks as follows: Logs are pulled from the various Docker containers and hosts by Logstash, the stack's workhorse that applies filters to parse the logs better. Logstash forwards the logs to Elasticsearch for indexing, and Kibana analyzes and visualizes the data.
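
To make that concrete, here is a minimal sketch of such a pipeline, assuming containers send their logs via Docker's GELF logging driver (one of several ways to get logs into Logstash) and Elasticsearch is reachable at elasticsearch:9200; the port, hostname, and grok pattern are illustrative placeholders only:

    # Sketch only: write a minimal Logstash pipeline config (hosts/ports are assumptions).
    cat > logstash.conf <<'EOF'
    input {
      # Receive container logs sent via Docker's GELF logging driver.
      gelf { port => 12201 }
    }
    filter {
      # Example filter: parse a simple "LEVEL message" log line.
      grok { match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" } }
    }
    output {
      # Index the parsed events in Elasticsearch; Kibana reads from the same indices.
      elasticsearch { hosts => ["elasticsearch:9200"] }
    }
    EOF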


2 Answers

Good questions, and the answer, as in many other cases, is "it depends".

  1. Shipping logs - we run rsyslog inside Docker containers internally and use logstash-forwarder in some cases; the advantage of logstash-forwarder is that it encrypts and compresses the logs, which matters in some setups. I find rsyslog to be very stable and light on resources, so we use it as the default shipper. Full Logstash might be too heavy for small machines (more on Logstash: http://logz.io/blog/5-logstash-pitfalls-and-how-to-avoid-them/)

  2. We're also fully Dockerized and run a separate container for each rsyslog/Lumberjack instance. That makes it easy to maintain, update versions, and move things around when needed.

  3. Yes, definitely use Redis. I wrote a blog post about how to build ELK for production (http://logz.io/blog/deploy-elk-production/), where I describe what I find to be the right architecture for deploying ELK in production (a minimal broker sketch follows this list).

  4. Not sure what exactly you are trying to achieve with that.
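
To illustrate points 1 and 3, here is a minimal, hypothetical sketch of the common shipper -> Redis broker -> indexer pattern using Logstash's redis output and input plugins. The hostnames, list key, and log path are placeholders, not the exact setup described in the linked post:

    # Shipper side (runs near the containers): push events onto a Redis list.
    cat > shipper.conf <<'EOF'
    # The file path below is an assumed location for collected container logs.
    input  { file { path => "/var/log/containers/*.log" } }
    output { redis { host => "redis" data_type => "list" key => "logstash" } }
    EOF

    # Indexer side (central): pop events off the same list and index them.
    cat > indexer.conf <<'EOF'
    input  { redis { host => "redis" data_type => "list" key => "logstash" } }
    output { elasticsearch { hosts => ["elasticsearch:9200"] } }
    EOF

    # Redis itself can run as its own container and act as the buffering broker.
    docker run -d --name redis redis

The Redis list acts as a buffer, so the indexer can fall behind briefly during bursts without the shippers dropping events.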

HTH

answered by Tomer Levy

Docker, as of Aug 2015, has "logging drivers", so you can ship logs to other destinations. These are the supported ways to ship logs remotely (a usage sketch follows the list):

  • syslog
  • fluentd
  • journald
  • gelf
  • etc.
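
For example (the endpoint addresses below are placeholders for wherever your syslog, Logstash GELF input, or Fluentd collector actually listens):

    # Ship a container's stdout/stderr to a remote syslog endpoint.
    docker run -d --log-driver=syslog \
      --log-opt syslog-address=udp://logs.example.com:514 nginx

    # Or emit GELF messages straight to a Logstash gelf input.
    docker run -d --log-driver=gelf \
      --log-opt gelf-address=udp://logstash.example.com:12201 nginx

    # Or forward to a Fluentd collector.
    docker run -d --log-driver=fluentd \
      --log-opt fluentd-address=fluentd.example.com:24224 nginx

With a logging driver, the Docker daemon does the shipping, so the containers themselves do not need a log forwarder inside them.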
answered by Kazuki Ohta