Recently I have been trying to find the best Docker logging mechanism using the ELK stack, and I have some questions about the workflow that companies use in production. Our system has a typical software stack including Tomcat, PostgreSQL, MongoDB, Nginx, RabbitMQ, Couchbase, etc., and it currently runs on a CoreOS cluster. Please find my questions below.
This is a subjective question, but I am sure it is a problem that people solved long ago, and I am not keen on reinventing the wheel.
The Elastic Stack is used for infrastructure metrics and container monitoring, logging and log analytics, application performance monitoring, geospatial data analysis and visualization, security and business analytics, and scraping and combining public data.
Built around Elasticsearch, the ELK stack gives you the ability to aggregate logs from all your systems and applications, analyze them, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.
A typical ELK pipeline in a Dockerized environment looks as follows: logs are pulled from the various Docker containers and hosts by Logstash, the stack's workhorse, which applies filters to parse and enrich them. Logstash then forwards the logs to Elasticsearch for indexing, and Kibana analyzes and visualizes the data.
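For concreteness, here is a minimal sketch of such a pipeline configuration. The port, file path, and host names below are illustrative assumptions, not values from the question:

    # Write a minimal Logstash pipeline: receive logs over syslog,
    # then index them into Elasticsearch for Kibana to query.
    cat > /etc/logstash/conf.d/docker-pipeline.conf <<'EOF'
    input {
      syslog { port => 5000 }    # shippers send container logs here
    }
    filter {
      # Parse free-form messages; extend with grok patterns per log format.
      grok { match => { "message" => "%{GREEDYDATA:msg}" } }
    }
    output {
      elasticsearch { hosts => ["elasticsearch:9200"] }
    }
    EOF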
Good questions, and the answer, as in many other cases, is: "it depends".
Shipping logs - we run rsyslog inside Docker containers, and logstash-forwarder in some cases. The advantage of logstash-forwarder is that it encrypts and compresses the logs, which matters in some setups. I find rsyslog to be very stable and light on resources, so we use it as the default shipper. A full Logstash instance can be too heavy for small machines (some more data about Logstash: http://logz.io/blog/5-logstash-pitfalls-and-how-to-avoid-them/).
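As a rough sketch, a single rsyslog rule is enough to forward everything to a central Logstash endpoint (the host and port are placeholders):

    # Forward all facilities/priorities to Logstash over TCP
    # (@@ means TCP; a single @ would mean UDP).
    cat > /etc/rsyslog.d/50-ship-to-logstash.conf <<'EOF'
    *.* @@logstash.example.com:5000
    EOF
    systemctl restart rsyslog   # reload rsyslog to pick up the rule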
We're also fully Dockerized and run a separate container for each rsyslog/lumberjack shipper. That makes them easy to maintain, to update, and to move around when needed.
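A sketch of running such a shipper container; the image name is hypothetical (build or choose your own rsyslog image), and the mount paths are illustrative:

    # Run the shipper in its own container; mount the host's logs read-only
    # so the container only reads and forwards them.
    docker run -d --name log-shipper \
      -v /var/log:/var/log:ro \
      -v /etc/rsyslog.d/50-ship-to-logstash.conf:/etc/rsyslog.d/50-ship-to-logstash.conf:ro \
      my-rsyslog-shipper:latest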
Yes, definitely use Redis. I wrote a blog post about how to build a production ELK deployment (http://logz.io/blog/deploy-elk-production/), in which I describe what I find to be the right architecture for deploying ELK in production.
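In that architecture, Redis sits between the shippers and the indexing Logstash as a buffer, so indexing can fall behind without dropping logs. Here is a minimal sketch of the indexer side, assuming shippers push events onto a Redis list named "logstash" (the host and key names are assumptions):

    # Indexer pipeline: pop events from a Redis list and index them.
    cat > /etc/logstash/conf.d/indexer.conf <<'EOF'
    input {
      redis {
        host      => "redis"      # placeholder Redis host
        data_type => "list"
        key       => "logstash"   # shippers push events onto this list
      }
    }
    output {
      elasticsearch { hosts => ["elasticsearch:9200"] }
    }
    EOF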
I'm not sure what exactly you are trying to achieve with that.
HTH
Docker, as of August 2015, has logging drivers, so you can ship logs to other destinations; drivers such as syslog, journald, gelf, and fluentd are the supported ways to ship logs remotely.
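For example, a container's stdout/stderr can be shipped with the built-in syslog driver; the address below is a placeholder for your Logstash or rsyslog endpoint:

    # Ship this container's stdout/stderr to a remote syslog receiver
    # instead of the default json-file log on the local disk.
    docker run -d \
      --log-driver=syslog \
      --log-opt syslog-address=tcp://logstash.example.com:5000 \
      nginx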