I have a set of dockerized applications scattered across multiple servers, and I'm trying to set up production-level centralized logging with ELK. I'm OK with the ELK part itself, but I'm a little confused about how to forward the logs to my Logstash instances. I'm trying to use Filebeat because of its load-balancing feature. I'd also like to avoid packing Filebeat (or anything else) into all my containers and to keep it separate, dockerized or not.
How can I proceed?
I've been trying the following. My containers log to stdout, so with a non-dockerized Filebeat configured to read from stdin I do:
docker logs -f mycontainer | ./filebeat -e -c filebeat.yml
That appears to work at the beginning. The first logs are forwarded to my Logstash (the cached ones, I guess). But at some point it gets stuck and keeps sending the same event.
Is that just a bug, or am I headed in the wrong direction? What solution have you set up?
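For reference, the Filebeat side of that pipe looks roughly like the following minimal sketch (the Logstash hosts and port are placeholders, and the exact keys vary between Filebeat versions; newer releases use filebeat.inputs with type: stdin):

filebeat:
  prospectors:
    - input_type: stdin
output:
  logstash:
    hosts: ["logstash1:5044", "logstash2:5044"]
    loadbalance: true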
In the context of shipping logs into ELK, using the syslog logging driver is probably the easiest way to go. You will need to set this per container, and the result is a pipeline of Docker container logs flowing into your syslog instance.
The important difference between Logstash and Filebeat is their scope: Logstash consumes a variety of inputs and does the heavy processing, while the specialized Beats gather the data with minimal RAM and CPU overhead. You can now use Filebeat to send logs directly to Elasticsearch, or to Logstash (no Logstash agent on each host, but you still need a Logstash server, of course).
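As a minimal sketch of the direct-to-Elasticsearch option (the host and port are placeholders), the Filebeat output section would be along these lines instead of the logstash output shown in the question:

output:
  elasticsearch:
    hosts: ["elasticsearch-host:9200"]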
Here's one way to forward docker logs to the ELK stack (requires docker >= 1.8 for the gelf log driver):
Start a Logstash container with the gelf input plugin that reads from gelf and outputs to an Elasticsearch host (ES_HOST:PORT):
docker run --rm -p 12201:12201/udp logstash \
logstash -e 'input { gelf { } } output { elasticsearch { hosts => ["ES_HOST:PORT"] } }'
Now start a Docker container and use the gelf Docker logging driver. Here's a dumb example:
docker run --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 busybox \
/bin/sh -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Load up Kibana and things that would've landed in docker logs are now visible. The gelf source code shows that some handy fields are generated for you (hat-tip: Christophe Labouisse): _container_id, _container_name, _image_id, _image_name, _command, _tag, _created.
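Those fields make it easy to slice the logs per container in Logstash or Kibana. As a small sketch (the Logstash gelf input typically strips the leading underscore, so the field shows up as container_name; the container name and Elasticsearch address are placeholders):

input { gelf { } }
filter {
  if [container_name] == "my-app" {
    mutate { add_tag => ["my-app"] }
  }
}
output { elasticsearch { hosts => ["ES_HOST:PORT"] } }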
If you use docker-compose (make sure to use docker-compose >= 1.5), add the appropriate settings to docker-compose.yml after starting the Logstash container:
log_driver: "gelf"
log_opt:
  gelf-address: "udp://localhost:12201"
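In context, those keys sit at the service level in the pre-v2 compose format; the service name and image below are placeholders:

myapp:
  image: my-application-image
  log_driver: "gelf"
  log_opt:
    gelf-address: "udp://localhost:12201"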
Docker allows you to specify the log driver in use. This answer does not address Filebeat or load balancing.
In a presentation I used syslog to forward the logs to a Logstash (ELK) instance listening on port 5000. The following command constantly sends messages through syslog to Logstash:
docker run -t -d --log-driver=syslog --log-opt syslog-address=tcp://127.0.0.1:5000 ubuntu /bin/bash -c 'while true; do echo "Hello $(date)"; sleep 1; done'
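On the Logstash side, a minimal pipeline for this setup could look like the following sketch (the syslog input listens on TCP and UDP port 5000; the Elasticsearch address is a placeholder):

input { syslog { port => 5000 } }
output { elasticsearch { hosts => ["ES_HOST:PORT"] } }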