How to ship logs in a microservice architecture with docker?

Heroku describes logs in its Twelve-Factor App manifest as simple event streams:

Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services. Logs in their raw form are typically a text format with one event per line (though backtraces from exceptions may span multiple lines). Logs have no fixed beginning or end, but flow continuously as long as the app is operating.

Additionally, apps should simply write logs to stdout, leaving the task to the "environment".

A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.

In staging or production deploys, each process’ stream will be captured by the execution environment, collated together with all other streams from the app, and routed to one or more final destinations for viewing and long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment. Open-source log routers (such as Logplex and Fluent) are available for this purpose.

So what's the best way to achieve this in a docker environment in terms of reliability, efficiency and ease of use? I think the following questions come to mind:

  • Is it safe to rely on Docker's own log facility (docker logs)?
  • Is it safe to run docker undetached and consider its output as the logging stream?
  • Can stdout be redirected to a file directly (disk space)?
  • If using a file, should it be inside the docker image or a bound volume (docker run --volume=[])?
  • Is logrotation required?
  • Is it safe to redirect stdout directly into a logshipper (and which logshipper)?
  • Is a named pipe (aka FIFO) an option?
  • (more questions?)
asked Jul 17 '14 by sfussenegger

1 Answer

Docker 1.6 introduced the notion of logging drivers to offer more control over log output. The --log-driver flag configures where stdout & stderr from the process running in a container should be directed. See also Configuring Logging drivers.

Several drivers are available. Note that all of these except json-file disable the use of docker logs to gather container logs.

  • none - disable container logs.
  • json-file - Behaves as it did previously, with json formatted stdout available in /var/lib/docker/containers/<containerid>/<containerid>-json.log
  • syslog - Writes messages to syslog. Accepts a --log-opt syslog-address option to direct log messages to a remote syslog endpoint over TCP, UDP or a Unix domain socket. Like the other non-json-file drivers, it disables docker logs.
  • journald - Writes to the systemd journal.
  • *gelf - Graylog Extended Log Format (GELF). Writes log messages to a GELF endpoint such as Graylog or Logstash.
  • *fluentd - Sends container logs to fluentd. Accepts options to customize the address of the fluentd daemon and to attach tags to log messages.
  • **awslogs - Writes log messages to AWS CloudWatch Logs

* New in Docker 1.8

** New in Docker 1.9

For example:

docker run --log-driver=syslog --log-opt syslog-address=tcp://10.0.0.10:1514 ... 
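The gelf and fluentd drivers are configured the same way through --log-opt; a sketch with placeholder endpoints (the exact option names and values may vary slightly between Docker releases):

docker run --log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201 ... 

docker run --log-driver=fluentd --log-opt fluentd-address=fluentdhost:24224 ... 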

This is the Docker-recommended solution for software that writes its log messages to stdout & stderr. Some software, however, does not write to stdout/stderr; it writes to log files or to syslog instead. In those cases, some of the details from the original answer below still apply. To recap:

If the app writes to a local log file, mount a volume from the host (or use a data-only container) into the container and have the app write its log messages to that location.
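A minimal sketch of both variants, assuming the app writes to /var/log/myapp inside the container (paths, image and container names are hypothetical). Mount a host directory:

docker run -v /var/log/myapp:/var/log/myapp myimage 

or use a data-only container and share its volume:

docker create -v /var/log/myapp --name myapp-logs busybox 

docker run --volumes-from myapp-logs myimage 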

If the app writes to syslog, there are several options:

  • Send to the host's syslog by mounting the host's syslog socket (/dev/log) into the container with -v /dev/log:/dev/log (see the sketches after this list).
  • If the app accepts a syslog endpoint in its configuration, configure the host's syslog daemon to listen over TCP and/or UDP on the Docker bridge network, and use that endpoint. Or just send to a remote syslog host.
  • Run a syslog daemon in a container, and use Docker links to access it from other running containers.
  • Use logspout to automatically route container logs to a remote syslog via UDP (also sketched after this list).
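Sketches of the first and last options; the hostnames are placeholders, and the logspout image name and endpoint URI syntax depend on which logspout build you run. Share the host's syslog socket:

docker run -v /dev/log:/dev/log myimage 

Or route all container stdout to a remote syslog with logspout, which reads logs through the Docker socket:

docker run -d --name logspout -v /var/run/docker.sock:/var/run/docker.sock gliderlabs/logspout syslog+udp://logs.example.com:514 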

Don't forget that any logs within a container should be rotated just as they would on a host OS.
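For the json-file driver specifically, later Docker releases can rotate the files themselves via --log-opt; a sketch (the sizes and file count are arbitrary):

docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 ... 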


Original Answer for Docker pre-1.6

Is it safe to rely on Docker's own log facility (docker logs)?

docker logs prints the entire stream each time, not just new logs, so it's not appropriate. docker logs --follow will give tail -f-like functionality, but then you have a docker CLI command running all the time. Thus while it is safe to run docker logs, it's not optimal.

Is it safe to run docker undetached and consider its output as the logging stream?

You can start containers with systemd and run docker run in the foreground (not daemonized), so all of the stdout is captured in the systemd journal, which the host can then manage however you'd like.
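A minimal sketch of such a unit (e.g. saved as /etc/systemd/system/myapp.service); the myapp container and myimage image are hypothetical. Because docker run is not given -d, its stdout/stderr flow into the journal:

[Unit]
Description=myapp container
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp myimage
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target

journalctl -u myapp then shows the container's output alongside the rest of the host's journal.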

Can stdout be redirected to a file directly (disk space)?

You could do this with docker run ... > logfile of course, but it feels brittle and harder to automate and manage.

If using a file, should it be inside the docker image or a bound volume (docker run --volume=[])?

If you write inside the container then you need to run logrotate or something in the container to manage the log files. Better to mount a volume from the host and control it using the host's log rotation daemon.
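A minimal host-side logrotate sketch, assuming the volume is mounted at /var/log/myapp on the host (hypothetical path), dropped into e.g. /etc/logrotate.d/myapp. copytruncate is used because the process inside the container keeps the file open:

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}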

Is logrotation required?

Sure, if the app writes logs you need to rotate them just as in a native OS environment. But it's harder if you write inside the container, since the log file location isn't as predictable. If you rotate on the host, the log file would live under a path like /var/lib/docker/devicemapper/mnt/<containerid>/rootfs/... (with devicemapper as the storage driver). Some ugly wrapper would be needed for logrotate to find the logs under that path.

Is it safe to redirect stdout directly into a logshipper (and which logshipper)?

Better to have the app write to syslog and let the log shipper collect from syslog.
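For example, with rsyslog on the host a single forwarding rule (e.g. in /etc/rsyslog.d/50-forward.conf, with a hypothetical collector address; @@ means TCP, a single @ would be UDP) ships everything on to the collector:

*.* @@logs.example.com:514 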

Is a named pipe (aka FIFO) an option?

A named pipe isn't ideal because if the reading end of the pipe dies, the writer (the container) will get a broken pipe. Even if that event is handled by the app, it will be blocked until there is a reader again. Plus it circumvents docker logs.

See also this post on fluentd with docker.

See Jeff Lindsay's tool logspout that collects logs from running containers and routes them however you want.

Finally, note that the container's stdout is logged to a file on the host at /var/lib/docker/containers/<containerid>/<containerid>-json.log.
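Rather than building that path by hand, docker inspect can report it when the default json-file driver is in use; the container id below is a placeholder:

docker inspect --format '{{.LogPath}}' <containerid> 

tail -f $(docker inspect --format '{{.LogPath}}' <containerid>) 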

answered by Ben Whaley