I'm using the nginx method of symlinking to /dev/stdout for any log files that I want to appear in 'docker logs', however this is not working.
I have tested this with a simple cronjob in /etc/crontab: if a symlink (pointing to /dev/stdout) is present, nothing gets written (as far as I can tell), but if I delete the symlink it writes to the file.
Also, if I echo into /dev/stdout it is echoed back on the command line, but it isn't found in 'docker logs'...
Question: Should this work? (It seems to work with nginx). Else, how would I get logs from 'secondary' processes to appear in docker logs.
For ref:
Nginx Dockerfile showing the symlinking method: https://github.com/nginxinc/docker-nginx/blob/a8b6da8425c4a41a5dedb1fb52e429232a55ad41/Dockerfile
Created an official bug report for this: https://github.com/docker/docker/issues/19616
My Dockerfile:
FROM ubuntu:trusty
#FROM quay.io/letsencrypt/letsencrypt:latest # For testing
ENV v="Fri Jan 22 10:08:39 EST 2016"
# Setup the cronjob
ADD crontab /etc/crontab
RUN chmod 600 /etc/crontab
# Setup letsencrypt logs
RUN ln -sf /dev/stdout /var/log/letsencrypt.log
# Setup cron logs
RUN ln -sf /dev/stdout /var/log/cron.log
RUN ln -sf /dev/stdout /var/log/syslog
# Setup keepalive script
ADD keepalive.sh /usr/bin/keepalive.sh
RUN chmod +x /usr/bin/keepalive.sh
ENTRYPOINT /usr/bin/keepalive.sh
The crontab file:
* * * * * root date >> /var/log/letsencrypt.log
keepalive.sh script
#!/bin/bash
# Start cron
rsyslogd
cron
echo "Keepalive script running!"
while true; do
echo 'Sleeping for an hour...'
sleep 10
done
Docker daemon logs are generated by the Docker platform and located on the host. Depending on the host operating system, daemon logs are written to the system's logging service or to a log file. Collecting only container logs gives you insight into the state of your services, but not into the Docker daemon itself.
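For example, on a systemd-based host the daemon logs usually land in the journal; the upstart path below is just the common default for older Ubuntu hosts:
journalctl -u docker.service        # systemd hosts
less /var/log/upstart/docker.log    # older Ubuntu (upstart) hosts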
An Ubuntu 18.04 Docker image will require a one-time configuration of Supervisord to get started with process monitoring. Supervisord is a process control system that will run and manage your command-based programs from a simple configuration file.
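A minimal sketch of such a configuration, assuming cron is the program you want supervised (the paths and program names here are illustrative, not taken from the question):
[supervisord]
nodaemon=true

[program:cron]
; run cron in the foreground and send its output to supervisord's own stdout
command=cron -f
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
The Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"], so supervisord itself is PID 1 and its children's output reaches 'docker logs'.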
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect. If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
A container's main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container.
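As an illustration of that JSON-array form (the --interval argument is hypothetical, purely to show where CMD's default arguments go):
ENTRYPOINT ["/usr/bin/keepalive.sh"]
CMD ["--interval", "10"]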
The end result is that /dev/stdout for the cron job pointed to the wrong device: it resolved to /proc/self/fd/1 when it should have been /proc/1/fd/1, because Docker only expects one process to be running and that process's stdout (PID 1) is the only one it monitors.
So once I had modified the symlinks to point at /proc/1/fd/1 it should have worked, however AppArmor (on the host) was actually denying the requests (I was getting permission errors when echoing to /proc/1/fd/1) because of the default Docker profile (which is automatically generated, but can be modified with --security-opt).
Once over the apparmor hurdle it all works!
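Roughly, the working setup looked like this (the --security-opt line is the blunt fix of disabling the default profile entirely; adjusting the profile itself is the finer-grained alternative, and myimage is just a placeholder tag):
# In the Dockerfile: point the log files at PID 1's stdout instead of /dev/stdout
RUN ln -sf /proc/1/fd/1 /var/log/letsencrypt.log
RUN ln -sf /proc/1/fd/1 /var/log/cron.log

# On the host: run the container without the default AppArmor profile
docker run -d --security-opt apparmor=unconfined myimage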
That said, after looking at what would need to be modified in AppArmor to allow the required access, I decided to use the mkfifo method as shown below.
Dockerfile
FROM ubuntu:latest
ENV v="RAND-4123"
# Run the wrapper script (to keep the container alive)
ADD daemon.sh /usr/bin/daemon.sh
RUN chmod +x /usr/bin/daemon.sh
# Create the pseudo log file to point to stdout
RUN mkfifo /var/log/stdout
RUN mkfifo /var/log/stderr
# Create a cronjob to echo into the logfile just created
RUN echo '* * * * * root date 2>/var/log/stderr 1>/var/log/stdout' > /etc/crontab
CMD "/usr/bin/daemon.sh"
daemon.sh
#!/bin/bash
# Start cron
cron
tail -qf --follow=name --retry /var/log/stdout /var/log/stderr
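To verify (the cron-fifo tag is just a placeholder), build and run the image, then watch docker logs; the date line should appear once cron fires on the next minute boundary:
docker build -t cron-fifo .
docker run -d --name cron-fifo cron-fifo
docker logs -f cron-fifo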
Well, it was mentioned in the comments, but for reference: I find the best solution to Docker logging is generally to rely on the 'standard' multi-system logging mechanisms, specifically syslog, as much as possible.
This is because you can either use the inbuilt syslogd on your host, or use logstash as a syslogd. Logstash has an inbuilt syslog input, but that tends to suffer a bit from not being flexible enough, so instead I use a TCP/UDP listener and parse the logs explicitly, as outlined in "When logstash and syslog goes wrong":
input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}
And then filter the log:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    syslog_pri { }
  }
}
You can then feed this logstash output into elasticsearch, either on a remote host, in a local container, or (what I'm doing now) in a docker network with a multi-node elasticsearch instance. (I've rolled my own using a download and a Dockerfile, but I'm pretty sure a standalone container exists too.)
output {
  elasticsearch {
    hosts => [ "es-tgt" ]
  }
}
The advantage here is that docker lets you use either --link or --net to specify the name of your elasticsearch container, so you can just alias the logstash config to point to the right location (e.g. docker run -d --link my_es_container_name:es-tgt -p 514:514 -p 514:514/udp mylogstash, or just docker run --net es_net ....).
The docker network setup is slightly more convoluted, in that you need to set up a key-value store (I used etcd, but other options are available). Or you can do something like Kubernetes.
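For the etcd-backed route, the setup looked roughly like this (etcd-host is a placeholder; the --cluster-store/--cluster-advertise daemon flags belong to the Docker versions of that era, and newer releases do multi-host networking through swarm mode instead):
# Point each Docker daemon at the shared key-value store
docker daemon --cluster-store=etcd://etcd-host:2379 --cluster-advertise=eth0:2376

# Then create the multi-host overlay network once, from any node
docker network create -d overlay es_net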
And then use kibana to visualise, again exposing the kibana port, but forwarding onto the elasticsearch network to talk to the cluster.
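A sketch of that step, assuming the official kibana image and the es-tgt name used in the output config above (ELASTICSEARCH_URL is the variable that image used to locate the cluster):
docker run -d --net es_net -p 5601:5601 -e ELASTICSEARCH_URL=http://es-tgt:9200 kibana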
But once this is set up, you can configure nginx to log to syslog, along with anything else you want to routinely capture logging from. The real advantage IMO is that you're using a single service for logging, one which can be scaled (thanks to the networking/containerisation) according to your need.
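For example, nginx's own syslog support (available since 1.7.1) can ship both logs straight to the listener defined above; logstash-host is a placeholder for wherever the logstash container is reachable:
error_log  syslog:server=logstash-host:514 warn;
access_log syslog:server=logstash-host:514,tag=nginx combined;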