Currently we are redirecting all application logs from multiple containers to stdout, and collecting /var/log/messages on the host via rsyslog into an ELK stack.
All Docker container logs show up as docker/xxxxxxxx, so we can't tell which application a given log line belongs to. Is there an easy way to differentiate the applications in the stdout logs coming from multiple containers?
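Since the logs reach ELK through rsyslog, one hedged aside worth noting: Docker's syslog logging driver can tag each container's entries with its name, which would make them distinguishable in /var/log/messages. A minimal sketch (myapp is just an illustrative image name):

docker run --log-driver=syslog --log-opt tag="{{.Name}}" myapp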
(Instructions for OS X but should work in Linux)
There doesn't appear to be a way to do this with a single docker command. However, in bash you can run multiple commands at the same time, and with sed you can prefix each line with the container name.
docker logs -f --tail=30 container1 | sed -e 's/^/[-- containerA1 --] /' &
docker logs -f --tail=30 container2 | sed -e 's/^/[-- containerM2 --] /' &
And you will see output from both containers at the same time.
[-- containerA1 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerM2 --] :: logging line
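Note that docker logs streams a container's stderr separately, so the pipe above only prefixes stdout. To prefix both streams, merge them first; in bash 4+, |& is shorthand for 2>&1 |, which is what the Ubuntu 16.04 variant in the script below relies on:

docker logs -f --tail=30 container1 2>&1 | sed -e 's/^/[-- containerA1 --] /' &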
To tail all your containers at once:
#!/bin/bash
names=$(docker ps --format "{{.Names}}")
echo "tailing $names"
while read -r name
do
  # eval so that the container name shows up in the jobs list
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/[-- $name --] /\" &"
  # For Ubuntu 16.04, use |& so stderr is prefixed as well:
  #eval "docker logs -f --tail=5 \"$name\" |& sed -e \"s/^/[-- $name --] /\" &"
done <<< "$names"
function _exit {
  echo
  echo "Stopping tails $(jobs -p | tr '\n' ' ')"
  echo "..."
  # Using `sh -c` so that if some tails have already exited,
  # the resulting error will not prevent the rest from being killed.
  jobs -p | tr '\n' ' ' | xargs -I % sh -c "kill % || true"
  echo "Done"
}
# On ctrl+c, kill all tails started by this script.
trap _exit EXIT
# For Ubuntu 16.04
#trap _exit INT
# Don't exit this script until ctrl+c or all tails exit.
wait
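Saved as an executable script (tail-all.sh is just a name picked for illustration), it runs like this:

chmod +x tail-all.sh
./tail-all.sh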
And to stop them, run fg and then press Ctrl+C for each container.
Update: Thanks to @Flo-Woo for Ubuntu 16.04 support
Here is a script that tails all Docker containers.
Based on the answer by @nate, but a bit shorter. Tested on CentOS.
#!/bin/bash
# Kill all background tails when the script exits
function _exit {
  kill $(jobs -p)
}
trap _exit EXIT

# Start one prefixed tail per running container
for name in $(docker ps --format "{{.Names}}"); do
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/[-- $name --] /\" &"
done
wait
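As an aside, if the containers are managed with Docker Compose, docker-compose logs already prefixes every line with its service name, so no script is needed:

docker-compose logs -f --tail=5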