 

Docker container save logs on the host directory

I have a question similar to this one. When I run my docker-compose.yml file, it automatically builds a Docker image and, as specified in my Dockerfile, runs certain apps. These apps produce some logs, but these logs are written inside the Docker container, in the /home/logs/ folder.

How can I make these logs be written outside the container, to /path/on/host on the host? Simply because if the container fails, I need to see the logs and not lose them!

This is my docker-compose.yml:

version: '3'
services:
  myapp:
    build: .
    image: myapp
    ports:
      - "9001:9001"

and here is my dockerfile:

FROM java:latest
COPY myapp-1.0.jar /home
CMD java -jar /home/myapp-1.0.jar

And I simply run it on the production machine with docker-compose up -d.

(BTW, I'm new to Docker. Are all of my steps correct? Am I missing anything? Everything seems fine and myapp is running, though!)

asked Jan 26 '19 by Tina J

People also ask

Where are docker logs stored on host?

By default, Docker stores log files in a dedicated directory on the host using the json-file log driver. The log file directory is /var/lib/docker/containers/<container_id> on the host where the container is running.

How do I persist docker container logs?

You can persist all container log files by creating a volume mount point to the Docker host machine or central log server. Since every container has its own unique log folder (containerType_containerId), you can simply mount all container log directories (*/logs/) to the same path on your host machine.
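In Compose terms, that pattern amounts to giving each service its own subdirectory under a single host logs folder. A minimal sketch (the service names, images, and paths here are illustrative, not from the question):

```yaml
version: '3'
services:
  myapp:
    image: myapp
    volumes:
      - ./logs/myapp:/home/logs
  otherapp:
    image: otherapp
    volumes:
      - ./logs/otherapp:/home/logs
```

With this layout, everything under ./logs/ on the host survives container removal, one subdirectory per service.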

Where are docker console logs stored?

The logging driver lets you choose how and where to ship your log data. The default logging driver, as mentioned above, writes a JSON file on the local disk of your Docker host: /var/lib/docker/containers/[container-id]/[container-id]-json.log.
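For reference, each line of that json-file log is a standalone JSON object with log, stream, and time fields. This sketch parses one such line (the sample content is made up, not from a real container):

```python
import json

# One line from a json-file driver log, as stored at
# /var/lib/docker/containers/<container-id>/<container-id>-json.log
# on the host. The sample line below is illustrative.
sample = '{"log":"app started\\n","stream":"stdout","time":"2019-01-26T00:00:00.000000000Z"}'

entry = json.loads(sample)

# Each entry records the message, which stream it came from,
# and a nanosecond-precision timestamp.
print(entry["stream"], entry["log"].rstrip())
```

Reading these files directly is mainly useful for debugging; `docker logs <container>` reads the same data for you.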

Does docker container save data?

Docker uses storage drivers to store image layers, and to store data in the writable layer of a container. The container's writable layer does not persist after the container is deleted, but is suitable for storing ephemeral data that is generated at runtime.


2 Answers

All you need is a Docker volume in order to persist the log files. So in the same directory as your docker-compose.yml, create a logs directory, then define a volume mount. When defining a mount, remember the syntax is <host_machine_directory>:<container_directory>.

Give the following volume a try and let me know what you get back.

version: '3'
services:
  myapp:
    build: .
    image: myapp
    ports:
      - "9001:9001"
    volumes:
      - ./logs:/home/logs

Also worth noting that persistence goes both ways with this approach: any changes made to the files from within the container are reflected back onto the host, and any changes made on the host are also reflected inside the container.

answered Oct 04 '22 by Nebri


  • Yes, you can mount a volume from the host into the container as outlined in the answer above, using a bind mount.

  • In production, I would highly recommend sending the logs of all containers to a central location, such as an ELK stack, so that even if the whole Docker host goes down you still have access to the logs, and you can easily analyze and filter them, set watchers on log errors, and build a dashboard.

https://docs.docker.com/config/containers/logging/configure/

For this to work, you need to configure the app to send its logs to stdout instead, and then configure the Docker daemon to forward logs to one of your endpoints, such as Logstash. You can then configure Logstash to do some pre-processing (if needed) and stream the logs into your Elasticsearch instance.
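On the daemon side, that forwarding is a logging-driver setting in /etc/docker/daemon.json. A sketch using the gelf driver, assuming a Logstash GELF input listening at the (hypothetical) address below:

```json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logstash.example.com:12201"
  }
}
```

Restart the Docker daemon after editing this file; newly started containers will then ship their stdout/stderr to Logstash instead of local JSON files. The same driver can also be set per container with `docker run --log-driver`.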

Going one step further, you might consider a container management system such as Kubernetes, with central logging to ELK and metering to Prometheus.

answered Oct 04 '22 by Ijaz Ahmad