I have the following in a docker-compose.yml
web:
  image: my_web
  build:
    context: ./
    dockerfile: web.docker
  container_name: my_web
  networks:
    - front
  ports:
    - "80:8080"
  volumes:
    - wwwlogs:/var/logs/www
  env_file:
    - ${SERVICE_ENVIRONMENT}.env
  links:
    - revproxy
  logging:
    driver: awslogs
    options:
      awslogs-group: my-web-group
      awslogs-region: us-east-1
      awslogs-stream-prefix: my-web
This works fine in production and sends everything off to CloudWatch as expected. However, I'm not clear how this is supposed to work when I want to use the same compose file locally (do not send to AWS, just log to STDOUT/STDERR) and in staging (where I want to send to a different awslogs-group/-prefix).
Any thoughts? In general I'm not a fan of having separate compose files for each environment — duplicated configuration increases the likelihood that something will get missed or not maintained properly. But Docker seems to have limited ability to conditionally provision things.
This is more of a limitation in Docker: you can't specify multiple logging drivers for a container. Sending logs to different destinations from a single docker-compose file is more complicated because Docker doesn't support it directly, but it's doable.
For example, you can use the Fluentd logging driver and start a separate sidecar container running Fluentd. In the Fluentd configuration you can then create routing rules based on the environment: dev routes to stdout, while prod routes to CloudWatch using something like the fluent-plugin-cloudwatch-logs plugin.
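As a rough sketch of that routing idea, the Fluentd config below receives logs forwarded by the Docker fluentd log driver and sends them either to stdout or to CloudWatch. The log group, stream name, region, and tag pattern are assumptions carried over from the question; in practice you would ship a different config file per environment (or template one):

```
# fluent.conf — receive logs from the Docker fluentd log driver
<source>
  @type forward
  port 24224
</source>

# Dev: just echo everything to the container's stdout
# <match docker.**>
#   @type stdout
# </match>

# Prod: forward to CloudWatch via fluent-plugin-cloudwatch-logs
# (group/stream/region here mirror the awslogs options from the question)
<match docker.**>
  @type cloudwatch_logs
  log_group_name my-web-group
  log_stream_name my-web
  region us-east-1
</match>
```

Only one `<match>` block would be active in a given environment; swapping the mounted config file is what switches the destination.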
This is another example of how to configure Fluentd with docker-compose.
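To make the sidecar setup concrete, here is a minimal docker-compose sketch, assuming a standard Fluentd image and a local `./fluentd/conf` directory holding the config above; the image tag and the `docker.my-web` tag are illustrative, not from the original post:

```
fluentd:
  image: fluent/fluentd:v1.16-1
  volumes:
    - ./fluentd/conf:/fluentd/etc
  ports:
    - "24224:24224"

web:
  image: my_web
  logging:
    driver: fluentd
    options:
      fluentd-address: localhost:24224
      tag: docker.my-web
```

The `web` container's stdout/stderr is shipped to the `fluentd` sidecar, and from there the Fluentd config decides per environment whether logs go to stdout or CloudWatch.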