My configuration:
My process:
My little PROBLEM:
When running a simple chown over a hugely populated folder such as node_modules, the container's memory usage goes through the roof, crashing not just the container but my whole server....
I've tried:
Setting runArgs with:
Setting a ulimit in my Dockerfile, for the user that later runs the chown command.
Deleting all images and containers on the Docker host to force a rebuild from scratch (I haven't found how to pass --no-cache either :-/ )
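For context, this is roughly what the runArgs attempt looked like in .devcontainer/devcontainer.json (a sketch; the service name and limit values here are placeholders, not from my actual repo):

```jsonc
{
  "name": "my-dev-container",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  // These flags would map to `docker run` options, but (as it turns out
  // below) they are silently ignored when dockerComposeFile is set.
  "runArgs": ["--memory=2g", "--memory-swap=2g"]
}
```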
HELP! Nothing works... Does anyone have a clue about what I could do to prevent the container from consuming all the memory on the server?
Repo with config: https://github.com/gsusI/vscode-remote_dev-config_test
In case anyone finds themselves in the same infinite loop: I found the issue!!
It appears that runArgs is not used when the container is started via Docker Compose, so any configuration there has no effect.
I know!! You'd expect a warning somewhere, right?
The next best option is to set the limits through the docker-compose.yml file, right? Well, that is only true if you're using Compose file format version 2, since in version 3 resource limits only take effect under Docker Swarm. In my case, I switched to version 2, and now everything works smoothly.
TL;DR
Your docker-compose.yml file should look like this:

```yaml
version: '2'
services:
  <your-service-name>:
    ...
    mem_limit: 2g
    mem_reservation: 2g
```
Check this for syntax hints: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
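For reference, here is what the equivalent looks like in Compose file format version 3 (a sketch with a placeholder service name): the limits move under `deploy.resources`, which plain `docker-compose up` ignores, since `deploy` is only honored by `docker stack deploy` on Swarm. That's exactly why switching back to version 2 matters here:

```yaml
version: '3.7'
services:
  <your-service-name>:
    deploy:
      resources:
        limits:
          memory: 2g
        reservations:
          memory: 2g
```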