I was trying to set up a command using Yarn that creates directories, builds my Docker images, and then launches docker-compose up.
I added a start script to my package.json to execute a shell script:
"scripts": {
"start": "./start-docker.sh",
...
}
This is the start-docker.sh:
#!/bin/bash
mkdir -p volumes/mysql volumes/wordpress
docker-compose build
docker-compose up
It didn't work at first because my containers had no rights to access the created directories.
I then added this line after the directory creation in order to give the containers full permissions:
sudo chmod -R 777 volumes
But as you can see, this command has to be executed with sudo. This means that running yarn start asks for a password, which I didn't want.
I got rid of the shell script and used Yarn scripts only:
"scripts": {
"prestart": "mkdir -p volumes/mysql volumes/wordpress && docker-compose build",
"start": "docker-compose up",
...
}
Surprisingly enough, it worked, but I don't understand why. Do you have any idea?
After some experience running "dockerized" Node containers, I suggest a different approach, which I consider better practice since it solves the issue at hand as well as some other issues that weren't raised.
Package scripts are meant to run JavaScript, either installed from dependencies or shipped as part of the package itself. The package.json file should contain the entry points of your Node application only, not of your Docker container. Instead, keep a separate bash file that you run to start Node in a "dockerized" environment, while package.json runs your Node files.
The docker image will contain something similar to the following in its final lines:
ENTRYPOINT ["yarn"]
CMD [ "start" ]
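For example (the my-app tag below is a placeholder, not something from the original post), building and running such an image executes yarn start inside the container:

# Build the image; the ENTRYPOINT/CMD above make the container run "yarn start".
docker build -t my-app .
# Run it; with no extra arguments, CMD ["start"] is appended to the "yarn" entrypoint.
docker run --rm my-app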
Since you have extra steps to do before running the image (running mkdir locally), you can run bash ./start-docker.sh from the command line, rather than going through Yarn.
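Such a start-docker.sh could look like the following (a sketch reusing the steps from the question; adjust the directories and compose setup to your project):

#!/bin/bash
# Local preparation that does not belong in package.json.
mkdir -p volumes/mysql volumes/wordpress
# Build and start the containers; the image itself ends up calling "yarn start".
docker-compose build
docker-compose up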
The Docker image will run your package.json scripts, and not the other way around.
Here are some of the added benefits of keeping Docker separated from package.json:
- You get rid of the permission issues, since running bash directly lets the script create files with whatever permissions the user who ran it has.
- It allows non-Docker users to access your Node application through the package.json scripts, just like they are used to.
- It allows Docker users to understand what the image is doing, since it eventually runs a package.json script after the setup process. It also allows changing the entry point without touching the bash or Docker files.
- The Docker CMD can be changed at runtime to run package scripts other than start (such as test, etc.); see the example after this list.
- It removes the dependency on Docker to run the project. This matters because Docker is not installed via yarn install, which would otherwise make the package a "non-real" package per se.
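For instance, overriding the CMD at runtime looks like this (my-app is again a placeholder tag, and a test script is assumed to exist in package.json):

# The trailing argument replaces CMD ["start"] and is passed to the "yarn" entrypoint.
docker run --rm my-app test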
The core issue is not in Yarn. It is in how the script is called: bash ./start-docker.sh vs ./start-docker.sh.
Running ./start-docker.sh requires the same permissions regardless of whether you run it through Yarn or not; Yarn does not do any gimmicks such as changing users.
"scripts": {
"start": "bash ./start-docker.sh",
...
}
Adding bash in front will solve the issue. Again, I really do not recommend this solution; separate Docker from your package.json.
Running ./start-docker.sh requires both the read and execute bits on the file, while bash ./start-docker.sh requires only the read bit, and setting the execute bit requires the right permissions.
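A quick way to see the difference from the command line (a sketch, assuming start-docker.sh sits in the current directory):

ls -l start-docker.sh       # shows whether the execute (x) bit is set
bash ./start-docker.sh      # needs only the read bit
chmod +x start-docker.sh    # sets the execute bit (requires owning the file, or sudo)
./start-docker.sh           # now runs directly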
You can read more about it here.