I've found two common approaches to automatically deploying website updates using a bare remote repo.
The first requires the repo to be cloned into the web server's document root; the bare repo's post-update hook then runs a git pull:
#!/bin/sh
# post-update hook on the bare repo: pull the latest commits into the live checkout
cd /srv/www/siteA/ || exit
unset GIT_DIR          # hooks run with GIT_DIR pointing at the bare repo
git pull hub master
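For reference, the one-time setup that pairs with such a hook might look like the sketch below, assuming the bare repository lives at /srv/git/siteA.git (a placeholder path) and that the clone's remote is named hub to match the hook's git pull hub master.

# one-time setup on the web server; /srv/git/siteA.git is an assumed path for the bare repo
git clone -o hub /srv/git/siteA.git /srv/www/siteA
# the -o flag names the remote "hub", so the hook's "git pull hub master" resolves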
The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work tree, which is the web server's document root, i.e.
GIT_WORK_TREE=/srv/www/siteA/ git checkout -f
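Written out as a complete hook file, that might look like the sketch below; it lives in the bare repository's hooks directory and must be executable (the work-tree path is the one used in this question):

#!/bin/sh
# hooks/post-receive in the bare repo; make it executable with: chmod +x hooks/post-receive
GIT_WORK_TREE=/srv/www/siteA/ git checkout -f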
The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (although files should not be edited on the live server). The second approach has the advantage that the git directory is not inside the document root, but exposing .git in the first approach is easily prevented with an .htaccess rule.
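As a rough illustration of that mitigation (assuming Apache with mod_alias), one commonly used rule in the document root's .htaccess answers 404 for any path that touches the .git directory:

RedirectMatch 404 /\.git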
Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?
In terms of release management (here, deployment), it is best to have a target environment that is independent of the release mechanism.
In other words, the second solution (checkout -f) will populate a plain web directory structure, without any other subdirectories which shouldn't be part of it (like a .git folder).
I use it, for instance, in "using git to deploy my node.js app to my production server".
That minimizes side-effects and allows the production environment to work with just what it needs to run, without interference.
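To make the workflow concrete, a hypothetical end-to-end setup for the second approach could look like the sketch below; the repository paths, the remote name production, and the branch are illustrative, not prescriptive.

# on the server: a bare repo plus an empty document root (names are illustrative)
git init --bare /srv/git/siteA.git
mkdir -p /srv/www/siteA
cat > /srv/git/siteA.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/srv/www/siteA/ git checkout -f
EOF
chmod +x /srv/git/siteA.git/hooks/post-receive

# on the developer's machine: deploying is then just a push
git remote add production user@server:/srv/git/siteA.git
git push production master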