How we do things now
We have an NFS file server that multiple web servers mount and use as their web root. When we deploy our codebase, we scp a tar.gz archive to the NFS server and unpack it directly into the web directory on the file server.
The issue
During the deploy process we see some I/O errors, mostly when a requested file cannot be read: Smarty error: unable to read resource: "header.tpl"
These errors go away once the deploy finishes, so we assume the cause is unpacking the archive directly into the live web directory: requests can hit files that are only half-written. We probably need something atomic.
My Question
How can we atomically copy new files into an existing directory (the web server's root directory)?
EDIT
The files we are uncompressing into the web directory are not the only files there; we are adding files to a directory that already contains others. So copying the whole directory or using a symlink is not an option (that I know of).
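One way to get per-file atomicity without touching the directory layout is to unpack the archive into a staging directory on the same filesystem as the web root, then mv each file into place. Within a single filesystem, mv is a rename(2), which is atomic: a reader sees either the old file or the new one, never a truncated one. A minimal sketch, with hypothetical /tmp paths standing in for the NFS mount (on real NFS the rename is atomic on the server, though client attribute caching can delay visibility):

```shell
#!/bin/sh
# Sketch of per-file atomic replacement. All paths are illustrative.
set -e
WEBROOT=/tmp/demo_webroot     # stands in for the NFS web directory
STAGE=/tmp/demo_stage         # staging dir on the SAME filesystem
rm -rf "$WEBROOT" "$STAGE" /tmp/demo_src

# Fake an existing web root and a release archive for the demo.
mkdir -p "$WEBROOT"
echo "old header" > "$WEBROOT/header.tpl"
mkdir -p /tmp/demo_src
echo "new header" > /tmp/demo_src/header.tpl
tar -czf /tmp/demo_release.tar.gz -C /tmp/demo_src .

# 1. Unpack into the staging directory, NOT into the live web root.
mkdir -p "$STAGE"
tar -xzf /tmp/demo_release.tar.gz -C "$STAGE"

# 2. Move each file over its destination. mv within one filesystem
#    is rename(2), so each file flips from old to new in one step.
(cd "$STAGE" && find . -type f) | while IFS= read -r f; do
  mkdir -p "$WEBROOT/$(dirname "$f")"
  mv -f "$STAGE/$f" "$WEBROOT/$f"
done
rm -rf "$STAGE"

cat "$WEBROOT/header.tpl"   # -> new header
```

This is atomic per file, not for the release as a whole: mid-deploy, a request can still see a mix of old and new files, but never a partially written one.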
Here's what I do.
DocumentRoot is, for example, /var/www/sites/www.example.com/public_html/:
cd /var/www/sites/www.example.com/
svn export http://svn/path/to/tags/1.2.3 1.2.3
ln -snf 1.2.3 public_html
You could easily modify this to unpack your .tar.gz before changing the symlink instead of exporting from svn. The important part is that the change goes live in one atomic step: the swap of the symlink.
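The tarball variant might look like the sketch below, with hypothetical /tmp paths and a made-up release label. One refinement: on some platforms `ln -snf` replaces the link by unlinking and re-creating it, which leaves a brief gap; creating the new link under a temporary name and renaming it over the old one (GNU mv's `-T`) is atomic via rename(2).

```shell
#!/bin/sh
# Tarball variant of the symlink swap. All paths are illustrative.
set -e
SITE=/tmp/demo_site      # stands in for /var/www/sites/www.example.com
RELEASE=1.2.4            # hypothetical version label
rm -rf "$SITE" /tmp/demo_pkg

# Fake an existing live release and a new release archive.
mkdir -p "$SITE/1.2.3"
echo "old" > "$SITE/1.2.3/index.html"
ln -s 1.2.3 "$SITE/public_html"
mkdir -p /tmp/demo_pkg
echo "new" > /tmp/demo_pkg/index.html
tar -czf /tmp/demo_release.tar.gz -C /tmp/demo_pkg .

# 1. Unpack the release into its own versioned directory,
#    never into the live document root.
mkdir -p "$SITE/$RELEASE"
tar -xzf /tmp/demo_release.tar.gz -C "$SITE/$RELEASE"

# 2. Swap the symlink atomically: make the new link under a
#    temporary name, then rename(2) it over the old link.
ln -s "$RELEASE" "$SITE/public_html.tmp"
mv -T "$SITE/public_html.tmp" "$SITE/public_html"

cat "$SITE/public_html/index.html"   # -> new
```

Requests resolve the symlink at the moment they arrive, so every request is served entirely from one release or the other; there is no window where a half-extracted tree is visible.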