Two main ways to deploy a J2EE/Java Web app (in a very simplistic sense):

1. We create the .war (or whatever) elsewhere, configure it for production (possibly creating numerous artifacts for numerous boxes), and place the resulting artifacts on the production servers.
2. The same process used day-to-day to build and deploy locally on developer boxes is used to deploy to production.
I've mostly used the second process, admittedly out of necessity (no time/priority for another deployment process). Personally I don't buy arguments like "the production box has to be clean of all compilers, etc.", but I can see the logic in deploying what you've tested (as opposed to building another artifact).
However, Java Enterprise applications are so sensitive to configuration, it feels like asking for trouble having two processes for configuring artifacts.
Thoughts?
Here's a concrete example:
We use OSCache, and enable the disk cache. The configuration file must be inside the .war file and it references a file path. This path is different on every environment. The build process detects the user's configured location and ensures that the properties file placed in the war is correct for his environment.
If we were to use the build process for deployment, it would be a matter of creating the right configuration for the production environment (e.g. production.build.properties).
If we were to follow the "deploy assembled artifacts to the production box", we would need an additional process to extract the (incorrect) OSCache properties and replace it with one appropriate to the production environment.
This creates two processes to accomplish the same thing.
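One way to sidestep baking the path into the war at build time is to let an external setting override the packaged default at runtime. This is a minimal sketch, not how OSCache itself resolves its config: the `-D` flag name `oscache.path` is made up for illustration, while `cache.path` is the OSCache property the question refers to.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class CacheConfig {
    /**
     * Loads the oscache.properties packaged in the war, then lets a
     * JVM system property (an illustrative name, not an OSCache
     * feature) override the environment-specific disk cache path,
     * so the same artifact can run unchanged on every box.
     */
    public static Properties load() throws IOException {
        Properties props = new Properties();
        try (InputStream in =
                CacheConfig.class.getResourceAsStream("/oscache.properties")) {
            if (in != null) {
                props.load(in); // packaged defaults, if present
            }
        }
        String override = System.getProperty("oscache.path");
        if (override != null) {
            props.setProperty("cache.path", override); // per-environment path
        }
        return props;
    }
}
```

With this pattern, production would be started with something like `-Doscache.path=/var/cache/myapp`, and the build no longer needs per-environment variants of the properties file.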
So, the questions are:
I'm firmly against building on the production box, because it means you're using a different build than you tested with. It also means every deployment machine has a different JAR/WAR file. If nothing else, do a unified build just so that when bug tracking you won't have to worry about inconsistencies between servers.
Also, you don't need to put the builds into version control if you can easily map between a build and the source that created it.
Where I work, our deployment process is as follows. (This is on Linux, with Tomcat.)
1. Test changes and check into Subversion. (Not necessarily in that order; we don't require that committed code is tested. I'm the only full-time developer, so the SVN tree is essentially my development branch. Your mileage may vary.)
2. Copy the JAR/WAR files to a production server, into a shared directory named after the Subversion revision number. The web servers only have read access.
3. The deployment directory contains relative symlinks to the files in the revision-named directories. That way, a directory listing will always show you what version of the source code produced the running version. When deploying, we update a log file which is little more than a directory listing. That makes roll-backs easy. (One gotcha, though: Tomcat checks for new WAR files by the modify date of the real file, not the symlink, so we have to touch the old file when rolling back.)
4. Our web servers unpack the WAR files onto a local directory. The approach is scalable, since the WAR files are on a single file server; we could have an unlimited number of web servers and only do a single deployment.
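The symlink-and-touch mechanics above can be sketched with `java.nio.file` (in practice this would more likely be a shell script; the method and path names here are illustrative, and symlinks require a filesystem that supports them, e.g. Linux):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class Deployer {

    /** Point deployDir/app.war at the WAR stored in a revision-named directory. */
    static void deploy(Path deployDir, Path revisionWar) throws IOException {
        Path link = deployDir.resolve("app.war");
        Files.deleteIfExists(link);
        // Relative symlink, so a directory listing shows which revision is live.
        Files.createSymbolicLink(link, deployDir.relativize(revisionWar));
    }

    /**
     * Roll back by re-pointing the link, then touch the *real* file:
     * Tomcat watches the target's modify time, not the symlink's.
     */
    static void rollback(Path deployDir, Path previousWar) throws IOException {
        deploy(deployDir, previousWar);
        Files.setLastModifiedTime(previousWar,
                FileTime.fromMillis(System.currentTimeMillis()));
    }
}
```

The relative symlink is what makes `ls -l` double as a deployment log: the link target itself names the revision directory.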
Most of the places I've worked have used the first method, with environment-specific configuration information deployed separately (and updated much more rarely) outside of the war/ear.
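A common shape for that separation is packaged defaults plus an optional per-machine override file. This is a hedged sketch of the idea, not a specific framework's API; the resource name `app-defaults.properties` and the idea of passing the external path in are assumptions for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ExternalConfig {
    /**
     * Loads defaults shipped inside the war, then layers on overrides
     * from a machine-specific file that is deployed (and updated)
     * independently of the application artifact.
     */
    public static Properties load(Path externalFile) throws IOException {
        Properties props = new Properties();
        // 1. Defaults packaged inside the war, if present.
        try (InputStream in = ExternalConfig.class
                .getResourceAsStream("/app-defaults.properties")) {
            if (in != null) props.load(in);
        }
        // 2. Per-environment overrides living outside the artifact.
        if (Files.exists(externalFile)) {
            try (InputStream in = Files.newInputStream(externalFile)) {
                props.load(in);
            }
        }
        return props;
    }
}
```

Because later `load` calls overwrite earlier keys, the external file only needs to list the handful of properties that actually differ per environment.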