I've been developing a workflow for practicing a mostly automated continuous deployment cycle for a PHP project. I'd like some feedback on possible process or technical bottlenecks in this workflow, suggestions for improvement, and ideas for how to better automate and increase the ease-of-use for my team.
- Hudson CI server
- Git and GitHub
- PHPUnit unit tests
- Selenium RC
- Sauce OnDemand for automated, cross-browser, cloud testing with Selenium RC
- Puppet for automating test server deployments
- Gerrit for Git code review
- Gerrit Trigger for Hudson
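To give a concrete sense of the PHPUnit/Hudson half of that stack, here is a minimal sketch of the kind of test the CI stage would run. The Cart and Item classes are purely hypothetical, and the job configuration details are up to you:

```php
<?php
// Hypothetical unit test for the Hudson-driven PHPUnit stage.
// Cart, Item, and the method names are illustrative only.
class CartTest extends PHPUnit_Framework_TestCase
{
    public function testAddingAnItemIncreasesTheTotal()
    {
        $cart = new Cart();
        $cart->add(new Item('widget', 9.99));

        $this->assertEquals(9.99, $cart->total());
    }
}
```

If the Hudson job invokes something like `phpunit --log-junit build/logs/junit.xml`, Hudson's JUnit report publisher can pick up that file and chart pass/fail trends per build.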
EDIT: I've changed the workflow graphic to take ircmaxwell's contributions into account by: removing PHPUnit's extension for Selenium RC and running those tests only as part of the QC stage; adding a QC stage; moving UI testing after code review but before merges; moving merges after the QC stage; and moving deployment after the merge.
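For what it's worth, if any of the QC-stage browser tests remain PHPUnit-driven, one way to enforce "UI tests run only in the QC stage" is PHPUnit's group annotations: tag those tests and let each Hudson job select the groups it wants. This is only a sketch; the class name, locators, and staging URL are invented, and the base class assumes the Selenium RC extension is still installed for the QC job:

```php
<?php
// Sketch: the pre-review Hudson job would run   phpunit --exclude-group ui
// while the QC-stage job would run              phpunit --group ui
// LoginFlowTest, the locators, and the staging URL are all hypothetical.

/**
 * @group ui
 */
class LoginFlowTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://staging.example.com/');
    }

    public function testUserCanLogIn()
    {
        $this->open('/login');
        $this->type('username', 'demo');
        $this->type('password', 'not-a-real-password');
        $this->clickAndWait('submit');
        $this->assertTitle('Dashboard');
    }
}
```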
This workflow graphic describes the process. My questions / thoughts / concerns follow.
- Overall difficulty using this system.
- Time involvement.
- Difficulty employing Gerrit.
- Difficulty employing Puppet.
- We'll be deploying on Amazon EC2 instances later. If we're going about setting up Debian packages with Puppet and deploying to Linode slices now, is there a potential for a working deployment on Linode to break on EC2? Should we instead be doing our builds and deployments on EC2 from the get-go?
- Another question re: EC2 and Puppet. We're also considering using Scalr as a solution. Would it make as much sense to avoid the overhead of Puppet for this alone and invest in Scalr instead? I have a secondary (ha!) concern here about cost; the Selenium tests shouldn't run so often that EC2 build instances end up running 24/7, but for something like a five-minute build, paying for a full hour of EC2 usage seems a bit much.
- Possible process bottlenecks on merges.
- Could "A" be moved?
Credits: Portions of this workflow are inspired by Digg's awesome post on continuous deployment. The workflow graphic above is inspired by the Android OS Project.
How many people are working on it? If you only have maybe 10 or 20 developers, I'm not sure it will make sense to put such an elaborate workflow into place. If you're managing 500, sure...
My personal feeling is KISS: Keep It Simple, Stupid... You want a process that's both efficient and, more importantly, simple. If it's complicated, either nobody is going to do it right, or parts will slip over time. If you make it simple, it will become second nature, and after a few weeks nobody will question the process (well, the semantics of it, anyway)...
My other personal feeling is: always run all of your UNIT tests. That way, you can skip a whole decision tree in your flow chart. After all, what's more expensive: a few minutes of CPU time, or the brain cycles needed to understand the difference between the partial suite passing and the full suite failing? Remember, a fail is a fail, and there's no practical reason a reviewer should ever be shown code that has the potential to fail the build.
Now, Selenium tests are typically quite expensive, so I might agree to push those off until after the reviewer approves. But you'll need to think about that one...
Oh, and if I were implementing this, I would put a formal QC stage in there. I want human testers to look at any changes being made. Yes, Selenium can verify the things you know about, but only a human can find the things you didn't think of. Feed their findings back into new Selenium and integration tests to prevent regressions...
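As a concrete (and entirely hypothetical) illustration of that feedback loop: suppose a QC tester reports that discount codes with trailing whitespace are rejected. Once the bug is fixed, a small regression test pins the behaviour inside the always-run suite; the Checkout class and method names below are invented:

```php
<?php
// Hypothetical regression test capturing a bug found by human QC.
// Checkout and applyDiscountCode() are invented for illustration.
class DiscountCodeRegressionTest extends PHPUnit_Framework_TestCase
{
    public function testCodeWithTrailingWhitespaceIsAccepted()
    {
        $checkout = new Checkout();

        // The reported bug: "SAVE10 " (note the trailing space) used to be rejected.
        $this->assertTrue($checkout->applyDiscountCode('SAVE10 '));
    }
}
```

The same finding could also get a Selenium-level check in the QC suite if the failure only shows up through the UI.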