
How to approach CI

I'm in the process of building a company from scratch (Tomcat + Spring REST + Java), so we have the luxury of doing some things right (or at least not repeating our past mistakes). One of the goals we want to achieve is the ability to automatically build, test (unit, integration) and deploy.

Our platform is built with one static HTML/JS interface site served with NGiNX and a few API servers (different applications). Some of them are exposed, and some are only accessible from within the farm by the exposed API applications.

I've chosen TeamCity as our CI server, as I'm somewhat familiar with it and have had an excellent experience so far with all of JetBrains' products.

So far I've defined two build configurations:

  1. Development sanity: checks out from Git, runs DB scripts to prepare the database, executes the Maven goals clean install (so our TestNG suite is run), then runs code coverage and static code analysis. This configuration is working and is great.

  2. Integration: checks out from Git, runs DB scripts to prepare the database, and executes the Maven goals clean install (so our TestNG suite is run).

Now I've reached the problematic part: our configuration needs several .war files to be deployed to different machines before our integration testing can begin. I would also like to build this in such a way that I can later add a third configuration that deploys to live production once integration has passed, so it basically does the same as the second configuration but adds some functions, such as gracefully taking an application offline and putting it back online once deployed. I've seen several approaches to this: Maven Cargo, shell scripts, Fabric, etc.
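To make the graceful-deploy requirement concrete, here's a minimal Python sketch (in the spirit of the Fabric/shell-script approaches mentioned above) of a rolling deploy: take each node out of rotation, push the WAR, restart, health-check, and re-enable, one machine at a time so the farm stays available. The host names and step names are placeholder assumptions; in practice each step would shell out over SSH or call your load balancer's API.

```python
# Sketch of a graceful rolling deploy across several API servers.
# Hosts and actions are placeholders; each step would really run a
# remote command (ssh/fabric) or hit the nginx/load-balancer config.

API_SERVERS = ["api-1.internal", "api-2.internal"]  # hypothetical hosts

def deploy_war(host, war_path, actions):
    """Deploy one WAR to one host, recording each step taken."""
    actions.append((host, "drain"))   # stop routing new requests to this node
    actions.append((host, "stop"))    # shut down the Tomcat app context
    actions.append((host, "copy"))    # push the new WAR, e.g. scp war_path
    actions.append((host, "start"))   # start the container again
    actions.append((host, "health"))  # poll a health endpoint before re-enabling
    actions.append((host, "enable"))  # put the node back into rotation

def rolling_deploy(war_path):
    """Deploy to one server at a time so the farm stays available."""
    actions = []
    for host in API_SERVERS:
        deploy_war(host, war_path, actions)
    return actions

if __name__ == "__main__":
    for step in rolling_deploy("target/api.war"):
        print(step)
```

The point of recording the steps as data is that the same ordered plan can drive either a staging deploy or, later, the production configuration that needs the offline/online dance.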

Is there a recommended way to approach this, drawing from your past experiences? I'm also not clear on the best way to run integration tests that require several applications to be deployed. I've seen many examples of embedded Jetty, etc., but that's only good for one application or a very simple configuration. When you need 3-4 applications deployed before you can start testing, what is the best way to do this? Add another project that's dedicated to integration testing and execute another Maven goal with a specific profile after the deployment has finished?
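Whatever deployment tool is chosen, the integration stage itself can be a small orchestration step: wait until every deployed application reports healthy, then kick off the dedicated integration-test project. A minimal Python sketch (the health-check mechanism, app names, and the idea of a separate Maven profile are assumptions for illustration, not part of the question):

```python
import time

def wait_until_healthy(apps, check, retries=30, delay=0.1):
    """Poll every app's health check until all pass or retries run out."""
    for _ in range(retries):
        if all(check(app) for app in apps):
            return True
        time.sleep(delay)
    return False

def run_integration_stage(apps, check, run_tests):
    """Deploy-then-test orchestration: only run the suite once the farm is up."""
    if not wait_until_healthy(apps, check):
        raise RuntimeError("apps never became healthy; aborting integration stage")
    # At this point you would invoke the dedicated integration-test project,
    # e.g. a Maven goal with a hypothetical -Pintegration profile.
    return run_tests()
```

This keeps the "3-4 applications must be up first" rule in one place instead of baking it into each test.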

BTW - we're deploying to AWS.

Thanks, guys.

Amnon asked Dec 05 '11




1 Answer

First off, I thoroughly recommend you read Continuous Delivery (Jez Humble, David Farley); it's got a wealth of information on this. There's a sample chapter here.

Since reading this I've started to implement a build pipeline where each commit to SVN goes through every stage in the pipeline, with the environment getting gradually more like production as the build moves through. We use Jenkins for this.

  1. Commit stage - dev sanity - compile, unit test & some metrics. This initial stage also builds the binaries needed for the rest of the pipeline
  2. Integration stage - this takes the same files as the previous stage (not a new checkout) and runs db integration tests in memory
  3. Automated acceptance test stage - takes the binaries from the commit stage and deploys to a server where we run selenium tests
  4. QA stage - this is deployed by qa who simply click a button to pull whatever build they want, again it just deploys the binaries from the commit stage to a QA server
  5. UAT - same as QA but a more production like environment where we also do performance tests
  6. Production - takes the binaries from commit stage and deploys to production.
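The gating behaviour of these stages can be summed up in a few lines: run each stage in order and stop at the first failure. A toy Python sketch (stage names follow the list above; the lambdas stand in for real builds):

```python
def run_pipeline(stages):
    """Run (name, gate) pairs in order; stop at the first gate that fails."""
    results = []
    for name, gate in stages:
        passed = gate()
        results.append((name, passed))
        if not passed:
            break  # quality gate failed: later stages never run
    return results

stages = [
    ("commit",      lambda: True),
    ("integration", lambda: True),
    ("acceptance",  lambda: False),  # pretend Selenium found a regression
    ("qa",          lambda: True),   # never reached
]
```

The manually triggered stages (QA, UAT, production) fit the same model; the "gate" there is simply a human pressing the button.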

Each of these stages acts as a 'quality gate': the build is not allowed to progress further until it passes some threshold (test failures, metric percentages, etc.). Some stages flow automatically; others are manually triggered. Any configuration changes needed for each environment are done by unpacking the original binary files, changing settings, and packing them up again. Ideally I would like to separate the configuration from the application binaries, but I haven't found a way to do that yet.
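As a sketch of that unpack-change-repack step, here's the kind of per-environment substitution involved, using Python's string.Template. The property names and environment values are made up for illustration; the real settings live inside the artifact.

```python
from string import Template

# One template shipped inside the artifact; values swapped per environment
# when the binary is unpacked and repacked for that stage.
CONFIG_TEMPLATE = Template(
    "db.url=${db_url}\n"
    "log.level=${log_level}\n"
)

ENVIRONMENTS = {  # hypothetical per-environment settings
    "qa":   {"db_url": "jdbc:mysql://qa-db/app",   "log_level": "DEBUG"},
    "uat":  {"db_url": "jdbc:mysql://uat-db/app",  "log_level": "INFO"},
    "prod": {"db_url": "jdbc:mysql://prod-db/app", "log_level": "WARN"},
}

def render_config(env):
    """Produce the environment-specific properties file contents."""
    return CONFIG_TEMPLATE.substitute(ENVIRONMENTS[env])
```

Keeping the environment table outside the artifact is one small move towards the config/binary separation mentioned above.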

The automated acceptance test stage just updates an existing application on the server; the QA stage does a full stop, uninstall, install & start. Each runs a different script, a combination of Ant & Python.

Here's what a pipeline looks like in Jenkins with the Build Pipeline plugin.

[edit]

You don't actually have to implement all of this in one go; it's quite easy to have placeholders for each stage that simply flow on to the next without actually doing anything. If you map your current process, you should be able to automate parts of it and move towards a pipeline bit by bit.

The commit stage is the easiest to do; it's basically what you'd do when setting up a normal CI server: make a project, hook it into version control, compile, execute tests, and run some stats, all from Ant/Maven tasks. This takes a little over 5 minutes to run.

The stats task takes too long to run (> 15 minutes), so I run a subset on commit and have a nightly run that does the whole lot: FindBugs, PMD, Checkstyle & Cobertura. I'd much rather run all of these on commit, but that would require more hardware and some work to set up a build grid.

The Selenium tests are not currently in a separate project, but they are packaged as a separate JAR, which is made available to the automated acceptance test stage via the Jenkins 'Copy Artifact' plugin. The Ant/Python scripts package the WAR file and deploy it to a container, then Ant unpacks and runs the Selenium tests (via JUnit). There's only a handful of 'smoke tests' at the moment, and they have no dependency on the main WAR, although I can see that changing. I don't actually like the idea of having separate projects for code & tests; the build scripts just package the classes and libraries needed for each module from the main project. For your situation (and soon, ours) you might have to do something different: how about firing up a VM or two with the configuration you need and deploying to that? (There's lots of info in the Continuous Delivery book on this.)
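As an illustration of the kind of 'smoke test' that can run independently of the main WAR, here's a minimal HTTP check in Python. The paths are hypothetical, and real acceptance tests would drive a browser via Selenium instead; this only verifies the deployed app answers at all.

```python
from urllib.request import urlopen
from urllib.error import URLError

def smoke_test(base_url, paths=("/", "/health")):
    """Hit each path on the deployed app; return the paths that did NOT answer 200."""
    failures = []
    for p in paths:
        try:
            with urlopen(base_url.rstrip("/") + p, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(p)
        except URLError:  # connection refused, 4xx/5xx, DNS failure, ...
            failures.append(p)
    return failures
```

A check like this makes a cheap first gate right after deployment, before the slower Selenium suite runs.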

It's good that Jenkins supports a lot of this via plugins. We moved from Atlassian Bamboo because most of what we wanted wasn't available, and existing plugins either would not work or were not compatible with our Bamboo version. I've not used TeamCity for a while, so I've no idea if it supports this idea of a 'pipeline' [apparently not]. The 'Build Pipeline' plugin is fairly new and has a few rough edges, but it is in active development. I think it's possible to do this with Jenkins' 'promoted builds' & touchstone builds, but I have not attempted that. If you have enough resources (money!), you might want to look at Go.

blank answered Sep 20 '22