 

Git branching strategy integrated with testing/QA process


What is a QA branch?

qa branch: This is the stable branch where all commits are integrated and released to QA. Code can be merged into the qa branch only via pull requests, so that we control what reaches it. stage branch: the sole purpose of this branch is to release to the Stage environment for acceptance testing.


The way we do it is the following:

We test on the feature branches after we merge the latest develop branch code into them. The main reason is that we do not want to "pollute" the develop branch before a feature is accepted. If a feature were rejected after testing while we wanted to release other features already merged into develop, untangling that would be hell. Develop is the branch from which a release is made, so it should stay in a releasable state. The long version is that we test in many phases. More analytically (a command-level sketch of these steps follows the list):

  1. Developer creates a feature branch for every new feature.
  2. The feature branch is (automatically) deployed on our TEST environment with every commit for the developer to test.
  3. When the developer is done with development and the feature is ready to be tested, he merges the develop branch into the feature branch and deploys the feature branch, which now contains all the latest develop changes, on TEST.
  4. The tester tests on TEST. When he is done he "accepts" the story and merges the feature branch into develop. Since the developer had previously merged the develop branch into the feature branch, we normally don't expect too many conflicts. However, if conflicts do appear, the developer can help. This is a tricky step; I think the best way to ease it is to keep features as small and specific as possible. Different features have to be merged eventually, one way or another. Of course the size of the team plays a role in this step's complexity.
  5. The develop branch is also (automatically) deployed on TEST. We have a policy that even though feature branch builds may fail, the develop branch build should never fail.
  6. Once we have reached a feature freeze, we create a release from develop. This is automatically deployed on STAGING. Extensive end-to-end tests take place there before the production deployment. (OK, maybe I exaggerate a bit; they are not very extensive, but I think they should be.) Ideally beta testers/colleagues, i.e. real users, should test there.
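
For illustration, a minimal Git sketch of steps 1 through 4, assuming a hypothetical feature branch name and that a push triggers the TEST deployment:

    # 1. developer creates a feature branch off develop (hypothetical name)
    git checkout develop
    git pull origin develop
    git checkout -b feature/user-login

    # 2-3. when the feature is ready for testing, bring in the latest develop
    git fetch origin
    git merge origin/develop               # resolve conflicts here, on the feature branch
    git push origin feature/user-login     # push triggers the TEST deployment

    # 4. after the tester accepts the story, merge the feature into develop
    git checkout develop
    git merge --no-ff feature/user-login
    git push origin develop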

What do you think of this approach?


Before test, we merge the changes from the develop branch to the feature branch

No. Don't, especially if 'we' is the QA tester. Merging would involve resolving potential conflicts, which is best done by the developers (they know their code), not by the QA tester (who should get to testing as quickly as possible).

Make the developer rebase his/her feature branch on top of devel, and push that feature branch (which the developer has validated as compiling and working on top of the most recent devel branch state).
That allows for (commands sketched after this list):

  • a very simple integration of the feature branch (a trivial fast-forward merge);
  • or, as recommended by Aspasia below in the comments, a pull request (GitHub) or merge request (GitLab): the maintainer does a merge between the feature PR/MR branch and develop, but only if no conflicts are detected by GitHub/GitLab.
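
A minimal sketch of that rebase-then-fast-forward flow, using a hypothetical branch name:

    # developer: replay the feature on top of the latest devel
    git fetch origin
    git checkout feature/search
    git rebase origin/devel
    # validate that the rebased branch builds and runs, then publish it
    git push --force-with-lease origin feature/search

    # integrator: the merge is now a trivial fast-forward
    git checkout devel
    git merge --ff-only feature/search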

Each time the tester detects a bug, he/she reports it to the developer and deletes the current feature branch.
The developer can then (as sketched below):

  • fix the bug
  • rebase on top of a recently fetched develop branch (again, to be sure that his/her code works in integration with other validated features)
  • push the feature branch.
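
In commands, that fix cycle might look like this (same hypothetical branch name as above):

    # fix the bug and record it on the feature branch
    git checkout feature/search
    git commit -am "Fix issue reported by QA"
    # rebase on a freshly fetched devel to re-validate the integration
    git fetch origin
    git rebase origin/devel
    # re-publish the branch for another test round
    git push --force-with-lease origin feature/search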

General idea: make sure the merge/integration part is done by the developer, leaving the testing to QA.


The best approach is continuous integration, where the general idea is to merge the feature branches into the main development branch as frequently as possible. This reduces the overhead and pain of big merges.

Rely on automated tests as much as possible, and have Jenkins automatically kick off builds that run the unit tests. Have the developers do all the work of merging their changes into the main branch, and have them provide unit tests for all their code.

The testers/QA can participate in code reviews, sign off on unit tests, and write automated integration tests to be added to the regression suite as features are completed.
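
As a rough sketch of this cadence (the branch name and test script are hypothetical; in practice Jenkins would kick off the build and unit tests on every push):

    # integrate small, finished work into the main branch frequently
    git checkout main
    git pull origin main
    git merge --no-ff feature/invoice-totals
    # run the unit tests locally; CI repeats this on the pushed merge
    ./run_tests.sh
    git push origin main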

For more info check out this link.


We use what we call "gold", "silver", and "bronze". This could be called prod, staging, and qa.

I've come to call this the melting-pot model. It works well for us because we have a huge need for QA on the business side of things, since the requirements can be harder to understand than the technical work.

When a bug fix or feature is ready for testing, it goes into "bronze". This triggers a Jenkins build that pushes the code to a pre-built environment. Our testers (not super techies, by the way) just hit a link and don't care about the source control. The build also runs the tests, etc. We've gone back and forth on whether this build should still push the code to the testing/QA environment when the tests (unit, integration, Selenium) fail. If you test on a separate system (we call it "lead"), you can prevent failing changes from being pushed to your QA environment.

The initial fear was that we'd have lots of conflicts between these features. It does happen that feature X makes it seem like feature Y is breaking, but it is infrequent enough and actually helps: it gives you a wide swath of testing outside what seems to be the context of the change. Many times, by luck, you will find out how your change affects parallel development.

Once a feature passes QA we move it into "silver" (staging). A build is run and the tests run again. Weekly, we push these changes to our "gold" (prod) tree and then deploy them to our production system.

Developers start their changes from the gold tree. Technically you could start from staging, since those changes will go up soon.

Emergency fixes are dropped directly into the gold tree. If a change is simple and hard to QA, it can go directly into silver, from where it will find its way back to the testing tree.

After our release we push the changes in gold (prod) back to bronze (testing), just to keep everything in sync.
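
Assuming "bronze", "silver", and "gold" are ordinary branches, the promotion cycle might look roughly like this (branch and feature names are hypothetical):

    # feature passes QA: promote it from bronze to silver (staging)
    git checkout silver
    git merge --no-ff feature/report-export
    git push origin silver          # triggers the staging build and tests

    # weekly: promote staging to gold (prod) and deploy
    git checkout gold
    git merge silver
    git push origin gold

    # after the release: sync prod back down to the testing tree
    git checkout bronze
    git merge gold
    git push origin bronze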

You may want to rebase before pushing into your staging tree. We have found that purging the testing tree from time to time keeps it clean. There are times when features get abandoned in the testing tree, especially if a developer leaves.

For large multi-developer features we create a separate shared repo, but merge it into the testing tree the same way when we are all ready. Things do tend to bounce back from QA, so it is important to keep your changesets isolated so you can add on and then merge/squash into your staging tree.

"Baking" is also a nice side effect. If you have some fundamental change you want to let sit for a while there is a nice place for it.

Also keep in mind we don't maintain past releases; the current version is always the only version. Even so, you could probably have a master baking tree where your testers or community can bang on things and see how various contributors' changes interact.


In our company we can't use agile development and need business approval for every change, which causes a lot of issues.

Our approach for working with Git is this:

We have implemented "Git Flow" in our company. We use JIRA, and only approved JIRA tickets should go to production. For test approval we extended the flow with a separate Test/QA branch.

The steps for processing a JIRA ticket are:

  1. Create a new branch from the develop branch
  2. Make the code changes on the feature branch
  3. Merge the changes from the feature branch into the Test/QA branch
  4. After business approval, merge the changes from the feature branch into develop
  5. Develop frequently goes into a release and then finally into the master branch

Keeping each request in its own feature branch ensures that only approved changes go to production; a command-level sketch of these steps follows.
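
A rough sketch of steps 1 through 4 in Git commands, with hypothetical branch and ticket names:

    # 1. branch off develop, named after the JIRA ticket (hypothetical)
    git checkout develop
    git pull origin develop
    git checkout -b feature/PROJ-123

    # 2. commit the changes on the feature branch
    git commit -am "PROJ-123: implement the requested change"
    git push origin feature/PROJ-123

    # 3. merge the feature into the Test/QA branch for business approval
    git checkout test
    git merge --no-ff feature/PROJ-123
    git push origin test

    # 4. once approved, merge the same feature branch into develop
    git checkout develop
    git merge --no-ff feature/PROJ-123
    git push origin develop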

The complete process looks like the standard Git Flow diagram extended with the extra Test/QA branch (diagram not included here).


I would not rely on manual testing alone; I would automate the testing of each feature branch with Jenkins. I set up a VMware lab to run Jenkins tests on Linux and Windows for all browsers. It's truly an awesome cross-browser, cross-platform testing solution. I test functional/integration paths with Selenium WebDriver. My Selenium tests run under RSpec, and I wrote them specifically so they can be loaded by JRuby on Windows. I run traditional unit tests under RSpec and JavaScript tests under Jasmine, and I set up headless testing with PhantomJS.
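
For flavour, invoking such a suite from a CI job might look something like this; the spec paths and the run-jasmine.js harness are assumptions, not part of the original setup:

    # traditional unit tests under RSpec (hypothetical spec layout)
    rspec spec/unit
    # on the Windows agents, the Selenium WebDriver specs load via JRuby
    jruby -S rspec spec/integration
    # JavaScript specs under Jasmine, headless via PhantomJS
    # (assumes a run-jasmine.js harness that loads the spec runner page)
    phantomjs run-jasmine.js SpecRunner.html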