Continuous Integration Setup

Improving the code quality of software as part of everyday work is challenging. It consists of multiple parts like unit and integration tests, test code coverage, and static code analysis including code metrics and proper comments. All this work is additional to the main task of software developers: creating fantastic new features … and fixing bugs from time to time. Over the years I’ve played around with several tools to ease this task for all parties involved and have come up with a solution that performs most of the job automatically. Without further ado I present a continuous integration setup that I think works best (it’s actually very similar to what my company is doing).

Overview of the continuous integration setup


Now that you’ve managed to scroll past the large image you want to know more about the details, right? First of all, this scenario is meant for Java EE based development processes, but I see no reason not to apply it to other types of development, too (keeping in mind that some components might need different implementations). I’d like to look at this from two different angles: the developer view and the manager view.

Developer view

When doing development I want my focus to be on the primary target: adding a new, cool feature. In the current workflow at my company it is also necessary that I add a black-box test case for whatever code I added, to verify I did well. The next step (usually done by a fellow software engineer from the same team) is to convert the black-box test case into a white-box test case by expanding it to cover most of the code paths (especially including negative testing). We are currently focussing on integration tests (i.e. not testing with mock objects, but on a dedicated machine hosting the complete software including a database), simply because we want to test a scenario as close as possible to our production system. Having a large integration test suite is also really awesome, as it can easily be used for regression testing.
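
To make this a bit more concrete, here is a minimal sketch of what such a black-box integration test can look like. The class name, host, endpoint and payload are made up for illustration and are not our actual code; the point is only that the test talks to the deployed application over its public API instead of using mocks.

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class OrderServiceIT {

    // Hypothetical base URL of the dedicated integration test system.
    private static final String BASE_URL = "http://integration-test-host:8080/myapp";

    @Test
    public void createdOrderIsAccepted() throws Exception {
        // Black-box: we only use the public HTTP API of the deployed application,
        // which is backed by a real database on the test machine.
        HttpURLConnection connection =
                (HttpURLConnection) new URL(BASE_URL + "/api/orders").openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        connection.getOutputStream().write("{\"item\":\"book\",\"quantity\":1}".getBytes("UTF-8"));

        assertEquals(201, connection.getResponseCode());

        // The white-box follow-up done by a teammate would expand this with negative
        // cases, e.g. invalid payloads or missing permissions.
    }
}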

But integration tests are not suited for running on a developer machine. The main reason is that they are quite time-consuming and require lots of resources (RAM, CPU, etc.). Another reason is that comparing the results would be like comparing apples and pears: they were produced on different systems with possibly different configurations and possibly different datasets. This is where our automated process comes into play: each time a developer commits his code and pushes it to the central server (we are using Git), our build server will compile the software, run the unit tests and notify the developer of bad results. Now we have a built and unit-tested piece of software. The next step is running the integration tests and, again, getting notified about bad results. In the best case the developer will not be notified about anything, which makes every developer happy.
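
The split between the fast and the slow tests is what makes this work. As a sketch, assuming the common Maven convention (the Surefire plugin picks up *Test classes in the unit-test phase, the Failsafe plugin picks up *IT classes like the one above in a later integration-test phase), a unit test that is cheap enough to run on every push could look like this — the calculator class is made up purely for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTotalCalculatorTest {

    // Pure in-memory logic: no database, no deployed server, finishes in milliseconds,
    // so it can run on every developer machine and on every single push.
    static class OrderTotalCalculator {
        int total(int[] lineItems) {
            int sum = 0;
            for (int item : lineItems) {
                sum += item;
            }
            return sum;
        }
    }

    @Test
    public void sumsLineItems() {
        assertEquals(30, new OrderTotalCalculator().total(new int[] { 10, 20 }));
    }
}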

After a successful result the last step is publishing the artifacts. There are two possible points at which to publish the built artifact: either after it was built and unit-tested, or after it was verified by the integration tests. It boils down to the question: is an artifact with a failed integration test still a valid artifact? In my current process we accept artifacts with failed integration tests for non-release versions (i.e. Maven SNAPSHOT versions), but not for software that might be deployed to production servers.
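
The publishing policy itself is simple enough to write down as a tiny, hypothetical helper. Class and method names are invented and this is not part of our actual build; it is just the decision rule from the paragraph above expressed in code.

public class PublishGate {

    public static boolean shouldPublish(String version, boolean unitTestsPassed,
            boolean integrationTestsPassed) {
        if (!unitTestsPassed) {
            return false; // a build that fails its unit tests is never published
        }
        boolean isSnapshot = version.endsWith("-SNAPSHOT");
        // Non-release (SNAPSHOT) artifacts may be published despite failed integration
        // tests; release artifacts that could reach production may not.
        return isSnapshot || integrationTestsPassed;
    }

    public static void main(String[] args) {
        System.out.println(shouldPublish("1.2.3-SNAPSHOT", true, false)); // true
        System.out.println(shouldPublish("1.2.3", true, false));          // false
        System.out.println(shouldPublish("1.2.3", true, true));           // true
    }
}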

And this is basically where the developer view ends. Every developer is encouraged to look at other quality measurements, but in a team consisting solely of senior developers it’s not essential.

Manager and quality assurance view

All information must be aggregated in some kind of portal to be easily accessible for everyone. From a management perspective it can be used to see how well (or badly) things are going. This is essential for everyone not directly involved in the actual development process but still working on or with the product in development. So the portal is primarily used by software architects, team leads, even CTOs or vice presidents (and of course the product owner in a Scrum process).

When diving deeper into the metrics, test results and documentation, the same portal can be used for quality assurance. It helps the product owner verify whether stories were implemented properly (e.g. no failing tests and proper documentation), and it helps testers identify which parts of the software aren’t completely tested yet and write tests for those cases. The really crucial part is aggregating the information in a proper way. In our setup SonarQube does a lot of this work for us; it provides coverage for unit and integration tests alongside code metrics. Other tools can be used as well (e.g. Checkstyle, FindBugs, or PMD) and there are really good plugins integrating their data into Jenkins.
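
To illustrate what the portal typically surfaces, consider the following made-up class: the coverage report would mark the null branch as untested, and rule sets like those of PMD or FindBugs would flag the empty catch block that silently swallows the parsing error. The class and method are purely hypothetical.

public class DiscountParser {

    public int parseDiscountPercent(String raw) {
        if (raw == null) {
            return 0; // untested branch shows up as missing coverage in the portal
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            // empty catch block: a typical finding reported by static analysis
        }
        return 0;
    }
}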

Another key component is the automatic delivery of the software to an internal test system. It is a reference system hosting the latest software for everyone: it holds the necessary configuration options, points to the matching database and provides a stable test system for the latest code.

Final words

Obviously, setting up the complete infrastructure is a bit time-consuming, but it is absolutely worth it. Many components have to be brought together, but the degree of automation really simplifies everyday work. The key is to have a transparent development process that especially highlights its weaknesses. This enables developers to improve their skills and greatly improves the code quality.

A final remark: not all default rules of the various analysis tools will match your project. Keep in mind that tweaking the rules (e.g. starting with somewhat more relaxed rules) is an essential part of setting up your infrastructure. It will allow you to start with a reduced number of issues.
