Martin Fowler has identified the key process elements of making Continuous Integration work. You could even argue that they are the elements that define Continuous Integration (done correctly). We include his list and our thoughts below:
- Maintain a Single Source Repository
- Automate the Build
- Make Your Build Self-Testing
- Everyone Commits Every Day
- Every Commit Should Build the Mainline on an Integration Machine
- Keep the Build Fast
- Test in a Clone of the Production Environment
- Make it Easy for Anyone to Get the Latest Executable
- Everyone Can See What’s Happening
- Automate Deployment
For background information, check out the Foundation Series on Continuous Integration article.
1. Maintain a Single Source Repository
The smartest people I know use Subversion when they have been free to make the choice themselves. Aside from being open source, it provides two key benefits that differentiate it from “everything else”.
- Atomic commits: The ability to check in everything or nothing, so you don’t risk breaking the build with a partial check-in.
- Overall project versioning: Allows you to track changes in source file directory hierarchies, file renaming, etc. Each version is of the entire project, not of a single file.
2. Automate the Build
Fowler sums it up perfectly:
“…anyone should be able to bring in a virgin machine, check the sources out of the repository, issue a single command, and have a running system on their machine”
3. Make Your Build Self-Testing
Include automated testing as part of the build.
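A build script can treat the test run as just another build step and fail the whole build if any step fails. Here is a minimal sketch in Python, assuming a POSIX shell; the specific step commands shown in the comment are hypothetical:

```python
import subprocess

def build_and_test(steps):
    """Run each build step in order; stop and report failure at the
    first step that exits non-zero. Each step is a shell command."""
    for cmd in steps:
        if subprocess.run(cmd, shell=True).returncode != 0:
            print(f"Build failed at step: {cmd}")
            return False
    print("Build and tests passed.")
    return True

# e.g. build_and_test(["make", "python -m pytest tests/"])
```

Because the function returns a simple pass/fail result, the script’s exit code can signal success or failure to whatever kicks off the build.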
4. Everyone Commits Every Day
Daily commits are the minimum; commit more frequently when possible.
5. Every Commit Should Build the Mainline on an Integration Machine
A separate, dedicated machine does a daily (or more frequent) build and full test-suite run autonomously. The “build and test” model above relies on people to kick off the build when they commit; a scheduled task on a separate machine provides a safety net for human error (oversight). Alternately, companies like Calavista can make this foolproof by automatically triggering an automated build as part of every commit. With Calavista’s devEdge, when the developer “commits”, what really happens is that the automated build/test cycle is triggered with the new code, and the code is promoted only if all the tests pass.
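This “gated commit” flow can be sketched abstractly: a changeset reaches the mainline only if the automated build-and-test cycle succeeds. In the sketch below, `run_build` and `promote` are placeholder callables standing in for whatever your CI system actually provides:

```python
def gated_commit(changeset, run_build, promote):
    """Promote a changeset to the mainline only if the automated
    build-and-test cycle succeeds (a sketch of a gated commit).

    run_build(changeset) -> bool : runs the build/tests for the change
    promote(changeset)          : merges the change into the mainline
    """
    if run_build(changeset):
        promote(changeset)
        return "promoted"
    return "rejected"
```

The key property is that a failing build leaves the mainline untouched, so other developers never pull a broken revision.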
6. Keep the Build Fast
Tests can take a long time, but the build should take no more than ten minutes. Fowler suggests a strategy of staged builds to stay under the ten-minute threshold: run the unit tests as part of the fast ten-minute build, and run the full suite in a secondary stage, in parallel or in series.
Another option is to run a statistical sample of the tests to get a “ten-minute answer” while the full suite is kicked off in parallel.
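The statistical-sampling idea can be sketched as picking a random subset of tests that fits the time budget. This is a simplification that assumes roughly uniform test durations; a real implementation would weight by recorded timings:

```python
import random

def sample_tests(all_tests, budget_minutes, avg_minutes_per_test, seed=None):
    """Pick a random subset of tests that fits the time budget,
    giving a quick statistical smoke signal while the full suite
    runs in parallel."""
    rng = random.Random(seed)
    n = min(len(all_tests), int(budget_minutes / avg_minutes_per_test))
    return rng.sample(all_tests, n)
```

Because the sample is drawn fresh each build, different tests get exercised early over time, even though any single build only runs a fraction of the suite.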
7. Test in a Clone of the Production Environment
Eliminate even more variables. Make sure the tests are running against a clone of the production environment. Teams that are pushing the envelope today use virtual machines (VMs) to quickly create cloned production environments, install the software and run the tests.
8. Make it Easy for Anyone to Get the Latest Executable
Make sure everyone knows where the latest build can be found. It is probably a good idea to keep recent builds in the same place too, in case a problem (like a memory leak or another obscure, not-yet-tested situation) temporarily sneaks through the process.
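One simple way to do this is to publish every build to a shared drop directory, keep a “latest” copy, and prune old builds so a handful of recent ones survive. A sketch, with a hypothetical drop-directory layout:

```python
import shutil
from pathlib import Path

def publish_build(artifact, build_id, drop_dir, keep=5):
    """Copy a build artifact to a shared drop directory, refresh the
    'latest' copy, and prune older builds down to `keep` recent ones.
    (A sketch; the naming scheme here is an assumption.)"""
    drop = Path(drop_dir)
    drop.mkdir(parents=True, exist_ok=True)
    suffix = Path(artifact).suffix
    dest = drop / f"build-{build_id}{suffix}"
    shutil.copy2(artifact, dest)
    shutil.copy2(dest, drop / f"latest{suffix}")
    # Prune: keep only the `keep` most recent builds (sortable ids assumed).
    for old in sorted(drop.glob("build-*"))[:-keep]:
        old.unlink()
    return dest
```

Note that the pruning relies on build ids sorting chronologically (e.g. zero-padded numbers or timestamps).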
9. Everyone Can See What’s Happening
Visibility! Email the team when builds start and finish, including success/failure information. Put a rubber chicken on the desk of the person currently running the build (don’t ask – just read Fowler’s post). Ring a desk bell when the build passes. Have fun with it.
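Build-notification email is easy to automate with the standard library. A minimal sketch that formats a finish notice and sends it via a local SMTP relay; the project name, revision, and addresses are all hypothetical:

```python
import smtplib
from email.message import EmailMessage

def build_notice(project, revision, passed):
    """Format a build-status notice for the team."""
    status = "SUCCESS" if passed else "FAILURE"
    msg = EmailMessage()
    msg["Subject"] = f"[{project}] build {revision}: {status}"
    msg["From"] = "ci@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(f"Build {revision} of {project} finished: {status}.")
    return msg

def send_notice(msg, host="localhost"):
    # Assumes an SMTP relay is reachable at `host`.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)
```

Keeping the formatting separate from the sending makes the notice easy to test, and easy to reroute to chat or a dashboard later.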
10. Automate Deployment
Make deployment into production as easy as running the build. Since tests are run against a production clone and are already automated, this requires only minor incremental effort.
Martin presents a great list. In addition to the above, we would suggest:
- Generate test-results documents per requirement. For each build, identify which requirements pass, fail, or are untested. The most relevant information for communicating outside of the team is the status of previous requirements (did our regression tests pass?) and of current requirements (are we almost done with this timebox?).
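Rolling per-test outcomes up into per-requirement status can be sketched as a simple mapping from requirements to the tests that cover them. The mapping itself is hypothetical here; in practice it would come from your traceability data:

```python
def requirement_status(requirements, test_results):
    """Roll per-test pass/fail outcomes up to per-requirement status.

    requirements: {req_id: [test names covering that requirement]}
    test_results: {test name: True (pass) / False (fail)}

    A requirement passes only if all of its tests ran and passed;
    it is 'untested' if none of its tests ran in this build.
    """
    report = {}
    for req, tests in requirements.items():
        ran = [test_results[t] for t in tests if t in test_results]
        if not ran:
            report[req] = "untested"
        elif all(ran):
            report[req] = "pass"
        else:
            report[req] = "fail"
    return report
```

A report like this separates the regression question (do previously passing requirements still pass?) from the progress question (how many current requirements are still untested?).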