This post is a test automation case study, but at a higher level.
We’ll talk about it in terms of defining the problem, and then discuss the objective (what we proposed to do to solve the problem), the strategy (how we went about doing it) and the tactics (how we executed the strategy). Since this happened in the real world, we’ll also identify the constraints within which we had to operate.
Our hope is that it will spur questions that allow us to dive deeper in conversation on topics mentioned, implied and inspired.
Why we needed something (aka The Problem)
Some time ago I was working with a client manager who had a “quality problem.” This manager was getting pressure from his VP, who was getting negative feedback from users about the quality of one of the manager’s software products. Bugs that reached the field regularly cost tens to hundreds of hours apiece to deal with, and also introduced a risk of lost sales or profits. This manager was responsible for development and testing, but not requirements.
This existing enterprise application was written about ten years ago, had significant changes in every monthly release, and had a small development team averaging about five people, with regular rotations onto and off the project. There were over a quarter of a million lines of code in this application. The application had a very large user interface, and complicated integration with other systems. The team had an existing process of manual testing, both by developers and dedicated testers, and a large suite of automated blackbox system tests. The developers did not have practical experience in creating unit tests or applying unit test automation.
An analysis of the bugs revealed that the majority were introduced during the development cycle, with requirements bugs in second place.
The Objective
- Immediately improve the perception of quality of the software by outside organizations.
- Improve quality measurably for the long term.
- Reduce the cost of quality for the software from existing levels.
The Constraints
- No personnel changes – any changes must be supported by the current team (no permanent additions or replacements).
- No changes in existing commitments to stakeholders – commitments are in place for 6 months (at the full capacity of the team).
- Small budget for the project – a one-time cost of less than 5% of the operating budget (for the current team), with long term costs offset by other gains in productivity.
The Strategy
- Improve existing automated regression testing to improve quality for the long term.
- Change the development process to include regression testing as part of code-promotion (versus the current practice of regression testing release candidates).
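As a rough illustration of such a promotion gate, here is a minimal sketch in Python; the script name, test location, and promotion mechanics are assumptions rather than the client's actual tooling. It simply runs the automated regression suite at promotion time and blocks the promotion on any failure.

```python
#!/usr/bin/env python3
"""Hypothetical pre-promotion gate: run the regression suite before code is promoted.

A sketch only -- a real gate would hook into whatever promotion mechanism
(branch merge, build label, etc.) the team already uses.
"""
import subprocess
import sys


def run_regression_suite() -> int:
    # Assumes the regression tests live under tests/regression and are
    # discoverable by the standard unittest runner.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests/regression"]
    )
    return result.returncode


if __name__ == "__main__":
    exit_code = run_regression_suite()
    if exit_code != 0:
        print("Regression suite failed -- promotion blocked.")
    sys.exit(exit_code)
```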
The Tactics
- Use unit testing (specifically whitebox testing) to augment the existing test framework – overall, a gray box testing process. To minimize the maintenance effort over time, the testing framework was built around data-driven scripts that represent user sessions with the software, which made it easy for the team to create (and to delegate the creation of) those scripts. The scripts were combined with a set of inspections that checked the application for particular parameters, outputs, and behaviors. Each intersection of a script and an inspection yields a unit test (a sketch of this idea appears after this list).
- Immediately start writing tests for all new code. We flipped a switch and required developers “from this day forward” to replace their manual feature testing for ongoing development with the creation of automated unit tests. Kent Beck first suggested this technique to me about five years ago as a way to “add testing” to an existing application. His theory is that the areas of the code being modified are the areas most likely to be broken – existing code is less likely to spontaneously break, and is not the top priority for testing. Over time, if all of the code gets modified, then all of the code gets tested.
- Jump start a small initial test suite. We timeboxed a small initial effort to identify the “high risk” areas of application usage by defining the most common usage patterns. These patterns were then embodied in a set of scripts used in the testing framework. We also set aside time to create a set of initial inspections designed to provide valuable insight into the guts of the application. The developers identified the things they “commonly looked at” when making changes to the application. These inspections instrumented elements of the application (like a temperature gauge in your car – it tells you if the coolant is too hot, even if it doesn’t tell you why).
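To make the scripts-times-inspections idea from the first tactic concrete, here is a minimal sketch in Python. All of the names (AppSession, SCRIPTS, INSPECTIONS) are hypothetical, and the client's framework was not built line-for-line this way; the point is only that each data-driven script describes a user session, each inspection checks one instrumented aspect of the application after the session runs, and the cross product of the two generates the unit tests.

```python
import itertools
import unittest


class AppSession:
    """Hypothetical driver for the application under test."""

    def __init__(self):
        self.state = {"orders": [], "errors": []}

    def execute(self, step):
        # Interpret one step of a data-driven user-session script.
        action, payload = step
        if action == "create_order":
            self.state["orders"].append(payload)
        else:
            self.state["errors"].append(f"unknown action: {action}")


# Data-driven scripts: each represents a common user session.
SCRIPTS = {
    "single_order": [("create_order", {"id": 1, "qty": 5})],
    "two_orders": [("create_order", {"id": 1, "qty": 5}),
                   ("create_order", {"id": 2, "qty": 3})],
}

# Inspections: each checks one parameter, output, or behavior, like a
# temperature gauge -- it flags a problem without diagnosing it.
INSPECTIONS = {
    "no_errors_logged": lambda s: len(s.state["errors"]) == 0,
    "order_quantities_positive": lambda s: all(o["qty"] > 0 for o in s.state["orders"]),
}


def build_suite():
    """Generate one unit test per (script, inspection) pair."""
    suite = unittest.TestSuite()
    for (script_name, steps), (insp_name, check) in itertools.product(
            SCRIPTS.items(), INSPECTIONS.items()):

        def make_case(steps=steps, check=check, label=f"{script_name}/{insp_name}"):
            class SessionInspectionTest(unittest.TestCase):
                def runTest(self):
                    session = AppSession()
                    for step in steps:
                        session.execute(step)
                    self.assertTrue(check(session), f"inspection failed: {label}")

            return SessionInspectionTest()

        suite.addTest(make_case())
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

With two scripts and two inspections the sketch yields four tests; because the scripts are just data, new sessions can be added (or their creation delegated) cheaply, which is what keeps the maintenance cost of the suite low over time.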
Unfortunately, we can’t share the results beyond the client’s internal team. Anecdotally, a very similar approach for a different client, team, and application netted a 10% reduction in development effort and had a dramatically positive effect on both quality and perceived quality. At Tyner Blain, we strongly encourage using this approach.
Top seven tips for rolling out this plan effectively
- Set expectations. A key constraint for this approach was “don’t spend a bunch of money.” The process is designed to improve quality over time, with little (and dwindling) incremental cost. Every month, as more tests are added along with new code, the opportunity for bugs to be released to the test team (much less to the field) goes down. The rate of quality improvement will be proportional to the rate of change in the code base. Also point out that only those bugs introduced in the development cycle will be caught – requirements bugs will not be caught.
- Educate the development team. When asking developers to change the way they’ve been writing and releasing code for a decade, it can be tricky to get acceptance. If this isn’t handled well, responses can be as bad as “We don’t have a problem with quality” or “I know how to do my job – are you telling me that I don’t?” Start with education about the techniques and highlight the tangible benefits to developers. One of those benefits is fewer complaints about the quality of the code – most developers are proud of their work, and will gladly adopt any technique that helps them improve it, as long as they don’t feel defensive about it.
- Educate the managers. Help managers understand that unit testing isn’t a silver bullet – it can’t catch every bug, but done correctly, unit testing will catch the most bugs per dollar invested.
- Educate the test team. No, we’re not automating you out of a job. A gray box testing strategy is comprehensive. Automating regression testing effectively allows manual testers to focus on system level testing and overall quality assurance. The time saved can be applied to testing that should be, but isn’t being done today.
- Establish ownership. The developers are being asked to take explicit ownership of something they already own implicitly. Before incorporating regression testing as part of the development cycle, the contract with the test team was “The new stuff works. Let me know if I broke any of the old stuff.” With this process in place, the contract between the development team and the test team becomes “The new stuff works, some of the old stuff still works, and the new stuff will continue to work forever.”
- Provide feedback. Track metrics such as bugs versus lines of code (existing and modified) or bugs versus user sessions. Absolute numbers (bugs in the field, bugs found in test, numbers of inspections, scripts, and unit tests) are also good. Communicate these and other metrics to everyone on the team – managers, developers, testers. Provide the feedback regularly (at least with every release). This will help the project gain momentum and visibility, validate the ideas, and help propagate the approach to other software development cycles (a sketch of such a per-release report follows this list).
- Leverage the feedback cycle to empower the team members to make it even better.
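As a small illustration of the kind of per-release feedback report described above, here is a sketch in Python; the field names and the example numbers are hypothetical, not the client's actual data.

```python
from dataclasses import dataclass


@dataclass
class ReleaseMetrics:
    release: str
    bugs_found_in_test: int
    bugs_found_in_field: int
    lines_modified: int
    scripts: int       # data-driven user-session scripts
    inspections: int
    unit_tests: int    # script/inspection combinations currently enabled


def report(m: ReleaseMetrics) -> str:
    """Format the per-release feedback shared with managers, developers, and testers."""
    bugs_total = m.bugs_found_in_test + m.bugs_found_in_field
    per_kloc = 1000 * bugs_total / m.lines_modified if m.lines_modified else 0.0
    return (
        f"Release {m.release}: {bugs_total} bugs "
        f"({m.bugs_found_in_field} reached the field), "
        f"{per_kloc:.1f} bugs per 1000 modified lines, "
        f"{m.scripts} scripts x {m.inspections} inspections = {m.unit_tests} unit tests"
    )


if __name__ == "__main__":
    # Example numbers only, not real client data.
    print(report(ReleaseMetrics("1.4", 12, 2, 4800, 18, 9, 120)))
```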
[Update: The series of posts Organizing a test suite with tags recounts a real-world follow-up to the solution implemented as described in this post. We explore a design concept associated with refactoring the solution above. The first of those posts is here, or you can follow the links below.]
– – –
Check out the index of software testing series posts for more articles.
Hi Scott,
I recently accepted a management role. Having previously applied use-case-driven and test-driven development in my personal work, I’m now trying to do the same with the team. It isn’t easy… but this case study is giving me good hints on what to do.
I’ll add that in my case we didn’t even collect metrics on the development tests at first, such as the number of tests per module, executed tests, passed tests, etc. We first added these in the context of manual tests. After the team started spending some time executing the tests (including repetition – regression testing), they became more and more convinced that this is something worth automating. They are also starting to value testing and to accept that it is something that should be formalized.
Thanks for your hints,
Luis
Congratulations Luis, and thanks for the comments!
We’ve been helping our current client do the same thing – establish an automated test paradigm to augment their existing manual testing.
We were able to get management support for the program, although they were initially skeptical about the expected results we proposed. The dev team was interested in the potential benefits but did not expect them to materialize.
In the very first release cycle, with probably 5% of the testing in place, they caught an unexpected bug (one that would have slipped through the existing test process) – so it has already had some benefit for them. The size of their suite will double again this week – now the developers are accelerating the rollout of testing.
We also made a point of visibly communicating to their manager both the tangible benefit and the fact that one of our client’s developers did the work; that manager has forwarded the information up the chain to his own manager.
Success breeds success – and it’s really fun to help a team transform in this way.
Thanks again for sharing with all of us!