Most teams think about testing in terms of code coverage – what percentage of the lines of code is covered? What matters to our stakeholders is how well the software works. More precisely, how well does the software let the users work? We should target our quality message in terms that are relevant to them, because users care about what they can do with the software, not how well we programmed it.
The problem is that we tend to think about our software from the inside out, and our customers think about it from the outside in. We need a way to communicate our understanding of the insides within the customer’s framework of understanding the outside.
Inside-Out Quality Measurement
Inside-out measurement of quality is what most developers and testers think about. Users don’t. Executives don’t. Customers don’t. This section recaps this view in order to contrast it with the outside-in view.
We’ve talked about how to view software as a framework of structured requirements, designs, code, and tests. This is the right way to think about it when we’re looking at the inside of our process. A diagram we’ve used before to show the structured requirements view of the world puts it pretty succinctly.
Interpreting the Inside-Out Diagram
- The user has goals.
- Each goal is achieved by enabling one or more use cases.
- Each use case is enabled by implementing one or more functional requirements with a set of characteristics and subject to restrictions.
- Each functional requirement drives a set of design decisions.
- Each design element is implemented by writing software (code).
- Each element of code is tested with one or more tests. These are generally unit tests, and by definition are white-box tests.
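To make that chain concrete, here is a minimal sketch of it as plain data structures. The class names (Goal, UseCase, Requirement, DesignElement, CodeUnit, Test) are invented for this illustration – they aren’t taken from Wiegers or any particular tool – and each arrow in the diagram simply becomes a parent holding references to its children.

```python
# A minimal, hypothetical sketch of the inside-out traceability chain.
# Class names are illustrative only; each level holds the elements that
# enable or implement it, one level down in the diagram.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Test:
    name: str
    passed: bool = False              # result of the last test run

@dataclass
class CodeUnit:
    name: str
    tests: List[Test] = field(default_factory=list)        # white-box tests

@dataclass
class DesignElement:
    name: str
    code: List[CodeUnit] = field(default_factory=list)     # implementing code

@dataclass
class Requirement:
    name: str
    design: List[DesignElement] = field(default_factory=list)

@dataclass
class UseCase:
    name: str
    requirements: List[Requirement] = field(default_factory=list)

@dataclass
class Goal:
    name: str
    use_cases: List[UseCase] = field(default_factory=list)
```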
Incorporating an interaction design process into this approach results in a more complex, blended view of the world. Wiegers’ view is simpler, so we’ll focus on how to communicate with our users in this framework. These ideas can be easily extended to other frameworks.
One Step Back
Taking one step back, we can see a slightly bigger picture. In the following diagram, we collapse all of the requirements elements into a single rectangle, and add testing.
This diagram shows a single use case, enabled through two requirements, each of which is driving a design element. Each design element is implemented with a section of code, and each section of code is also tested with one or more white box tests.
The Problem with Inside-Out
While inside-out is the way that we have to think about our software when developing it, it couldn’t be more wrong as a way to describe our quality to our stakeholders. We might be able to communicate the overly simplified diagram above to a client, but even adding one level of complexity will derail the message. The diagram below will make most stakeholders’ eyes water, even though it is still simplistic.
When we deliver a release, we need to communicate about the quality of the release. We can do this by providing the results of our test suite. The test suite is represented by the “T” boxes in these simplified diagrams. We can tell our stakeholders that we have 90% code coverage, or that 85% of our tests pass. Most measurements of quality are meaningless once you get outside of the box.
More gibberish for our customers.
Outside-In Quality Measurement
Our customers view software from the outside in.
Interpreting the Customer-View Diagram
- The user has one or more goals. (WHY?)
- The user achieves those goals by enacting use cases. (WHAT?)
- The use cases are enabled by buying software. (HOW?)
We engage with users during development in an agile or iterative process. During that engagement, the users will care about the next level of detail (requirements), but only because what they really care about are use cases (or scenarios). We need to write the requirements so that they can get value out of the software. That responsibility is ours; what they care about is how they use the software.
Using Inside Knowledge for Outside Communication
We need to communicate some message about the quality of each release to our stakeholders, because keeping them in the loop keeps our project alive. It sets expectations, can create momentum, and prevents surprises. All of these are very good things(tm).
We can do this by providing the results of our test suite. What we want to tell them is “Use Case 1 has 100% passing tests, UC 2 has 100% passing tests, and UC 3 has 50% passing tests.” This lets our stakeholders know that Use Case 1 and UC 2 are both ready to start generating ROI for them. UC 3 is not ready yet, and needs more work. When we combine this “quality by use case” message with “release planning by use case”, we are providing a clean message for our customers – one that is targeted at them and makes sense from their perspective.
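As a minimal sketch, assuming we already have a mapping from use cases to the tests that support them (building that mapping is covered below), producing this message could be as simple as the following. The test names, results, and mapping are made up to reproduce the numbers above.

```python
# Hypothetical test results and a hypothetical use-case-to-tests mapping,
# chosen to match the "quality by use case" message above.
test_results = {"T1": True, "T2": True, "T3": True,
                "T4": True, "T5": True, "T6": False}

tests_by_use_case = {
    "Use Case 1": ["T1", "T2"],
    "Use Case 2": ["T3", "T4"],
    "Use Case 3": ["T5", "T6"],
}

# Report quality per use case instead of one aggregate number.
for uc, tests in tests_by_use_case.items():
    passing = sum(test_results[t] for t in tests)
    print(f"{uc}: {100 * passing // len(tests)}% passing tests")

# Use Case 1: 100% passing tests
# Use Case 2: 100% passing tests
# Use Case 3: 50% passing tests
```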
In the diagrams, we see how each level of the diagram is supported by the next level down. Conversely, each level is supporting the level above it. By following the arrows backwards, we can see which code is supported by a given test case. We can then determine which design element is supported by that code, and keep moving up until we find the use cases. In our example, the mappings would look like the following:
Interpreting the Use Case Mapping Diagram
- The first three test cases all support Use Case 1.
- The next two test cases support UC 2.
- The last two test cases support both UC 2 and UC 3.
The last two test cases are doing double duty in our example, because both UC 2 and UC 3 depend upon the same requirement. This is a very common element of real-world diagrams like this one. The tests of a common code element will support multiple use cases.
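Here is a sketch of that backwards walk in code. The element names (tests T1–T7, code C1–C4, design elements D1–D4, requirements R1–R3) are hypothetical, chosen to match the mapping just described; in a real project these links would come from whatever traceability data we keep.

```python
# Each dictionary maps an element to the element(s) one level up the diagram.
# The names are hypothetical, arranged to match the example mapping above.
from collections import defaultdict

code_for_test   = {"T1": "C1", "T2": "C1", "T3": "C2",
                   "T4": "C3", "T5": "C3", "T6": "C4", "T7": "C4"}
design_for_code = {"C1": "D1", "C2": "D2", "C3": "D3", "C4": "D4"}
req_for_design  = {"D1": "R1", "D2": "R1", "D3": "R2", "D4": "R3"}
ucs_for_req     = {"R1": ["UC1"], "R2": ["UC2"], "R3": ["UC2", "UC3"]}

def use_cases_for(test):
    """Follow the arrows backwards: test -> code -> design -> requirement -> use cases."""
    code = code_for_test[test]
    design = design_for_code[code]
    requirement = req_for_design[design]
    return ucs_for_req[requirement]

tests_by_use_case = defaultdict(list)
for test in code_for_test:
    for uc in use_cases_for(test):
        tests_by_use_case[uc].append(test)

print(dict(tests_by_use_case))
# {'UC1': ['T1', 'T2', 'T3'], 'UC2': ['T4', 'T5', 'T6', 'T7'], 'UC3': ['T6', 'T7']}
```

Note that T6 and T7 land in both UC 2 and UC 3 – exactly the double duty described above.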
Quality Measurement and Motivation
Suddenly, some test cases are more important than others. When there is a system of metrics in place, people tend to optimize for those metrics.
With inside-out quality measurements, all test cases are created equal. If 5 of 1000 tests fail, quality is really good. Maybe. What if those failing tests exercise the database connection that is critical to every use case? Five tests fail (half a percent!) and nothing works.
With outside-in quality measurements, critical test cases carry the most weight. The five failing test cases will cause all of our use cases to fail, and they will get the attention they deserve.
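As a small sketch of that contrast, with made-up numbers: 1000 tests, 5 failures, and all 5 failures in tests of a shared database-connection component that every use case depends on.

```python
# Made-up numbers: 1000 tests, 5 failing, all 5 in a shared component.
total_tests = 1000
failing = {f"db_conn_test_{i}" for i in range(1, 6)}      # the 5 failing tests

# Inside-out: one aggregate number that looks great.
pass_rate = (total_tests - len(failing)) / total_tests
print(f"Inside-out: {pass_rate:.1%} of tests pass")        # 99.5% of tests pass

# Outside-in: a use case is only "ready" when every test supporting it passes.
tests_by_use_case = {
    "UC1": {"uc1_smoke_test"} | failing,   # every use case depends on the
    "UC2": {"uc2_smoke_test"} | failing,   # shared database-connection tests
    "UC3": {"uc3_smoke_test"} | failing,
}
for uc, tests in tests_by_use_case.items():
    ready = failing.isdisjoint(tests)
    print(f"{uc}: {'ready' if ready else 'NOT ready'}")
# Every use case reports NOT ready, despite a 99.5% overall pass rate.
```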
The same approach can be used for measuring code coverage, cyclomatic complexity, or any other (normally) inside-out metric. Developers are smart. When they see that they can kill N birds with one stone, they jump at the chance. Fixing a critical bug, or adding a well-placed test case, can have multiplied impact with this approach.
Use cases that are isolated will get the least attention, unless we also prioritize them.
Conclusion
We write software for our customers. They buy it because it is valuable to them. Our customers think about that value in terms of what they can accomplish with the software.
When we communicate with our customers about quality, it should be on our customers’ terms, not ours.
Wow, this is great stuff; in particular I like the way you break it down and then relate it back to what is most important to the people who are paying for the development in the first place. We often pay less attention to the buyer than we should as we get caught up in the importance of our job as development teams.
You must have figured out how to get the Mountain Dew away from the puppy. Keep up the good work – it makes me want to do better in my own blogging efforts.
Anthony
Anthony, thanks so much!
For folks who didn’t like the article as much, Lidor Wyssocky posted a great critique of it at The Mindset.
His concern is that by focusing on the one topic, outbound quality messages, readers might mistakenly conclude that the inbound management of quality is somehow less important. It isn’t. Check out Lidor’s post for an alternate view – it’s the best written critique of my writing that I’ve seen to date.
Thanks to Lidor too!