Giving a functional spec to developers and testers is not sufficient for creating great software. To a developer, a spec is only the what and not the why. And for a tester, the software requirements specification is neither. Use cases provide the why that explains the intent of the system for the implementation team.
We started a series of posts exploring why we apply use cases as part of product management, identifying 8 goals for which use cases are relevant. That series post contains links to detailed articles on each goal as we write them over the coming weeks. Bookmark it and come back to see the puzzle come together.
We recently wrote an article, Communicating Intent With Stakeholders, that shows how use cases are consumed from the perspective of users and customers, or stakeholders. In that article, we showed a diagram that compares the different perspectives of the software development artifacts.
Use cases fall in the “requirements” row of this diagram. The requirements row represents the documents used to articulate the value of a proposed solution. The article, Requirements Documents – One Man’s Trash, provides a more detailed explanation of these differing perspectives.
Communication of intent with the implementation team is different from communication with stakeholders. The implementation team comprises people performing one of two activities: building the solution and assuring that the solution is correct. Different teams staff these roles differently: some have separate QA organizations, while others rely on developers to also be responsible for quality.
Some people will argue that a development team only needs the specification to create software. That's absolutely true in a turn-the-crank situation, and it's very common in outsourcing arrangements that use a low-level outsourcing model. The downside of these approaches is that we hamper our developers' ability to apply their creative skills to building innovative solutions.
By providing developers with an understanding of why they are being asked to implement software that conforms to a specification, we get the opportunity to benefit from their feedback. A free electron developer [scroll down to the People section of this article for a definition] may have an epiphany about a significantly better way to solve the problem. Without insight into the intent of the software, that star developer is hamstrung.
Quality assurance personnel (QA) are responsible for assuring that the software does what it is supposed to do. While developers can write whitebox tests to assure that their code behaves as anticipated, QA has to rely on blackbox tests to assure that the intent of the system is being met. This requires that QA understand the intent of the system.
Blackbox tests are generally described as a series of user actions (or the automated equivalent), usually referred to as a script. To be as valuable as possible, each script should be designed to mimic what a user would do when trying to achieve a particular goal. Functional requirements are written to support use cases. While we can write short tests that validate individual functional requirements, those tests would really only be scriptlets, because they do not represent an entire user session.
Good tests are atomic. When a test fails, we want to be able to say that test X for functional requirement Y failed. And scripts should be written with these atomic assertions in mind. There are two reasons, however, to group these assertions together into scripts that match use cases.
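As a minimal sketch of what atomic assertions can look like, assuming a hypothetical web-store system under test (the requirement IDs, session class, and method names here are all illustrative, not from the original article):

```python
# Hypothetical system under test: a stand-in for a real web store.
class FakeStoreSession:
    def __init__(self):
        self.items = []

    def login(self, user, password):
        return password == "secret"  # illustrative FR-101 behavior

    def add_to_cart(self, item, price):
        self.items.append((item, price))

    def cart_total(self):
        return sum(price for _, price in self.items)


# Each assertion function checks exactly one functional requirement,
# so a failure names both the test and the requirement that broke.
def assert_fr_101_valid_login(session):
    """FR-101: the system shall authenticate a registered user."""
    assert session.login("alice", "secret"), "FR-101: valid login rejected"


def assert_fr_205_cart_total(session):
    """FR-205: the cart total shall equal the sum of item prices."""
    session.add_to_cart("widget", 4.00)
    session.add_to_cart("gadget", 6.00)
    assert session.cart_total() == 10.00, "FR-205: wrong cart total"


session = FakeStoreSession()
assert_fr_101_valid_login(session)
assert_fr_205_cart_total(session)
print("all atomic assertions passed")
```

When one of these fails, the error message identifies exactly which requirement's test broke, which is the property the paragraph above is after.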
First, when using a structured requirements approach, a single functional requirement may support multiple use cases. Developers will want to know which functional requirement failed, and the context in which it failed. However, when we communicate project status with stakeholders, we will be talking to them in the language of use cases. Providing an association between functional requirements, their tests, and the relevant use cases makes this much easier.
Second, because these are blackbox tests, they are written without insight into the implementation by definition. It is possible that a particular functional requirement will pass all of its tests when performing one use case, but fail them when performing another. We can use pairwise testing or other techniques to find these circumstances by brute force. But we can also re-use the assertions (an individual test of a functional requirement) across multiple scripts (use cases).
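The grouping and reuse described above can be sketched in code. This is a hypothetical illustration, with invented requirement and use-case names: each assertion validates one functional requirement, and each use-case script composes shared assertions in the order a user would exercise them, so the mapping between requirements and use cases is explicit.

```python
# Each assertion tests one functional requirement against shared
# session state (a plain dict stands in for the system under test).
def assert_fr_101_login(state):
    """FR-101: authenticate a registered user."""
    state["logged_in"] = True
    assert state["logged_in"], "FR-101 failed"


def assert_fr_205_cart_total(state):
    """FR-205: cart total equals the sum of item prices."""
    state["total"] = 4.00 + 6.00
    assert state["total"] == 10.00, "FR-205 failed"


def assert_fr_310_checkout(state):
    """FR-310: checkout succeeds for a logged-in user with a cart."""
    assert state.get("logged_in") and state.get("total", 0) > 0, "FR-310 failed"


# Use-case scripts: ordered sequences of re-used assertions. The
# mapping doubles as traceability from use cases to requirements.
USE_CASES = {
    "UC-1 Buy a product": [
        assert_fr_101_login,
        assert_fr_205_cart_total,
        assert_fr_310_checkout,
    ],
    "UC-2 Review the cart": [
        assert_fr_101_login,
        assert_fr_205_cart_total,
    ],
}


def run_script(name):
    state = {}
    for assertion in USE_CASES[name]:
        assertion(state)  # a failure names both the FR and the use case
    return f"{name}: passed"


for uc in USE_CASES:
    print(run_script(uc))
```

Note that FR-205 appears in both scripts: if it passes in one use case and fails in the other, the script name in the failure tells us which context exposed the problem.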
Members of the development and quality organizations benefit from understanding why they are implementing or testing a particular functional spec. Use cases provide them with that context. Use cases are also the artifacts most easily understood by all team members.
People in all three groups can easily consume use cases. For anyone unfamiliar with them, a quick lesson on how to read use cases is all it takes.