I just found Roger Cauvin’s blog, Cauvin, and was reading through his archive. I came across a posting from July, Should all Requirements Be Testable, that is worth thinking about.
In his post, Roger uses an example of an untestable requirement: "We might specify that the car should last seven years without repairs as long as the owner maintains the car according to a certain maintenance schedule and doesn't have a collision." He makes a great point: just because you can't directly test a requirement doesn't mean you should ignore it. And I agree.
The premise behind the rule that requirements must be testable is driven by the goal of avoiding ambiguous language in your requirements. Statements like "the application must have a clean user interface" or "search response times must be fast" are also untestable, but more because of language than anything else.
You can rewrite these hypothetical ambiguous requirements in a testable way –
"The application will meet the (attached) user interface guidelines," where the UI guidelines describe detailed, inspectable criteria (a common navigation bar at the top of each page, no horizontal scrolling on an 800×600 interface, controls at least 10 pixels apart, etc.).
"Search results must return the first page of results within 2 seconds when the user is connected to the same LAN as the server. If there are multiple pages, each additional page must be presented to the same user within 2 seconds of selection."
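The rewritten search requirement lends itself to an automated check. Below is a minimal sketch of such a check in Python; the endpoint URL, query parameters, and use of the requests library are my own illustrative assumptions, not part of the requirement itself.

```python
# Minimal sketch of an automated check for the rewritten search requirement.
# The endpoint URL and query parameters are hypothetical, for illustration only.
import time
import requests

SEARCH_URL = "http://search-server.local/search"  # assumed same-LAN server
MAX_SECONDS = 2.0

def first_page_within_limit(query: str) -> bool:
    """Return True if the first page of results arrives within the 2-second limit."""
    start = time.monotonic()
    response = requests.get(
        SEARCH_URL,
        params={"q": query, "page": 1},
        timeout=MAX_SECONDS + 1,  # let the request finish so we can measure it
    )
    elapsed = time.monotonic() - start
    return response.ok and elapsed <= MAX_SECONDS

if __name__ == "__main__":
    assert first_page_within_limit("sedan"), "First page of results took longer than 2 seconds"
```

The same pattern extends to the additional pages by timing each page request against the same 2-second limit.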
Back to Roger's example…
While you can’t wait 7 years to test the car before you decide to build it, you can rewrite the requirement to make it testable.
First, I would point out that the example requirement is ambiguous. Do they mean that none of the cars will have a warranty repair? Or no more than 1% of the cars? Greater specificity should be included. Let's add the 1% number. We will also want to specify "normal usage patterns" – which can mean no off-road driving for sedans, a specified temperature range, a maximum number of miles per month, etc.
We still can't directly test the requirement. And it's not actionable – you haven't told the engineers how to know when they've completed the design of the car.
How do car manufacturers build quality cars today? They test components and assemblies of components, and characterize their failure rates statistically. Then they combine that empirical data with a statistical model of the expected wear and tear of the vehicle over time. The result is a statistical prediction of when the car is likely to have its first warranty repair. And that statistical prediction is a continuum. But it's testable, if you rewrite the requirement:
"The results of running our existing lifetime-quality-test* for sedans on the vehicle design will predict that fewer than 1% of cars will have a warranty repair during their first 7 years of usage, with a 90% confidence level." *The lifetime-quality-test is a referenced document in the requirements, and it describes how components are tested.
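To show the shape of such a prediction, here is a rough Monte Carlo sketch in Python. The component names, failure rates, and exponential-lifetime model are my own made-up assumptions for illustration; they stand in for the real component test data and the referenced lifetime-quality-test.

```python
# Illustrative Monte Carlo sketch of a lifetime-quality prediction.
# Component names, failure rates, and the exponential lifetime model are
# assumptions for illustration -- not the referenced lifetime-quality-test.
import math
import random

YEARS = 7
N_CARS = 100_000          # simulated vehicles
TARGET_FRACTION = 0.01    # no more than 1% with a warranty repair in 7 years

# Hypothetical per-component annual failure rates from component testing.
component_rates_per_year = {"transmission": 0.0004, "alternator": 0.0006, "ecu": 0.0003}

def car_needs_repair() -> bool:
    """Simulate one car: does any component fail within YEARS under the assumed model?"""
    return any(random.expovariate(rate) < YEARS for rate in component_rates_per_year.values())

failures = sum(car_needs_repair() for _ in range(N_CARS))
fraction = failures / N_CARS

# Crude one-sided 90% upper confidence bound on the fraction (normal approximation).
upper_90 = fraction + 1.2816 * math.sqrt(fraction * (1 - fraction) / N_CARS)

print(f"Predicted fraction with a warranty repair in {YEARS} years: {fraction:.3%}")
print(f"90% upper bound: {upper_90:.3%}")
print("Requirement met" if upper_90 < TARGET_FRACTION else "Requirement not met")
```

In practice the confidence bound would come from the referenced test procedure rather than a simulation, but the pass/fail decision has the same form.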
Anyone have an "untestable" example for me?
Roger L. Cauvin said,
December 12, 2005 at 5:36 pm
Tyner, you're quite right that you can rewrite requirements so that they are directly testable. Unfortunately, in some cases you then lose sight of the real requirement.
As I mentioned in my post, the seven year requirement is, in principle, testable. It just takes seven years to test it. It is testability in principle that ensures the requirement is clear and unambiguous. (You are right that driving habits and such would also need to be included.)
The seven year requirement is not practical to test directly, however. Thus, as you suggest, we must devise a test that is reasonably expected to simulate (or predict) what will happen in seven years. But that is the job of the tester, not the requirements analyst. For testability in practice is not what ensures a good requirement. Testability in practice is important .. er .. for testing!
tynerblain said,
December 12, 2005 at 8:38 pm
Roger,
Thanks for the comment and insight. I completely agree that it is the responsibility of someone on the development team (coder or tester) to design the particular tests – or in the car example, a quality engineer.
However, part of the validation of a requirement is that it can be implemented – or it shouldn't be a requirement. One component of validating a requirement is assessing the language of the requirement, to determine if you can unambiguously identify it as being complete. The QE needs to manage the design of the test, but as part of accepting the requirement from the requirement writer, the QE should make sure that he knows how to test it.
Collaboration and iteration are key to writing great requirements – feedback from the QE is what helps the requirement writer rewrite the requirement. And that feedback cycle is important, because the requirement writer will then have to get signoff from the stakeholders that the rewritten requirement is sufficient – perhaps they only need 80% confidence.
Also, the design engineers need to know when they're done. Do they have to build a Yugo, or a Volvo? Without objective criteria as an input to their design process, you can reasonably expect that they will design something you don't want.
You make an excellent point about "losing sight of the real requirement" – I will post in depth on this topic soon. The goal (or "Goal") is high reliability for the car, and presumably someone has done an ROI analysis that says that 7 years without breakage is more profitable than 6 or 8. This should absolutely be documented. A supporting functional requirement should include the actionable details.
Thanks again for the comments, and keep posting good stuff to your blog – I enjoy it. Oh yeah – Tyner Blain is the company – I'm Scott :)
Roger L. Cauvin said,
December 13, 2005 at 8:21 am
Scott, I fully agree specifications directly testable in practice are important for testers and developers, and that someone on the team should formulate these specifications. Where we differ is in calling these specifications "requirements" when the underlying motivator – what I in this thread have called the "real requirement" – is something unambiguously testable in principle.
By the way, I don't agree that "controls must be at least 10 pixels apart" is a requirement. My belief is that true user interface requirements generally specify how easy it is to accomplish functional goals, not designs or design guidelines. See "Mistake 5" in my article, "How to Guarantee Product Failure". The link is http://www.cauvin-inc.com/articles/ProductFailure.htm.
Sorry, I just ran across this thread.
Tyner asked: Anyone have an "untestable" example for me?
Yes – "Self-driving vehicle control software must be safe, i.e., have a failure rate no greater than 10^-8 failures/hour."
It is known that measuring this level of ultra-quality is infeasible.
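To make the infeasibility concrete, here is a rough back-of-the-envelope sketch (my own illustration, assuming a constant failure rate and zero observed failures, the most optimistic case): the failure-free operating hours needed to demonstrate such a rate at a given confidence level.

```python
# Rough illustration of why a 1e-8 failures/hour target cannot practically be
# demonstrated by testing. Assumes a constant failure rate (exponential model)
# and zero failures observed during the test -- the most optimistic case.
import math

target_rate = 1e-8   # failures per hour
confidence = 0.90    # desired one-sided confidence

# With zero failures in T hours, P(no failure) = exp(-rate * T). To claim
# rate <= target_rate at the chosen confidence, we need
# exp(-target_rate * T) <= 1 - confidence, i.e. T >= -ln(1 - confidence) / target_rate.
hours_needed = -math.log(1 - confidence) / target_rate
years_needed = hours_needed / (24 * 365)

print(f"Failure-free test hours needed: {hours_needed:.2e}")
print(f"That is roughly {years_needed:,.0f} years of continuous operation.")
```

That works out to roughly 2.3 × 10^8 failure-free hours, on the order of 26,000 device-years, which illustrates the commenter's point that this level of reliability cannot be demonstrated by direct testing alone.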