As product managers, we talk about creating the right solutions with our products. Understanding the very real problems our customers face, understanding the very real opportunities our markets present, and manifesting that understanding in a product roadmap.
Other than being “not as good,” how expensive is it to build the wrong product?
The Cost of Poor Quality
There’s an analog to the market dynamics of making poor product decisions – executing with poor quality. Many research studies and articles have identified the market impacts of poor quality. This has become so well accepted that people today cite it like a law of physics (one example here based on this 1988 IEEE research by Barry Boehm and Philip Papaccio) as the “1-10-100 rule.” The primary conclusion of that research is that the cost of fixing a bug grows roughly tenfold at each stage it goes undetected:
- Costs $10 when you catch (and fix) the bug during implementation.
- Costs $100 when you catch the bug during QA and send the product back to development (then test again).
- Costs $1,000 when your customers catch the bug in the field, forcing the team to remedy the problems, rush out a patch release, and/or go to heroic lengths to manage a PR problem.
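To make the arithmetic concrete, here is a minimal sketch of the rule of thumb. The dollar figures and the bug distributions are illustrative assumptions, not measured data – the point is only how quickly totals diverge when bugs slip to later stages:

```python
# Illustrative sketch of the 1-10-100 rule of thumb (assumed figures, not data):
# the cost to fix a bug grows roughly 10x at each later stage of detection.
STAGE_COST = {"implementation": 10, "qa": 100, "field": 1000}

def total_fix_cost(bugs_caught: dict) -> int:
    """Total fix cost, given how many bugs were caught at each stage."""
    return sum(STAGE_COST[stage] * count for stage, count in bugs_caught.items())

# The same 100 bugs, caught mostly early vs. mostly in the field:
early = total_fix_cost({"implementation": 80, "qa": 15, "field": 5})
late = total_fix_cost({"implementation": 5, "qa": 15, "field": 80})
print(early)  # 7300
print(late)   # 81550
```

Shifting most of the same bugs from implementation-time detection to field detection inflates the bill by more than an order of magnitude – which is the whole argument for investing in quality early.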
This is an opportunity in front of your product team – a 100x payback from investing in quality during the development process. Of course, be pragmatic about it - if the cost of testing exceeds the cost of bugs, don’t test.
This is not a solved problem, by any stretch, but the methods for addressing it are well understood now. In fact, a 2001 article by Barry Boehm and Victor Basili shows that in some cases the labor costs to resolve bugs can be as low as 5:1 – for a subset of smaller systems, using more “agile” processes. That lowered ratio does not take into account the lost market opportunities and the costs of cleaning up collateral damage to your product – just the immediately realizable (and measurable) costs of resolution.
One very real problem, when talking about “bugs,” is defining what a “bug” is. And the definition of a bug is a matter of perspective. A developer can reasonably assert that “if it meets the spec, it is not a bug – it is working as designed.” But what if the spec is wrong? The developer may not be guilty, but collectively, your team screwed up. There’s a “bug” in the requirements.
What Is A Requirements Bug?
Now things are getting interesting.
If you wrote a requirement that you interpret as “A” and your developers interpret as “B” – you definitely have a bug – the team won’t build the right product. Applying the same cost escalation to requirements, you can:
- Make sure you have a shared understanding of the documented requirements through active listening before development begins ($1). Following the Rules of Writing Requirements will help prevent this miscommunication.
- Wait until the engineering team is ready to demo their progress ($10). They will have to build it again, because they built the wrong stuff.
- Wait until development is complete and QA is validating that the code meets the spec ($100). This gets tricky if you are thinking “A”, the developers are thinking “B”, and QA is thinking “C.”
- In classic throw-it-over-the-wall mode, wait until the product is launched, and it is the wrong product ($1,000). Assuming “A” was the right problem to solve, the cost of entering the market with a solution to “B”, leaving “A” unaddressed, is impressively high.
This gets interesting because the above assumes that “A” was the right problem to solve. What if “G” was the right problem to solve, and “A” was the wrong market problem? Even if everything (else) is working perfectly – you document requirements for “A”, the engineering team creates a marvelous “A” and it launches without implementation errors – you still fail, and incur the 1,000x cost of a failed product launch.
There is an even larger opportunity in front of your product team – a 1,000x payback on discovering and choosing to solve the right problems for your customers and markets.
- Would Palm still be independent if the Pre had solved a compelling problem?
- Why did Intuit have to buy Mint.com – could they have embraced the same customers with Quicken?
- What is Garmin going to do now that “free” GPS mapping and turn-by-turn directions are becoming ubiquitous? If it is “more of the same,” how much are they wasting?
I’m not aware of any studies that show that “requirements bugs” fit the same 1/10/100/1000 cost explosion model that “implementation bugs” exhibit. Emotionally, it “feels about right” to me – it passes my “sniff test.”
On projects I’ve worked on, there are times when days of research would have been required to avoid or redirect just a few hours of implementation effort. And I’ve seen man-years invested in solving the wrong problems, when not much more research would have prevented it.
My intuition from products and teams I’ve worked with is that it probably averages out somewhere around 10x.
What does your gut (or your data – if you have some, post a link below!) tell you?