Measuring the Cost of Quality: Software Testing Series

Should we test our software? Should we test it more?

The answer to the first question is almost invariably yes. The answer to the second question is usually “I don’t know.”

We write a lot about the importance of testing, and we have several other posts in our series on software testing. But how do we know when we should do more automated testing?

Deciding whether to do more testing is really an ROI analysis. Kent Beck has a great position:

If testing costs more than not testing, then don’t test.

At first glance, the statement sounds trite, but it really is the right answer. If we don’t increase our profits by adding more testing, we shouldn’t do it. Kent is suggesting that we only increase the costs and overhead of testing to the point that there are offsetting benefits.

We need to compare the costs and benefits on both sides of the equation. We’ll start with a baseline of the status quo (keeping our current level of testing), and identify the benefits and costs of additional testing, relative to our current levels.
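
To make that comparison concrete, here is a minimal sketch of the decision rule in Python. Every dollar figure and category name below is a made-up placeholder for illustration, not data from this article or any real project.

```python
# Hedged sketch: compare the incremental benefits and costs of additional
# automated testing, relative to the status-quo baseline.
# All numbers are hypothetical placeholders.

def should_increase_testing(incremental_benefit: float, incremental_cost: float) -> bool:
    """Return True when the additional testing pays for itself relative to the status quo."""
    return incremental_benefit > incremental_cost

# Illustrative benefits of additional automated testing (relative to current levels).
benefits = {
    "avoided_field_bug_costs": 40_000,
    "avoided_regression_triage": 15_000,
    "faster_debugging_of_new_code": 10_000,
}

# Illustrative costs of additional automated testing (relative to current levels).
costs = {
    "writing_new_tests": 25_000,
    "running_and_analyzing_tests": 8_000,
    "fixing_discovered_bugs": 12_000,
    "extra_infrastructure": 5_000,
}

if should_increase_testing(sum(benefits.values()), sum(costs.values())):
    print("Do more automated testing.")
else:
    print("Hold at the current level of testing.")
```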

We should do more automated testing when the benefits outweigh the costs

We’ll limit our analysis to increasing the amount of automated testing, and exclude manual testing from our analysis. We will use the assumption that more testing now will reduce the number of introduced bugs in the future. This assumption will only hold true when developers have the ability to run the automated tests as part of their personal development process. We’ve written before about the sources of bugs in the software development process, and in other posts in this series we show how automated testing can prevent future bugs (unlike manual testing, which can only identify current bugs).

We are also assuming that developers are running whitebox unit tests and the testing team is running blackbox tests. We don’t believe that has an impact on this analysis, but it may be skewing our perspective.
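
As an illustration of what we mean by a whitebox unit test a developer can run locally, here is a small sketch. The function under test (apply_discount) and the test framework (pytest) are assumptions for illustration only.

```python
# Hypothetical example of a whitebox unit test a developer can run locally
# (e.g. with pytest) as part of their personal development process.
# The function under test, apply_discount, is invented for illustration.

import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 10) == 90.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running a file like this with `pytest` takes seconds, which is what makes it practical for developers to run the tests on every change, rather than waiting for the testing team to find problems later.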

Benefits

  • Reduced costs of bugs in the field. Bugs in the field can force us into “emergency releases” to fix them. They can increase costs for the internal teams that use our software and have to work around the bugs. They can delay sales. And bugs cause lost customers.
  • Reduced costs of catching future bugs. When developers can run a regression suite to validate that their code didn’t break anything before handing it to the testing team, they avoid introducing regression bugs, and thereby avoid the costs of finding, triaging, and managing those bugs (see the sketch after this list).
  • Reduced costs of developing around existing bugs. Developers can debug new code faster when they can isolate its effects from other (buggy) code.
  • Reduced costs of testing around existing bugs. Testers trying to validate a release sometimes ask, “What’s the bug behind the bug?” A bug is discovered, the slack time in the schedule is spent fixing it, and the code is resubmitted to test to confirm the fix. Then another bug surfaces, one that was hiding behind the first and was untestable because the first bug obscured it. Addressing that second bug introduces unplanned testing costs. Preventing the first bug reduces the cost of testing for the latent one.
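
Here is one hedged sketch of what that pre-handoff regression gate might look like. The tool (pytest), the tests/ directory, and the exit-code convention are assumptions for illustration, not something prescribed by this article.

```python
# Hedged sketch: run the automated regression suite before handing a build
# to the testing team, and refuse the handoff if anything fails.
# Assumes a pytest-based suite living under tests/ (an assumption for illustration).

import subprocess
import sys

def run_regression_suite() -> int:
    """Run the regression suite and return its exit code (0 means all tests passed)."""
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode

if __name__ == "__main__":
    exit_code = run_regression_suite()
    if exit_code == 0:
        print("Regression suite passed; safe to hand off to the testing team.")
    else:
        print("Regression suite failed; fix the regressions before handing off.")
    sys.exit(exit_code)
```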

Costs

Most of these increased costs are easy to measure once they are identified: they are straightforward tasks whose effort can be measured as labor costs (a rough tally sketch follows the list below).

  • Cost of time spent creating additional tests.
  • Cost of time spent waiting for test results.
  • Cost of time spent analyzing test results.
  • Cost of time spent fixing discovered bugs.
  • Cost of incremental testing infrastructure. If we are in a situation where we have to increase our level of assets dedicated to testing (new server, database license, testing software licenses, etc) in order to increase the amount of automated testing, then this cost should be captured.
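
As a rough illustration of measuring those costs as labor, here is a small tally sketch. The hours, the loaded labor rate, and the infrastructure figure are all hypothetical placeholders.

```python
# Minimal sketch: tally the incremental costs listed above as labor hours
# plus infrastructure. All figures are hypothetical placeholders.

HOURLY_RATE = 75.0  # assumed fully loaded labor rate, dollars per hour

labor_hours = {
    "creating_additional_tests": 120,
    "waiting_for_test_results": 20,
    "analyzing_test_results": 30,
    "fixing_discovered_bugs": 80,
}

infrastructure_cost = 5_000  # e.g. an extra test server or license (assumed)

incremental_cost = sum(labor_hours.values()) * HOURLY_RATE + infrastructure_cost
print(f"Incremental cost of additional automated testing: ${incremental_cost:,.2f}")
```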

Conclusion

This framework gives us a good basis for deciding whether to increase automated testing. By focusing on the efficiency of our testing approaches and tools, we can reduce the costs of automated testing. That, in turn, lets us do more automated testing, shifting the Pareto-optimal point: by reducing our incremental costs, we capture more incremental benefits.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

6 thoughts on “Measuring the Cost of Quality: Software Testing Series”

  1. With all due respect, I think you missed the forest for the trees in this article.

    Any product or service has some optimal mix of features, quality, cost and speed to market. Each of these product attributes impacts the competitive advantage and the profit generated. The competitive advantage is based on the level of customer expectation, which increases over time.

    So now removing my MBA hat and putting my CQM hat on …there should be some optimal quality level (balanced with speed, cost and feature set) that will produce the greatest customer satisfaction (and revenue).

    This level of quality requires some $ amount put toward quality assurance. This $ amount should then be split between automation and manual testing that will generate the greatest level of quality.

    Now you can certainly argue that there are processes or automation techniques that reduce overall cost and speed the development cycle …but my response would be that this activity would generate more code, which would in turn require more testing.

  2. Hmmm …so I guess my point is that you have to follow it through to the bottom line. Will the increased cost necessary to generate additional quality increase revenue by a greater amount (and thus increase profit)?

    This depends on the product (or service), the maturity of the industry, customer expectations, brand positioning, legal exposure, and a bunch of other stuff.

    – If you are creating a luxury car then quality is more important than price or speed to market.
    – If you are creating version 1.0 of some hot new software app, then speed to market may be most important.
    – If you are producing a commodity or an older technology product then price may be your primary driver.

    So it really depends on the product. And even with a good understanding of how important quality is to the product, it is very hard to turn that into specific decisions about increasing or decreasing quality spending.

  3. Jim – great point that the ultimate driver of quality investments should be the impact on the desirability (or marketability) of the product.

    While I didn’t talk at all about what the most profitable level of quality is – and to your point, that’s a much bigger topic – I hope that I provided good suggestions on tactically addressing the decisions around a particular (undefined) level of quality.

    I hope future readers of this article also read your comments – there is absolutely a strategic decision to be made about a desired level of quality. Once that decision has been made, these tips will help people make penny-wise choices about how to achieve that target level of quality (which hopefully was not made pound-foolishly).

    Thanks again for reading and for commenting!

    Scott

  4. Pingback: Ron Geens
  5. Pingback: uaoeu
  6. [John Hunter writes]

    I agree with the idea of improving testing so as to get a larger benefit at lower than current costs (there is often plenty of room to do this – get better quality without increasing costs). I also think the largest “cost” is often the lost confidence of users and potential customers. Bugs make you seem sloppy and untrustworthy. If you rely on customers trusting you, that is very dangerous. In this post I talk about improving software development with automated tests:

    http://management.curiouscatblog.net/2010/03/08/improving-software-development-with-automated-tests/
