Passing the Wrong Whitebox Tests

[Image: a broken chain]

We’ve talked about the value of whitebox testing in our Software testing series post on whitebox testing. What we haven’t explored is how to make sure we are creating the right tests: we have to validate our tests against the requirements. This post shows where the flaw lies in the typical whitebox testing process, and how to fix it.

A reader emailed us with the comment, “It’s been my experience that developers can’t test their own code.” The problem wasn’t the developers; it was a missing link in the software development chain (a missing step in the process).

Typical whitebox testing process

[Diagram: typical whitebox testing process]

In the typical whitebox testing process, the developer receives a PRD (product requirements document) from the product manager. Next comes a communication step (shown in the diagram as a rectangle with two vertical bars), where the developer interprets the requirements. That interpretation drives both the design and creation of the tests and the design and creation of the code. The tests are then run against the software, and results are obtained.

The sneaky problem

The sneaky problem lives in the communication step, where the developer interprets the requirements document. That single interpretation yields both the code and the tests. If the developer interprets the requirements incorrectly, he will design the wrong whitebox tests, and write the wrong code to match. The tests will pass, and ultimately he will deliver software that does the wrong thing very well.
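To make the failure mode concrete, here is a minimal sketch in Python. The discount requirement, function, and tests are all hypothetical; the point is that the same misreading of “over $100” flows into both the code and the tests, so the tests pass anyway.

```python
# Hypothetical requirement: "Apply a 10% discount to orders over $100."
# The developer reads "over $100" as "at least $100" and bakes that
# interpretation into BOTH the code and the tests.

def discounted_total(order_total: float) -> float:
    """Return the order total after any discount (developer's interpretation)."""
    if order_total >= 100:  # wrong: the PRD meant strictly over $100
        return round(order_total * 0.90, 2)
    return order_total

def test_discount_applies_at_threshold():
    # The same misinterpretation shows up here, so this test passes.
    assert discounted_total(100.00) == 90.00

def test_no_discount_below_threshold():
    assert discounted_total(99.99) == 99.99

test_discount_applies_at_threshold()
test_no_discount_below_threshold()
print("All tests pass -- the software does the wrong thing very well.")
```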

The solution

The number one way to be a better listener is active listening: restating what you heard so the speaker can confirm or correct your interpretation. (Listening is the Family Feud #1 answer to “Name a form of communication.”) We can, should, and must use the same technique when interpreting requirements documents: modify the process to include validation (active listening) of our interpretation of the requirements.

[Diagram: better whitebox testing process]

The steps that don’t change in this modified whitebox testing process have been greyed out.

Process changes

  • Split the “Design and create tests” process step into two steps: “Design tests” and “Create tests.”
  • Insert a test-design validation step into the process.

The test-design validation step is a conversation between the developer (who designed the tests) and the product manager (who wrote the PRD). In this conversation, the developer explains the test design and the interpretation of the PRD behind it, and the product manager validates that the interpretation is correct. This requires a product manager who can tell, from the test designs alone, whether the tests will actually confirm that each requirement is satisfied.
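One way to make a test design reviewable by a non-programmer is to capture it as plain-language given/when/then cases, separate from any test code. The sketch below is one possible format, not a prescribed one, and it reuses the hypothetical discount requirement from earlier; note how the second case would let the product manager catch the boundary misreading before any code is written.

```python
# A test design expressed as data, not code: the product manager reads the
# given/when/then cases; the developer writes executable tests only after
# the interpretation is confirmed. (Requirement and cases are hypothetical.)

from dataclasses import dataclass

@dataclass
class TestCase:
    given: str
    when: str
    then: str

discount_test_design = [
    TestCase(
        given="an order totaling $100.01",
        when="the total is calculated",
        then="a 10% discount is applied",
    ),
    TestCase(
        given="an order totaling exactly $100.00",
        when="the total is calculated",
        then="no discount is applied",  # catches the "at least $100" misreading
    ),
]

for case in discount_test_design:
    print(f"GIVEN {case.given}\n WHEN {case.when}\n THEN {case.then}\n")
```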

Summary

Without validation that whitebox tests are going to test the correct interpretation of the requirements, we run the risk of building the wrong software right.

Further reading

For a view of the overall software development process, check out the Software development process example post, which organizes eight other articles, each focusing on a different part of the big picture.

To see how other errors like this are introduced in the software development process, check out Where bugs come from, which includes details of how to incorporate feedback loops designed to prevent bugs.

Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.
