Test Smarter, Not Harder – Part 3

[Figure: interconnected controls]

This series is a reprint of an article by Scott Sehlhorst, written for developer.* in March 2006. A recent article on DailyTech about “new” methods for software testing points to some very interesting research by the NIST (National Institute of Standards and Technology) Information Technology Lab. We’ve split the original article into three parts to be more consistent in length with other Tyner Blain articles.

This is part 3 of a 3 part series.

Article Overview

How to make it even better

When we don’t know anything, or don’t apply any knowledge about our application to our testing strategy, we end up with far too many tests. By applying knowledge of the application to our test design we can greatly reduce the size of our test suite. Tests that incorporate knowledge of the application being tested are known as whitebox tests.

Map out the control dependencies

In our previous examples, we applied no knowledge of the interactions between controls (or of how a selection in one control affects the behavior of the others). If we map the controls and their possible relationships, the result looks like the following diagram.

[Figure: diagram – dependencies unknown]

Absent other knowledge, there is a potentially relevant connection between the selections in every pair of controls. Our testing so far has been designed around exactly this lack of knowledge, which the diagram makes visible.

We can likely rule out some of the dependencies, though possibly not all of them. Our approach should be conservative – remove only those dependencies that we know don’t exist. This knowledge comes from an understanding of the underlying application. Once we remove these links, the diagram looks like this:

[Figure: diagram – dependencies understood]

This clarified mapping allows us to reduce the size of our test suite dramatically, because we’ve identified the independence of many controls. In an ideal case, the result will be two or more completely disconnected graphs, and we can build a set of tests for our suite around each separate graph. As the diagram above shows, we do not have two completely independent graphs. We can take a testing approach as shown in the following diagram:

[Figure: subdivided diagram]

We’ve grouped all of the controls on the left in a blue box – these controls will be used with the N-wise generation tool to create a set of tests. The grouping of controls on the right will also be used to generate a set of tests.
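As a sketch of how per-group generation works, we can build a pairwise suite for each group and concatenate them. The control names and values below are hypothetical, and the greedy algorithm is a toy – real tools such as PICT or NIST’s ACTS scale far better – but it shows the saving from subdividing:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pairwise cover: repeatedly pick the full combination that
    covers the most not-yet-covered value pairs, until every pair of
    values (across every pair of controls) appears in some test."""
    names = list(params)
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        uncovered.update((i, va, j, vb)
                         for va, vb in product(params[a], params[b]))
    suite = []
    while uncovered:
        # Exhaustively score every full combination; fine for toy sizes.
        best = max(product(*(params[n] for n in names)),
                   key=lambda c: sum((i, c[i], j, c[j]) in uncovered
                                     for i, j in combinations(range(len(names)), 2)))
        suite.append(dict(zip(names, best)))
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(names)), 2)}
    return suite

# Hypothetical control groups from the subdivided diagram
left = {"font": ["serif", "sans"], "size": [8, 12, 16], "bold": [True, False]}
right = {"margin": [0, 1], "units": ["in", "cm", "px"]}

combined = pairwise_suite(left) + pairwise_suite(right)
exhaustive = (2 * 3 * 2) * (2 * 3)   # 72 tests if all five controls interacted
print(len(combined), "tests instead of", exhaustive)
```

Generating per group and summing the suites, rather than generating over all five controls at once, is what the blue-box grouping in the diagram buys us.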

In this example, subdividing the controls significantly reduces the number of tests required when the order of inputs matters.

Note, however, that when order doesn’t matter, overlapping controls (graphs that can’t be fully separated) can actually increase the number of tests required, because the overlapping controls must appear in both generated sets. When the graphs can be fully separated, subdividing reduces the amount of testing even when order is irrelevant.

The key to separating the graphs is to make sure that all controls only connect to other controls within their region (including the overlapping region).
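Checking whether the graphs separate amounts to finding connected components in the dependency graph: each component is one group of controls that must be tested together. A minimal sketch, with hypothetical control names and dependency edges:

```python
from collections import deque

def components(nodes, edges):
    """Connected components via breadth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in sorted(nodes):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical dependencies left after ruling out interactions we know
# don't exist: A-B-C and D-E form two fully separable groups.
deps = [("A", "B"), ("B", "C"), ("D", "E")]
groups = components({"A", "B", "C", "D", "E"}, deps)
print(groups)  # two groups: A/B/C and D/E
```

If a control ends up in two components’ edge lists, the components merge and the groups overlap – the case the paragraph above warns about.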

Eliminate equivalent values from the inputs

When we know how the code is implemented, or have insights into the requirements, we can further reduce the scope of testing by eliminating equivalent values.

Consider the following example requirements for an application:


Now consider two variables that we are evaluating in our testing – imagine that they are controls in a user interface, or values imported from an external system.

[Figure: all values]

Which we can collapse for testing purposes into:

[Figure: collapsed values]

This consolidation of equivalent values reduces the number of tests we need to run. For our simple pairwise test, we reduce the number from 18 to 12. When there are more controls involved, and when we are doing N-wise testing with N=3, the impact is much more significant.
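The original value tables aren’t reproduced here, but the 18-to-12 reduction is consistent with one control whose six raw values collapse into four equivalence classes, paired with a second control that has three values (for exactly two controls, pairwise coverage is the full cross product). A hypothetical sketch:

```python
from itertools import product

# Hypothetical values: suppose the requirements only distinguish four
# behaviors of a "quantity" control (boundaries at 1, 10, and 100), so
# six raw values collapse to four equivalence-class representatives.
raw_quantity = [1, 2, 10, 11, 100, 101]
quantity_classes = [1, 2, 11, 101]       # one representative per class
shipping = ["ground", "air", "freight"]  # hypothetical second control

before = len(list(product(raw_quantity, shipping)))      # 6 * 3 = 18
after = len(list(product(quantity_classes, shipping)))   # 4 * 3 = 12
print(before, "->", after)
```

The saving compounds: every collapsed value is removed from every pair (or triple) it would otherwise have appeared in.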


We can test very complex software without doing exhaustive testing.

Random sampling is a common technique, but it falls short of high-quality goals – achieving very good quality with random sampling requires very large numbers of tests.

Pairwise testing allows us to test very complex software with a small number of tests and reasonable code coverage (on the order of 90%). This also falls short of high-quality goals, but is very effective for less demanding quality targets.

N-wise testing with N=3 produces test suites capable of delivering high quality, but at the expense of larger suites. When the order of inputs into the software matters, N-wise approaches are limited in the number of variables they can support (fewer than 10), due to limitations of the test-generation tools available today.
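To see why N=3 suites grow so much faster than pairwise suites, it helps to count the interaction tuples that must be covered. This is illustrative arithmetic, not from the article; a covering array covers many tuples per test, so actual suites are far smaller than these counts, but they grow in proportion to them:

```python
from math import comb

def tuples_to_cover(k, v, n):
    """Number of distinct n-way value combinations across k controls
    with v values each: C(k, n) control groups times v**n value tuples."""
    return comb(k, n) * v ** n

# e.g. 10 controls with 4 values each
print(tuples_to_cover(10, 4, 2))  # 45 * 16  = 720 pairs to cover
print(tuples_to_cover(10, 4, 3))  # 120 * 64 = 7680 triples to cover
```

Going from N=2 to N=3 here multiplies the coverage obligation by more than 10x, which is why the resulting suites – and the demands on the generation tools – grow so sharply.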

We can apply knowledge of the underlying software and requirements to improve our testing strategy. None of the previous techniques require knowledge of the application, and thus rely on brute force to assure coverage. This approach results in conceptually redundant tests in the suite. By mapping out the grid of interdependency between inputs and subdividing the testing into multiple areas we reduce the number of tests in our suite. By removing redundant or equivalent values from the test suite we also reduce the number of tests required to achieve high quality.


