Foundation Series: Black Box and White Box Software Testing

Black box tests and white box tests.

These terms get thrown about quite a bit. In a previous post, we referenced Marc Clifton’s advanced unit testing series. If you were already familiar with the domain, his article could immediately build on that background knowledge and extend it.

Software testing can be most simply described as “for a given set of inputs into a software application, evaluate a set of outputs.” Software testing is a cause-and-effect analysis.

The inputs can be user actions, data from external systems, or any combination of the two.

The outputs can be outputs on the computer screen, data output to a file (like a log or report), a communication with other computers (like web services responses), or any other change in state of the system.

As an example of simple inputs and outputs, consider a calculator program with the requirement: “Allow the user to calculate the result of dividing one number by another, non-zero number.” If a / b = c, the inputs are the dividend (a) and the divisor (b), and the output is the quotient (c).
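
As a minimal sketch in Python, here is what that requirement might look like as a function. The function name and the choice to raise an error on a zero divisor are our own assumptions for illustration, not part of the original requirement:

```python
# Hypothetical implementation of the calculator requirement.
# Inputs: the dividend (a) and the divisor (b).  Output: the quotient (c).
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

# Example: inputs 10 and 4 produce the output 2.5.
print(divide(10, 4))  # 2.5
```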

As an example of change in state as an output, consider a requirement: “After the user logs in, the system will make future decisions based upon that user’s predefined preferences.” For that requirement, an input would be “logged-in user id” and an output (or change in state) would be “system loads user’s preferences.”
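
To make the “change in state” idea concrete, here is a hedged sketch in Python. The Session class, the preferences table, and the user id are all made up for illustration:

```python
# Hypothetical user-preference store.
PREFERENCES = {"alice": {"theme": "dark", "results_per_page": 50}}

class Session:
    def __init__(self):
        self.preferences = None  # no user logged in yet

    def log_in(self, user_id: str) -> None:
        # Input: the logged-in user id.
        # Output (change in state): that user's preferences are loaded.
        self.preferences = PREFERENCES.get(user_id, {})

session = Session()
session.log_in("alice")
assert session.preferences == {"theme": "dark", "results_per_page": 50}
```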

Black box testing

Black box testing is a stimulus-response analysis of behavior. To run (or define) a black box test, we don’t need to know anything about how the software works. We provide it with a stimulus (user selects “advanced search” button) and inspect for a response (advanced search page input form is presented to the user).
In biology class, we performed black box tests on dead frogs. We hooked up electrodes to their legs and applied current. The legs kicked. We didn’t know anything about how the frogs worked, beyond being told that the muscles would contract in response to a small electrical current. And we didn’t need to know. We just tested the frogs.

With a calculator program, we don’t need to know the hoops the CPU jumps through to calculate the quotient. We don’t need to wonder whether the program is written in PHP, Python, or C++. All we need to do is inspect the output for a given set of inputs.
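
A black box test of the divide requirement does exactly that – it feeds in inputs and inspects outputs, never looking at how the code is implemented. The sketch below assumes a pytest-style test runner and a hypothetical calculator module exposing the divide function from earlier:

```python
import pytest  # assumption: tests are run with pytest

from calculator import divide  # hypothetical module exposing divide()

def test_divides_two_numbers():
    # Inputs: dividend 10, divisor 4.  Expected output: quotient 2.5.
    assert divide(10, 4) == 2.5

def test_rejects_zero_divisor():
    # The requirement says the divisor must be non-zero; we only check that
    # a zero divisor is reported as an error, not how the code detects it.
    with pytest.raises(ValueError):
        divide(10, 0)
```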

This highlights the primary benefit of black box testing: a system can be tested by someone with no knowledge of how it works.

This makes it easier to find people capable of testing our software – the pool of people who can keep track of what they put in and what comes out is much larger than the pool of people who understand what’s “under the hood.” It also saves our testers from having to learn how the system works – they can start testing immediately.

When a team is organized with a dedicated, testing-only staff, the tests they create are typically black box tests – in part because the team can be staffed more cost-effectively.

Black box tests are sometimes referred to as opaque tests or closed-box tests. They are also sometimes called behavioral tests, in that they test only the behavior of the system, not how (or how well) it is constructed.

White box testing

A white box test is one that requires insight into how the code is implemented. The test takes advantage of knowledge of the data structures or control flow to provide, as its output, specific information about how the code is working.

White box tests are also sometimes called clear-box tests or structural tests, because they provide insight into how the code is structured and how it is performing.

When you take your car for a state inspection, it is a black box test. The overall performance (braking distance, emissions levels, no sharp edges, etc.) is inspected. When you take your car for an oil change, you usually get a set of white box test results (air-filter cleanliness check, coolant mixture analysis). These pieces of detailed information (coolant at a 60% mix) don’t tell you if the car is “good” – but they give you insight into how it’s running.

Unit testing, or testing a subset of the functionality of a piece of software, can use black box or white box techniques, but is most commonly done with white box tests. A unit test provides a specific piece of information (like the coolant mix, or whether a connection to a database works, or the speed of a SQL query), without necessarily making a statement about the overall quality of the software or system.
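
As a hedged sketch, here are unit tests that each report one specific piece of information – whether a database connection can be opened, and whether a simple query is fast enough. Python’s built-in sqlite3 module stands in for whatever database the real system uses, and the 50 ms budget is an arbitrary illustration:

```python
import sqlite3
import time

def test_database_connection():
    # Specific piece of information: a connection can be opened and queried.
    conn = sqlite3.connect(":memory:")  # in-memory stand-in for the real database
    try:
        assert conn.execute("SELECT 1").fetchone() == (1,)
    finally:
        conn.close()

def test_query_speed():
    # Specific piece of information: a simple query completes within a budget.
    conn = sqlite3.connect(":memory:")
    start = time.perf_counter()
    conn.execute("SELECT 1").fetchone()
    elapsed = time.perf_counter() - start
    conn.close()
    assert elapsed < 0.05  # arbitrary 50 ms budget for illustration
```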

The primary benefit of white box testing is that you can use insight into how the software is constructed to test it efficiently. That efficiency comes from the ability to target specific areas of the code and to select the most relevant tests to run. The weakness of white box testing is that it requires knowledge of how the software is written in order to design the appropriate tests. Another weakness is that misinterpreting the requirements can result in white box tests of the wrong functionality.
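
For example, if we know (as an implementation detail) that an expensive lookup is memoized, a white box test can target that cache directly – something a black box test could never observe. The function below is a made-up illustration built on Python’s functools.lru_cache:

```python
import functools

# Hypothetical implementation detail: results of an expensive lookup are cached.
@functools.lru_cache(maxsize=128)
def lookup_tax_rate(region: str) -> float:
    # Imagine an expensive database or service call here.
    return 0.08

def test_repeated_lookups_hit_the_cache():
    lookup_tax_rate.cache_clear()
    lookup_tax_rate("TX")
    lookup_tax_rate("TX")
    info = lookup_tax_rate.cache_info()
    # White box assertion: we rely on knowing the lru_cache implementation detail.
    assert info.misses == 1 and info.hits == 1
```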

When a team is organized so that developers are responsible for testing their own code, they are more likely to incorporate white box tests, and unit tests in particular. These types of tests bring efficiencies that make the development process easier.

Grey box testing

When we combine black box and white box tests in the same test suite, we get what is called grey box (or greybox) testing. This approach gives us the benefits of both black box and white box testing in a single suite.
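
In practice, a grey box suite is simply a test suite that mixes the two: some tests exercise the system purely through its public interface, while others peek at implementation details. A sketch, reusing the hypothetical divide and lookup_tax_rate examples from above (the module names are invented for illustration):

```python
from calculator import divide          # hypothetical public API
from pricing import lookup_tax_rate    # hypothetical cached implementation

def test_divide_black_box():
    # Black box: inputs and outputs only.
    assert divide(9, 3) == 3

def test_tax_rate_cache_white_box():
    # White box: relies on knowledge of the internal cache.
    lookup_tax_rate.cache_clear()
    lookup_tax_rate("TX")
    lookup_tax_rate("TX")
    assert lookup_tax_rate.cache_info().hits == 1
```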

Test automation

Both black box and white box tests can be automated. Different tools and techniques are applied for the different types of tests, but both are feasible and common. Continuous integration is a development process that leverages test automation to reduce costs and improve quality.

There are more articles on software testing in our Software Testing Series.

What book should I read to learn more?

Software Testing, by Ron Patton.

Here’s a review from Randy Rice, “Software Testing Consultant & Trainer” (Oklahoma City, OK):

Software Testing is a book oriented toward people just entering or considering the testing field, although there are nuggets of information that even seasoned professionals will find helpful. Perhaps the greatest value of this book would be a resource for test team leaders to give to their new testers or test interns. To date, I haven’t seen a book that gives a better introduction to software testing with this amount of coverage. Ron Patton has written this book at a very understandable level and gives practical examples of every test type he discusses in the book. Plus, Patton uses examples that are accessible to most people, such as basic Windows utilities.

I like the simplicity and practicality of this book. There are no complex formulas or processes to confuse the reader that may be getting into testing for the first time. However, the importance of process is discussed. I also have to say a big THANK YOU to Ron Patton for drawing the distinction between QA and testing! Finally, the breadth of coverage in Software Testing is super. Patton covers not only the most important topics, such as basic functional testing, but also attribute testing, such as usability and compatibility. He also covers web-based testing and test automation – and as in all topics covered in the book, Patton knew when to stop. If you want to drill deeper on any of the topics in this book, there are other fine books that can take you there!

I love this book because it is practical, gives a good introduction to software testing, and has some things that even experienced testers will find of interest. This book is also a tool to communicate what testing and QA are all about. This is something that test organizations need as they make the message to management, developers and users. No test library should be without a copy of Software Testing by Ron Patton!

– – –

Check out the index of the Foundation Series posts for other introductory articles.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

11 thoughts on “Foundation Series: Black Box and White Box Software Testing”

  1. Another advantage of black-box testing is that is keeps your “eye on the ball”. What ultimately matters is that the product fulfills the requirements, which are by definition independent of how the product is implemented. Paying sufficient attention to quality black-box testing forces your team to think in terms of the user experience, whereas you run the risk of neglecting what really matters if you perform only white-box testing.

  2. Discuss the reason why Blackbox and whitebox testing is not widely used in today’s software engineering business. If you disagree, discuss the reason

  3. Kasama, thanks for the question. Here’s my personal perspective, maybe other readers will share theirs…

    My personal experience has been over the last 10 years, and exclusively in enterprise software. In that realm, after working with about 40 different teams for fortune 100 (US) companies, I have never seen a group that didn’t use manual blackbox testing.

    Different groups have had different approaches – either scripted “acceptance tests” or “current feature validation tests” or a form of UAT (user acceptance test).

    Roughly half of those teams have also had automated black-box tests – running predefined scripts through a solution to “smoke test” it at a minimum, and as “regression tests” more commonly.

    Every group I’ve worked with would argue that they use manual whitebox testing – e.g. the developer tests the functionality of code before checking it in. Fewer than 1/4 of the teams have used automated white-box unit tests.

    So, my personal experience is that
    25% use automated graybox testing (white + black)
    25% use automated blackbox testing
    50% use manual blackbox testing

    I have not worked with a team that did no testing at all. Maybe some other readers would like to share their anecdotal data?
