Category Archives: Foundation series

The foundation series articles provide an introduction to topics that are directly relevant to the other discussions at Tyner Blain. They provide background and context to people who want to read the other articles and appreciate the ideas in them. Topics generally cover business analysis, requirements, and product management.

Foundation Series: Functional Testing of Software

Functional testing class

Functional testing, also referred to as system testing of software, is the practice of testing the completed software to confirm that it meets the requirements defined for it. A functional test is typically a test of user interactions, but can also involve communication with external systems. We contrast functional testing with unit testing, and show how functional testing provides different benefits than unit testing.

This is a relatively long post for a Foundation Series post, so sit back with some coffee and relax. This primer will be worth it if it's a new topic for you. If you know this stuff already, check out the links to other articles that go into more depth on points we make.

An Application is a Series of Flows

We can think of an application from the perspective of a user, as a series of interactions, or flows through the user interface.

Application flow

People are not usually forced to follow a fixed set of instructions or a predefined sequence of actions in an application. They can interact with controls in any order, skip controls entirely, or otherwise do things that developers don't expect.

Unit Tests are Whitebox Tests

Unit testing, as we detailed in our telephone example, provides targeted test coverage of specific areas of the code inside the application. Unit tests are written by developers, to allow them to test that the implementation that they created is behaving as they intended. Unit tests don’t implicitly provide the start-to-finish coverage that functional tests usually provide. Unit tests are whitebox tests that assure that a specific behavior intended by the developer is happening. A weakness of using unit tests alone is that they will not identify when the developer misinterpreted the requirements.
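To make the contrast concrete, here is a minimal sketch of a developer's whitebox unit test. The `calculate_tax` helper and its rounding rule are invented for illustration, not taken from any real application:

```python
import unittest

# Hypothetical implementation under test -- a sales-tax helper the
# developer wrote (the function and its rounding rule are invented here).
def calculate_tax(subtotal, rate):
    """Return the tax owed on a subtotal, rounded to cents."""
    return round(subtotal * rate, 2)

class CalculateTaxTest(unittest.TestCase):
    # Whitebox tests: the developer checks the behavior she intended,
    # including the rounding edge case she knows is inside the code.
    def test_basic_rate(self):
        self.assertEqual(calculate_tax(100.00, 0.08), 8.00)

    def test_rounding(self):
        self.assertEqual(calculate_tax(10.05, 0.0825), 0.83)
```

Note the weakness described above: if the requirement actually called for rounding down, these tests still pass while the software fails its requirement. That is the gap functional tests close.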

unit testing

Functional Tests are Blackbox Tests

A functional test, however, is designed without insight into how the implementation works. It is a blackbox test. A functional test represents a set of user interactions with the application. The concept behind a functional test is to validate something about the state of the application after a series of events. According to Aberro Software, 80% of all functional tests are performed manually. That means that the most common functional test involves a tester making selections in a series of controls, and then evaluating a condition. This evaluation is called an assertion. The tester asserts that the software is in a specific state (an output is created, a control is filtered in a specific way, a control is enabled, a navigation option is disabled, etc).

full functional testing

Good functional requirements are written as concisely as possible. A requirement that supports a particular use case might state that the user specifies A, B, and C, and the application responds with D. A functional test designed to validate that requirement will almost always mimic this most common flow of events. The script that the tester follows will be to specify A, then B, then C. The tester will then evaluate the assertion that D is true. If D is false, then the test has failed.
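A sketch of what that scripted happy-path test looks like, using a toy stand-in application (all control names, values, and states here are hypothetical):

```python
class ToyApp:
    """Stand-in for the application under test (invented for illustration)."""
    def __init__(self):
        self.selections = {}

    def select(self, control, value):
        self.selections[control] = value

    def state(self, element):
        # In this toy, output D is enabled once A, B, and C are all specified.
        required = {"control_a", "control_b", "control_c"}
        return "enabled" if required <= set(self.selections) else "disabled"

def functional_test(app):
    """Scripted happy path: specify A, then B, then C, then assert D."""
    app.select("control_a", "value_1")
    app.select("control_b", "value_2")
    app.select("control_c", "value_3")
    assert app.state("output_d") == "enabled", "test failed: D is false"

functional_test(ToyApp())  # passes silently when the requirement holds
```

The test knows nothing about how `ToyApp` works internally; it only drives the inputs and asserts the resulting state, which is what makes it a blackbox test.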

A functional test may not cover the entire set of likely user interactions, but rather a subset of them.

Targeted functional testing

One problem with this approach is that it does not account for a user specifying (A, B, X, C) or (A, C, B). These variations in order of operations might cause the underlying code to execute differently, and might uncover a bug. For a tester to get complete coverage of the requirement (A + B + C => D), he would have to create multiple scripts. This is expensive, tedious, and often redundant. But a tester has no way to know if the multiple scripts are redundant, or required.

Combining Unit Tests and Functional Tests

When we combine both unit testing and functional testing approaches, we are implementing what is called graybox testing (greybox testing). This is also referred to as layered testing. Graybox testing provides two types of feedback into the software development process. The unit tests provide feedback to the developer that her implementation is working as designed. The functional tests provide feedback to the tester that the application is working as required.

Layered testing

Graybox testing is the ideal approach for any software project, and is a key component of any continuous integration strategy. Continuous integration is a process where the software is compiled and tested every day throughout the release cycle – instead of waiting until the end of the cycle to test. Read this plan for implementing continuous integration if you want more details.

Automating Functional Tests

Automating unit testing is both straightforward, and relatively inexpensive. Automating functional testing is more expensive to set up, and much more expensive to maintain. Each functional test represents a script of specific actions. A tester (with programming skills) can utilize software packages like WinRunner to create scripts of actions followed by assertions. This represents an upfront cost of programming a script to match the application, in parallel with the development of the application – and it requires a tester with specialized skills to program the script.

The maintenance cost of automating functional tests is magnified in the early development stages of any application, and throughout the life of any application developed with an agile process. Whenever an element of the user interface is changed, every script that interacts with that element can be broken (depending on the nature of the change). These broken scripts have to be manually updated to reflect these ongoing changes. In periods of heavy interface churn, the cost of maintaining the test suite can quickly become overwhelming.

In the real world, apparently 80% of teams find that this overwhelming cost of automated testing outweighs even the high cost of manual functional testing.

Improved Automation of Functional Tests

We can reduce the maintenance cost of keeping automated scripts current with the user interface by abstracting the script-coding from the script-definition. This is referred to as keyword and table scripting. A set of objects are coded by the tester and given keywords. Each object represents an element in the user interface. Script behavior (sequence of interaction) is defined in terms of these keywords. Now, when a UI element is changed, the keyword-object is updated and all of the scripts that reference it are repaired.
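A minimal sketch of the keyword/table idea (the keywords, locators, and stub driver are all illustrative): scripts reference keywords, and only the keyword table knows the real UI locators, so a UI change is repaired in one place:

```python
# Keyword table: one entry per UI element (keywords and locators invented).
ELEMENTS = {
    "username_field": "//input[@name='user']",
    "password_field": "//input[@name='pass']",
    "login_button":   "//button[@id='btn-login-v2']",
}

# The script is pure data -- rows of (action, keyword, argument).
LOGIN_SCRIPT = [
    ("type",  "username_field", "alice"),
    ("type",  "password_field", "s3cret"),
    ("click", "login_button",   None),
]

def run_script(script, driver):
    """Resolve keywords to locators at run time. When a UI element changes,
    editing its one ELEMENTS entry repairs every script that references it."""
    for action, keyword, argument in script:
        driver(action, ELEMENTS[keyword], argument)

# Stub driver that just records each resolved step.
log = []
run_script(LOGIN_SCRIPT, lambda action, locator, arg: log.append((action, locator)))
```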

This, however, does not address issues where one control is refactored into two controls, where controls are added or removed, or where the desired flow of interaction changes. There is still a very large (albeit smaller) maintenance burden. And the applications that use this approach (such as QTP) can cost tens of thousands of dollars – another reason many teams do functional testing manually.

Conclusion

Functional testing is important to validating requirements. It is an important element of assuring a level of software quality. And it is still expensive with the best of today’s proven solutions. Even with the high cost, it is much cheaper than the risk of delivering a solution with poor quality. Plan on having functional testing as a component of any process to achieve software product success.

– – –

Check out the index of the Foundation series posts which will be updated whenever new posts are added.

Foundation Series: Continuous Integration

Continuous Integration classroom

Continuous Integration

Continuous Integration is the software development and quality process where all team members merge their code and verify it frequently – at least daily. This verification process includes both an automated build and automated testing. The main benefits of continuous integration come from risk-reduction and cost-reduction.

Integration has to happen. Making it continuous reduces its cost. There are also efficiencies for developers who can write better code faster when they are writing it in the context of the latest (most up-to-date) version of the code.

Risk is reduced in two ways. First, continuous integration causes fewer bugs to be created, thereby reducing risk. Second, when bugs are created, they are identified at the earliest possible moment (same day as their creation). This maximizes the time available to resolve them. No surprises at the end of the release cycle.

Merging Code

When a single software developer is writing code, she writes her code, saving frequently, and archiving it. But she is the only person working on the code. Other than the developer, no one cares how often she checks it in, as long as she can deliver the software on the release date.

When multiple developers work together, they depend upon each other. On any given day, different developers are writing different pieces of software – usually objects with today’s languages. These objects talk to each other, depend upon each other, or at least co-exist with each other in the completed software. This stuff is all “under the hood” for users, but imperative to understand when trying to manage a development process or team.

Why Merging Matters

After the developers go off into their offices and create their objects independently, some or all of the team members have to stop what they are doing, and integrate all of those objects back into a common code-base (aka 'the software'). When a developer fixes a bug in one or more objects, those fixes need to be incorporated back into the common code-base. With multiple developers, there are multiple elements that all have to be rolled back together again into the code. Developers often refer to this as merging or promoting into the trunk, or tip, or main branch.

Each change has a set of predicted effects on the rest of the software. These changes can be tested by the developer before integrating her code into the trunk. Each change also has a set of unpredicted effects on the rest of the software. And combinations of changes can ‘create’ effects that did not occur with either change individually. ‘Unpredicted effects’ is fancy-talk for ‘bugs’. The more changes we integrate into the trunk at a time, the more bugs we create. And this is an accelerating effect – the complexity of integration increases faster than the number of changes being integrated.

Increased Complexity Drives Higher Costs

As we increase the frequency of integration, we decrease the quantity of changes per integration. This decreases both the cost per integration and the total cost of integration. It is much cheaper to integrate five changes 100 times than to integrate 100 changes five times.

4 objects, 10 sources

7 objects, 28 sources

Each object and each connection in these diagrams represents a potential source of error. With 4 objects, we have 10 sources. With 7 objects, we have 28 sources of error. This represents an accelerating increase in the cost of integration.
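Counting each object, plus each pairwise connection between objects, as a potential source of error gives a quick sketch of the growth:

```python
# Each of the n objects, and each of the n*(n-1)/2 pairwise connections,
# is a potential source of error.
def error_sources(n):
    return n + n * (n - 1) // 2

print(error_sources(4))   # 10
print(error_sources(7))   # 28
print(error_sources(10))  # 55 -- the count grows much faster than n itself
```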

accelerating increase

To minimize the costs, we need to minimize the number of objects being integrated. We do that by minimizing the time between integrations.

Overhead

The main resistance to frequent merging is the cost of testing. Testing involves building the merged code, running the tests, and evaluating the results. When the building and testing (the integrating) are automated, the cost of evaluating test results can be minimized. When test-evaluation is also automated, we can bypass test-result evaluation except when the system notifies us that something is broken.

Continuous integration is only feasible when the overhead of integrating (merging and verifying) is trivialized through automation.

Conclusion

Agile processes depend upon continuous integration, but any software development process is improved with continuous integration. This is one of the enablers of iterative development processes. It reduces the cost of quality (or allows us to achieve higher quality levels at the same cost). It also makes development more enjoyable because developers spend less time on fixing bugs and more time implementing solutions.

– – –

Check out the index of the Foundation series posts for other introductory articles.

Foundation Series: Basic PERT Estimate Tutorial

estimation classroom

PERT = Program Evaluation Review Technique

PERT is a technique for estimating how long it will take to complete tasks. We often estimate, or scope, the amount of time it will take us to complete a task or tasks. PERT allows us to provide not only an estimate, but a measure of how good the estimate is. Good estimates are a critical element in any software planning strategy. In this post, we will present an introduction to using PERT, explain how it works, and show how to interpret PERT estimates.
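The standard PERT arithmetic fits in a few lines: given optimistic (O), most likely (M), and pessimistic (P) estimates, the expected duration is (O + 4M + P) / 6, and the standard deviation (P − O) / 6 is the measure of how good the estimate is:

```python
# Standard PERT three-point estimate: optimistic O, most likely M,
# pessimistic P.  E = (O + 4M + P) / 6;  sigma = (P - O) / 6.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# A task we believe takes 4 days, best case 2, worst case 12:
expected, std_dev = pert_estimate(2, 4, 12)
print(f"{expected:.1f} days, +/- {std_dev:.1f}")  # 5.0 days, +/- 1.7
```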

Continue reading Foundation Series: Basic PERT Estimate Tutorial

Foundation Series: Feature Driven Development (FDD) Explained

FDD classroom

Feature driven development (FDD) is one of several agile methodologies for developing software iteratively. Iterative development is the opposite of waterfall development.

In a nutshell

FDD is a process that begins with high level planning to define the scope of the project, which then moves into incremental delivery. Each increment of delivery involves a design phase and an implementation phase. The scope of each increment is a single feature. Extreme programming (XP) is a much better known agile methodology. XP is often described as an emergent design process, in that no one knows what the finished product is going to be until the product is finished. FDD, by comparison, defines the overall scope of the project at the beginning, but does not define the details.

Explained by analogy

Consider the writing of a mystery novel as an analogy.

  • Waterfall process. First, our author determines the major characters, the mystery, a detailed plot outline, and outlines for all of the subplots. Then she sketches out all of the minor characters. Finally she writes the novel, following her outline along the way. Immediately after typing ‘the end’ she sends the novel off to the editor. The editor replies with major change suggestions – it seems that the topic might have been in vogue two years ago, but it doesn’t sell very well today. And half the chapters are low-value rambling that are sure to lose the readers. Our author starts over.
  • Extreme programming: XP. Our author decides to write a mystery novel, puts a blank sheet of paper in the typewriter and starts typing. As she writes, she realizes that she’d much rather write a romance, so she edits the chapters she’s finished and keeps moving forward. She sends her editor early drafts of every chapter as she finishes it. The editor suggests that she change the setting from the Alps to the Amazon, and she edits again. After finishing the book, she has a three-book historical fiction series set in the rain forest.
  • Feature driven development: FDD. Our author creates an outline for the story, gives names to the major characters and prepares to write chapter one. As she starts each chapter, she writes some details of the subplot, makes some notes about how the characters should develop, and begins writing. She sends her outline to the editor, as well as drafts of each chapter as she completes them. She splits her time between incorporating feedback on previous chapters and outlining/writing the current chapter. At the end of the book, she has a mystery with the same major characters that she expected – but they didn’t develop into exactly the people she expected, and she never would have predicted the sub-plots that created themselves as she wrote.

When we look at these approaches, we see that FDD tries to combine the best part of a waterfall process (good planning) with the best part of XP (continuous improvement through iteration).

More detail

There are five phases in an FDD process. The first three phases are planning phases and the last two phases are iterative phases (they are repeated for each iteration).

Planning phases:

  1. Develop an overall model. This is a representation of how the solution will work and what it will do. It is the high-level framework describing the big picture of how everything works together.
  2. Build feature list. This is the list of features needed to implement the high level view from phase 1.
  3. Plan. Create a rough plan of the entire project. Some proponents also talk about creating detailed plans per feature (as each feature is addressed).

Iterative phases (one feature per iteration)

  1. Design the feature. What Alan Cooper would call program design.
  2. Implement the feature. Writing code, testing, documentation.

Conclusion

There is little or no discussion about requirements in FDD. Starting with an overall model is great from a developer’s perspective. The challenge is in determining what to place in the model – what requirements are important to the users? How will users interact with the system? Good answers to these questions can make or break an overall model – and a faulty model will yield low-value software.

This approach to agile development can be very effective when augmented with the right requirements management process.

Learning more about FDD

– – –

Check out the index of the Foundation Series posts which will be updated whenever new posts are added.

Software testing series: Pairwise testing

testing equipment
Before we explain pairwise testing, let’s describe the problem it solves

Very large and complex systems can be very difficult and expensive to test. We inherit legacy systems with multiple man-years of development effort already in place. These systems are in the field and of unknown quality, and there are frequently huge gaps in their requirements documentation. On many projects, we’re called in precisely because there is a quality problem. Pairwise testing provides a way to test these large, existing systems.

We are faced with the challenge of quickly improving, or at least quickly demonstrating momentum and improvement in the quality of this existing software. We may not have the time to go re-gather the requirements, document them, and validate them through testing before our sponsor pulls the plug (or gets fired). We’re therefore faced with the need to approach the problem with blackbox (or black box) testing techniques.

For a complex system, the amount of testing required can be overwhelming. Imagine a product with 20 controls in the user interface, each of which has 5 possible values. We would have to test 5^20 different combinations (95,367,431,640,625) to cover every possible set of user inputs.
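The arithmetic behind those numbers, plus the much smaller set of control/value pairs that pairwise testing must cover, can be checked in a few lines:

```python
from math import comb

controls, values = 20, 5

# Exhaustive testing: every combination of control settings.
exhaustive = values ** controls
print(exhaustive)  # 95367431640625

# Pairwise testing only needs every pair of (control, value) settings to
# appear together in at least one test: C(20, 2) control pairs, 5 * 5
# value combinations for each pair.
pairs_to_cover = comb(controls, 2) * values * values
print(pairs_to_cover)  # 4750 -- and one test covers 190 of them at once
```

Since each single test exercises 190 pairs, a suite in the dozens of tests can plausibly cover all 4750 pairs, which is why tools land near 54 tests rather than 95 trillion.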

The power of pairwise

With pairwise testing, we can achieve on the order of 90% coverage of our code in this example with 54 tests! The exact amount of coverage will vary from application to application, but analysis consistently puts the value in the neighborhood of 90%. The following are some results from pairwise.org.

We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc. […] The pairwise tests gave over 90 percent block coverage.

Our initial trial of this was on a subset of Nortel’s internal e-mail system where we [were] able [to] cover 97% of branches with less than 100 valid and invalid testcases, as opposed to 27 trillion exhaustive testcases.

[…] a set of 29 pair-wise AETG tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

Got our attention!

How does pairwise testing work?

Pairwise testing builds upon an understanding of the way bugs manifest in software. Usually, a bug is caused not by a single variable, but by the unique combination of two variables. For example, imagine a control that calculates and displays shipping charges in an eCommerce website. The website also calculates taxes for shipped products (when there is a store in the same state as the recipient, sales taxes are charged; otherwise, they are not). Both controls were implemented and tested and work great. However, when shipping to a customer in a state that charges taxes, the shipping calculation is incorrect. It is the interplay of the two variables that causes the bug to manifest.

If we test every unique combination of every pair of variables in the application, we will uncover all of these bugs. Studies have shown that the overwhelming majority of bugs are caused by the interplay of two variables. We can increase the number of combinations to look at every three, four, or more variables as well – this is called N-wise testing. Pairwise testing is N-wise testing where N=2.

How do we determine the set of tests to run?

There are several commercial and free software packages that will calculate the required pairwise test suite for a given set of variables, and some that will calculate N-wise tests as well. Our favorite is a public domain (free) software package called jenny, written by Bob Jenkins. jenny will calculate N-wise test suites, and its default mode is to calculate pairwise tests. jenny is a command line tool, written in C, and is very easy to use. To calculate the pairwise tests for our example (20 controls, each with 5 possible inputs), we simply type the following:

jenny 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 > output.txt

And jenny generates results that look like the following:

1a 2d 3c 4d 5c 6b 7c 8c 9a 10c 11b 12e 13b 14d 15a 16c 17a 18d 19a 20e
1b 2e 3a 4a 5d 6c 7b 8e 9d 10a 11e 12d 13c 14c 15c 16e 17c 18a 19d 20d
1c 2b 3e 4b 5e 6a 7a 8d 9e 10d 11d 12a 13e 14e 15b 16b 17e 18e 19b 20c
1d 2a 3d 4c 5a 6d 7d 8b 9b 10e 11c 12b 13d 14b 15d 16d 17d 18b 19e 20a
1e 2c 3b 4e 5b 6e 7e 8a 9c 10b 11a 12c 13a 14a 15e 16a 17b 18c 19c 20b
1a 2a 3c 4e 5e 6a 7b 8c 9d 10b 11b 12b 13e 14a 15d 16d 17c 18c 19b 20d […]

Where the numbers represent each of the 20 controls, and the letters represent each of the five possible selections.
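A small sketch of how one could verify that a generated suite really does cover every pair. The 3-control, 2-value example is invented to keep the output readable; the same check scales to the 20-control case above:

```python
from itertools import combinations, product

def uncovered_pairs(num_controls, num_values, tests):
    """Return the pairs of (control, value) settings no test in the suite
    exercises together. Each test is a tuple of value indices, one per
    control."""
    needed = {
        ((c1, v1), (c2, v2))
        for c1, c2 in combinations(range(num_controls), 2)
        for v1, v2 in product(range(num_values), repeat=2)
    }
    for test in tests:
        for c1, c2 in combinations(range(num_controls), 2):
            needed.discard(((c1, test[c1]), (c2, test[c2])))
    return needed

# Tiny example: 3 controls with 2 values each. These four tests achieve
# full pairwise coverage (exhaustive testing would need 2**3 = 8 tests):
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(len(uncovered_pairs(3, 2, suite)))      # 0 -- every pair is covered
print(len(uncovered_pairs(3, 2, suite[:2])))  # 6 -- half the pairs missing
```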

What’s the catch?

There are two obvious catches. First, when we use a tool like jenny, we must run all of the tests that it identifies; we can’t pick and choose. Second, pairwise testing doesn’t find everything. What if our example bug before about taxes and shipping only manifested when the user is a first-time customer? Pairwise testing would not catch it. We would need to use N-wise testing with N >= 3. Our experience has been that N=3 is effective for almost all bugs.

There is also a sneaky catch – test generators like jenny assume that the order of variables is irrelevant. Sometimes we are testing dynamic user interfaces, where the order of value selection in controls is relevant. There is a solution to this, and we will update this post with a link to that solution when it is available.

– – –

Check out the index of software testing series posts for more testing articles.

Foundation Series: CMMI Levels Explained

CMU classroom

CMMI is the initialism for Capability Maturity Model Integration.

CMMI is a numeric scale used to “rate” the maturity of a software development process or team. Maturity can be thought of like enlightenment. An immature process is not much different from the old “infinite monkeys” yarn – maybe we get it right, but probably not. A fully matured or enlightened process not only does it right, but improves itself over time.

The Software Engineering Institute (SEI) at Carnegie Mellon (Go Tartans! BSME90) created the CMM model for software engineering in the late 80’s and early 90’s. In an effort to consolidate multiple CMM models for different process areas, the SEI team created the CMMI in 2002. In this post, we will understand what each level represents.

Technically, the name of the model is the Capability Maturity Model Integration for Software Engineering, or CMMI-SW, but in practice people just say CMMI. The 645-page document can be found on the CMU SEI site.

Continue reading Foundation Series: CMMI Levels Explained

Foundation Series: User Experience Disciplines

requirements classroom

What the heck is UX?

UX, pronounced you-ex, is the shorthand for user experience. It represents the science and art of tailoring the experience that users have with a product – in our case, software. UX is a relatively new term, rapidly overtaking HCI (human-computer interaction) and CHI (computer-human interaction) as the acronym du jour. In some circles it is known as human-factors engineering, applied to software design. There are several disciplines within this field; we’ll introduce each of them.

We talk about the different roles within this field in several posts throughout Tyner Blain. The following are introductory explanations for these roles.

Information Architecture (IA)

The study of information and its presentation to people, and of how people interact with information. Many software packages allow users to manage complex information. Information can be presented in ways that make it easier for people to absorb and understand.

As a very simple example, imagine a website that allows you to research the cost of living in different cities in the USA. There are thousands of cities in the country. IA helps with designing a user interface that allows users to get information for a specific city. An IA specialist would recognize that cities can be organized by state. In fact, cities in different states can have the same name, like Springfield, Missouri and Springfield, Illinois. But two cities within the same state won’t have the same name. This insight can be applied to present a design where the user selects a state first, which then filters a list of the cities within that state.
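That cascading design reduces to a simple mapping, sketched here with invented data. Because city names are unique within a state, choosing the state first makes "Springfield" unambiguous:

```python
# Invented sample data -- a real site would have all fifty states.
CITIES_BY_STATE = {
    "Illinois": ["Chicago", "Springfield"],
    "Missouri": ["Kansas City", "Springfield", "St. Louis"],
}

def cities_for(state):
    """The filtered city list shown after the user selects a state."""
    return CITIES_BY_STATE.get(state, [])

print(cities_for("Illinois"))  # ['Chicago', 'Springfield']
```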

A corporate intranet may really be the combination of several different standalone websites – a company news bulletin or blog, an interface to the HR system, a download center for installing corporate-approved software, an email directory for the company, etc. IA specialists will determine how to organize all of these functions so that employees can intuitively find what they need and get as much benefit out of the site as possible.

Usability

The study of what makes software easy to use or hard to use. A usability specialist will look at the tasks that a user needs to perform, and analyze the most intuitive or efficient ways to perform them. Think of the sequence of steps that you take when adding a graph of data in Microsoft Excel. There is a wizard that walks you through a series of questions in order to create the graph for you. A usability specialist determined the best sequence in which to ask and answer those questions.

Usability specialists will also make holistic assessments of how an application or suite of applications behaves. This helps users gain competence or mastery of software more quickly. All of Microsoft’s applications use the same approach for opening and saving files (same menus, same shortcut keys, same dialogs, etc). This is the result of usability analysis.

A usability specialist will also be the person who determines how to make software great for novice and experts alike. This is critical to having successful software – the experts are the people who will promote your software for you, but they won’t become experts unless they survive the novice-user break-in period.

Graphic (or Visual) Design

Some people erroneously think of visual designers as the people who make software sexy. They can certainly do that, but graphic design is as much about creating emotions for the users, consistency of presentation, and establishing elements of brand as it is about sexy. This is what makes a Macintosh look like a Macintosh (while usability specialists make it great to use).

A graphic designer can create a set of consistent icons that make an application feel professional, and make the user feel whatever the designer wants. Graphic designers can make the user interface feel different enough to create a notion of uniqueness and branding (association of the images with the product or company), while also keeping them consistent enough with “everybody else” that users know what to do. Another technique is to create an affordance visually. An affordance is an image or element that suggests an action. A dial says “turn me” while a slider says “slide me.”

This can be very subtle and very powerful.

boring scrollbar

Think about scrollbars for a second. Most scrollbars have a pretty boring look. There are tiny up and down arrows at the top and bottom – which create an affordance that says “click on me and the window will move up (or down).” That’s good design. There’s also a grey bar in the middle. In some user interfaces, the size of that bar is proportional to the amount of the content that is currently visible. This gives the user some insight into how much content is hidden – another good visual design. A user can also click and drag the grey bar up and down to move the contents of the window. There are no visible cues that this would work; a user would have to be shown that it does. Another example of “hidden” functionality is the ability to click in the light grey “background” of a scrollbar – it causes the contents of the window to page up or page down. Again, without training or an errant click, people would not know this.

cool scrollbar

If we make a tiny change to that scrollbar by adding a few lines in the center, we create a tactile effect – implying that the user can “grab” it with the mouse. This scrollbar screams “grab me”. Subtle, but powerful.

Interaction Design

Interaction designers are a different breed. They focus on the software at a higher level, using a goal-driven process to address the intent and objectives of the users.

– – –

Check out the index of the Foundation Series posts which will be updated whenever new posts are added.

Foundation Series: Unit Testing of Software

Requirements class students

What are unit tests?

monkey at keyboard

Testing software is more than just manually banging around (also called monkey testing) and trying to break different parts of the software application. Unit testing is testing a subset of the functionality of a piece of software. A unit test is different from a system test in that it provides information only about a particular subset of the software. In our previous Foundation series post on black box and white box testing, we used the inspections that come bundled with an oil change as examples of unit tests.

Unit tests don’t show us the whole picture.

A unit test only tells us about a specific piece of information. When working with a client whose company makes telephone switches, and whose internal software development team did not use unit tests, we discussed the following analogy:

Unit tests let us see very specific information, but not all of the information. Unit tests might show us the following:

bell

A bell that makes a nice sound when ringing.

dial

A dial that lets us enter numbers.

horn

A horn that lets us listen to information.

We learn a lot about the system from these “pictures” that the unit tests give us, but we don’t learn everything about the system.

phone

We knew (ahead of time) that we were inspecting a phone, and with our “unit tests” we now know that we can dial a phone number, listen to the person on the other end of the line, and hear when the phone is ringing. Since we know about phones, we realize that we aren’t “testing” everything. We don’t know if the phone can process sounds originating at our end. We don’t know if the phone will transmit signals back and forth to other phones. We don’t know if it is attached to the wall in a sturdy fashion.

Unit testing doesn’t seem like such a good idea – there’s so much we need to know that these unit tests don’t tell us. There are two approaches we can take. The first is to combine our unit tests with system tests, which inspect the entire system – also called end-to-end tests. The second is to create enough unit tests to inspect all of the important aspects. With enough unit tests, we can characterize the system (and know that it is a working phone that meets all of our requirements).
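The phone analogy can be sketched in code. The `Phone` class and its method names below are invented purely for illustration; each unit test inspects one part of the phone in isolation, giving us one narrow "picture" of the system:

```python
class Phone:
    """A hypothetical phone, invented only to illustrate the analogy."""

    def ring(self):
        return "ring-ring"            # the bell makes a sound

    def dial(self, number):
        return len(number) == 10      # the dial accepts a 10-digit number

    def play(self, signal):
        return signal.upper()         # the horn renders what it receives

# Each unit test tells us about one specific piece of the system.
def test_bell_makes_a_sound():
    assert Phone().ring() == "ring-ring"

def test_dial_accepts_ten_digits():
    assert Phone().dial("5125550100")

def test_horn_plays_what_it_receives():
    assert Phone().play("hello") == "HELLO"

test_bell_makes_a_sound()
test_dial_accepts_ten_digits()
test_horn_plays_what_it_receives()
```

All three tests pass, yet none of them tells us whether the phone transmits sound to another phone, or whether it is attached sturdily to the wall. Those are exactly the gaps in the "pictures" described above.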

old phone with unit tests

Software developers can identify which parts of their software need to be tested. In fact, this is a key principle of test-driven development (TDD) – identify the tests, then write the code. When the tests pass, the code is done.
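That rhythm can be sketched with a small, invented example (the `word_count` function is not from the article): the test is identified first, and just enough code is then written to make it pass.

```python
# Step 1: identify the test first. It defines what "done" means.
def test_word_count():
    assert word_count("") == 0
    assert word_count("hello world") == 2
    assert word_count("  spaced   out  ") == 2

# Step 2: write just enough code to make the test pass.
def word_count(text):
    return len(text.split())

test_word_count()  # when this passes, the code is done
```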

Why not use system tests?

The system test inspects (or at least exercises) everything in the software. It gives us a big picture view. Ultimately, our stakeholders care about one thing – does the software work? And for them, that means everything has to work. The intuitive way to test, then, is to have tests that test everything. System testing is also known as functional testing.
old phone

These comprehensive tests tell us everything we want to know. Why don’t we use them?

There is a downside to system testing: in the long run, it is more expensive than unit testing. Even so, the right way to approach continuous integration is to do both kinds of testing.

In our Software testing series post on blackbox and whitebox testing we discuss several tradeoffs associated with the different types of testing. For most organizations, the best answer is to do both kinds of testing – do some of each. This is known as greybox testing, or grey box testing.

System tests are more expensive, because they are more brittle and require more maintenance effort to keep the tests running. The more your software changes, the faster these costs add up. Furthermore, with Agile practices, where portions of the system are built and tested incrementally, with changes along the way, system tests can be debilitatingly expensive to maintain.

Because unit tests only inspect a subset of the software, they only incur maintenance costs when that subset is modified. Unit testing is done by the developers, who write tests to assure that sections of the software behave as designed. This is different from functional testing, which assures that the overall software meets the requirements.

There are more articles on software testing in our software testing series.
– – –

Check out the index of the Foundation series posts for other introductory articles.

Foundation Series: Black Box and White Box Software Testing

Blackbox tests and whitebox tests.

These terms get thrown about quite a bit. In a previous post, we referenced Marc Clifton’s advanced unit testing series. If you were already familiar with the domain, his article could immediately build on that background knowledge and extend it.

Software testing can be most simply described as “for a given set of inputs into a software application, evaluate a set of outputs.” Software testing is a cause-and-effect analysis.
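That cause-and-effect view can be illustrated with a black-box test: pairs of inputs and expected outputs, with no knowledge of the implementation inside. The sorting routine here is an invented stand-in, not an example from the article.

```python
def sort_numbers(values):
    # From the black-box tester's perspective this body is invisible;
    # only the input-to-output behavior is evaluated.
    return sorted(values)

# Cause-and-effect cases: (input, expected output).
cases = [
    ([3, 1, 2], [1, 2, 3]),
    ([],        []),
    ([5, 5, 1], [1, 5, 5]),
]
for given, expected in cases:
    assert sort_numbers(given) == expected
```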

Continue reading Foundation Series: Black Box and White Box Software Testing

Foundation Series: Structured Requirements

classroom

Karl Wiegers wrote the book on structured requirements – Software Requirements, 2nd Edition, Karl E. Wiegers.

If you are involved in managing requirements, you should own this book. Even if you don’t follow his approach to managing requirements, or don’t like how he deals with use cases, you should still read this book – at a minimum, you’ll know more about it than your pointy-haired boss who reads this blog, sees this post, and tells you that you must follow the Wiegers way.

He details his framework, tells you how to use it, and how to manage requirements in it. Karl also has a website, processimpact.com, chock full of resources.

Karl proposes that there are three distinct levels of requirements:

  1. Business requirements – Goals of the business like “increase profits”, “improve branding”, “become dominant in a market”
  2. User requirements – Goals or tasks of the users of software like “create purchase order”, “find a book my wife would like”
  3. Functional requirements – Functionality that the software must include like “calculate profit-maximizing price” or “generate Sarbanes-Oxley compliance report”

He also classifies these requirements as either functional or non-functional.

Functional requirements describe what the system must do.

  • Provide a history of transactions for auditing purposes
  • Enable users to listen to samples of the music on the CD

Non-functional requirements constrain how the system must do it.

  • Most relevant search results will be returned in under 5 seconds
  • System will be available 99% of the time between 9 AM and 7 PM Eastern time

Karl then presents these types of requirements with a structured classification. His structure shows different types of requirements driving other types of requirements. In the picture below, we would see that a business requirement (increase profits) drives a user requirement (define product prices) which drives a functional requirement (calculate profit-maximizing price).

Wiegers taxonomy of requirements

I believe a simplified version of this diagram (which is a simplified version of a diagram from page 9 of his book) makes it easier to introduce the concepts.

Simplified structural requirements taxonomy

In a presentation to a class at St. Edwards University last fall, I presented the following single slide.

Types of requirements slide

Summing it all up

Goals are achieved through use cases.

Use cases are enabled by functional requirements.

Functional requirements lead to design and implementation.

Non-functional requirements characterize how functional requirements must work.

Constraints restrict how functional requirements may be implemented.
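The chain summed up above can be sketched as a simple traceability structure. The example requirements are the ones used earlier in this post; the dictionary shape is just one illustrative way to model the "drives" relationship.

```python
# Wiegers' levels as a traceability chain:
# business requirement -> user requirement -> functional requirements.
traceability = {
    "increase profits": {                          # business requirement (goal)
        "define product prices": [                 # user requirement (use case)
            "calculate profit-maximizing price",   # functional requirement
        ],
    },
}

# Walking the chain answers "why does this functionality exist?"
for goal, use_cases in traceability.items():
    for use_case, functionals in use_cases.items():
        for req in functionals:
            print(f"{req!r} enables {use_case!r}, which achieves {goal!r}")
```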

[Update 2007/02/26]

I’ve refined my thinking about how structured requirements should be represented. In short, I feel that non-functional requirements are under-emphasized in the real world.  I proposed a modified view of structured requirements, designed to increase the level of attention given to non-functional requirements. I go into more detail in the article, Non-Functional Requirements Equal Rights Amendment. Here’s the diagram of the structure that I proposed in that article:

Better Structured Requirements Framework

– – –

Check out the index of the Foundation Series posts for other introductory articles.