Category Archives: Test Automation

Automated testing is critical to software development. Efficient development relies upon automated tests to deliver quickly, prevent regression bugs, and mitigate risk. These articles look specifically at how automation itself can be improved, and at how test automation can be applied to improve the overall development process.

Foundation Series: Continuous Integration

Continuous Integration classroom

Continuous Integration

Continuous Integration is the software development and quality process where all team members merge their code and verify it frequently – at least daily. This verification process includes both an automated build and automated testing. The main benefits of continuous integration come from risk reduction and cost reduction.

Integration has to happen. Making it continuous reduces its cost. There are also efficiencies for developers, who can write better code faster when they are writing it in the context of the latest version of the code base.

Risk is reduced in two ways. First, continuous integration causes fewer bugs to be created, thereby reducing risk. Second, when bugs are created, they are identified at the earliest possible moment (same day as their creation). This maximizes the time available to resolve them. No surprises at the end of the release cycle.

Merging Code

When a single software developer is writing code, she writes her code, saves frequently, and checks it in to archive her work. But she is the only person working on the code. Other than the developer, no one cares how often she checks it in, as long as she can deliver the software on the release date.

When multiple developers work together, they depend upon each other. On any given day, different developers are writing different pieces of software – usually objects with today’s languages. These objects talk to each other, depend upon each other, or at least co-exist with each other in the completed software. This stuff is all “under the hood” for users, but imperative to understand when trying to manage a development process or team.

Why Merging Matters

After the developers go off into their offices and create their objects independently, some or all of the team members have to stop what they are doing and integrate all of those objects back into a common code-base (aka ‘the software’). When a developer fixes a bug in one or more objects, those fixes need to be incorporated back into the common code-base. With multiple developers, there are multiple elements that all have to be rolled back together again into the code. Developers often refer to this as merging or promoting into the trunk, or tip, or main branch.

Each change has a set of predicted effects on the rest of the software. These changes can be tested by the developer before integrating her code into the trunk. Each change also has a set of unpredicted effects on the rest of the software. And combinations of changes can ‘create’ effects that did not occur with either change individually. ‘Unpredicted effects’ is fancy-talk for ‘bugs’. The more changes we integrate into the trunk at a time, the more bugs we create. And this is an accelerating effect – the complexity of integration increases faster than the number of changes being integrated.

Increased Complexity Drives Higher Costs

As we increase the frequency of integration, we decrease the quantity of changes per integration. This decreases both the cost per integration and the total cost of integration. It is much cheaper to integrate five changes 100 times than to integrate 100 changes five times.

4 objects, 10 sources

10 objects, 30 sources

Each object and each connection in these diagrams represents a potential source of error. With 4 objects, we have 10 sources. With 10 objects, we have 30 sources of error. This represents an accelerating increase in the cost of integration.

accelerating increase

To minimize the costs, we need to minimize the number of objects being integrated. We do that by minimizing the time between integrations.

Overhead

The main resistance to frequent merging is the cost of verification. Verification involves building the merged code, running the tests, and evaluating the results. When the building and the test runs (the integrating) are automated, those costs shrink dramatically. When test-result evaluation is also automated, we can skip reviewing results entirely except when the system notifies us that something is broken.

Continuous integration is only feasible when the overhead of integrating (merging and verifying) is trivialized through automation.
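
As an illustration only, here is a minimal sketch in Python of the kind of driver that makes the overhead trivial. The build and test commands, and the notification addresses, are placeholders – substitute whatever your project actually uses.

    import subprocess, smtplib
    from email.message import EmailMessage

    # Placeholder commands – substitute your project's real build and test steps.
    STEPS = [
        ("build", ["make", "all"]),
        ("unit tests", ["make", "test"]),
    ]

    def notify(subject, body):
        msg = EmailMessage()
        msg["Subject"] = "Continuous integration: " + subject
        msg["From"] = "ci@example.com"          # placeholder address
        msg["To"] = "dev-team@example.com"      # placeholder address
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
            smtp.send_message(msg)

    def run_integration():
        for name, command in STEPS:
            result = subprocess.run(command, capture_output=True, text=True)
            if result.returncode != 0:
                notify(name + " failed", result.stdout + result.stderr)
                return False
        return True  # quiet success – nobody needs to look at anything

    if __name__ == "__main__":
        run_integration()

A scheduler or a post-commit hook runs something like this on every merge; the team only hears about it when something breaks.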

Conclusion

Agile processes depend upon continuous integration, but any software development process is improved with continuous integration. This is one of the enablers of iterative development processes. It reduces the cost of quality (or allows us to achieve higher quality levels at the same cost). It also makes development more enjoyable because developers spend less time on fixing bugs and more time implementing solutions.

– – –

Check out the index of the Foundation series posts for other introductory articles.

Market Segmentation or Senseless Mistake?

new coke

A grass roots campaign has been started by Peter Provost to get Microsoft to include unit testing support with all versions of Visual Studio 2005 (VS). Currently, Microsoft is only including it with the Visual Studio Team System (VSTS) versions of Visual Studio. This looks to be a great example of a killer feature in a product providing so much surprise and delight that people are demanding that it be universally available. It is also a great example of market segmentation by Microsoft. The irony is that there is an open source alternative that makes the opportunity cost very low, and yet people are still clamoring. Let’s see why.

Background

Visual Studio 2005 is a development environment for developing .NET applications. Microsoft offers several versions of the software – 8 in the 2005 packaging. Even for people familiar with the product, the market segmentation strategy can be pretty confusing. To oversimplify, each version offers more capability than the less-expensive version below it. Rob Caron provides the best explanation of the product definition strategy in his Hitchhiker’s guide to VSTS. He starts by explaining the Visual Studio 2003 packages, and then shows the evolution to the VS2005 approach, including the Team System versions.

Unit testing support is offered only in the four most expensive, most capable versions – the Team System versions. The petitioners argue that unit testing is critical to all developers, and should be included in every version of the product. Unit testing is a form of whitebox testing where developers create automated tests of their code.

Microsoft is implementing a classic market segmentation strategy with this approach.

Market Segments

Markets are not homogenous – we don’t sell products to clones. Everyone has a different set of criteria for purchasing software. They make different tradeoffs in terms of price versus performance, or cost versus capability. Imagine a market roughly divided into two populations – price sensitive people, and feature-driven people.

populations

The price-sensitive people like getting extra features, but will only pay marginally more for them. The feature-driven population is willing to pay a higher premium for added capabilities.

If we treat our potential customers as a homogenous market, we will make one of three mistakes:

  1. Price the product so that everyone buys it. If we set the price based on the price-sensitive population, we are leaving money on the table. The feature-driven people would gladly pay more for the features.
  2. Price the product so that only feature-driven people will buy it. We lose out on sales to the price-sensitive population, who won’t pay for the extra capabilities.
  3. Try to compromise. Nobody wins. We won’t get enough of the price-sensitive customers, and we’ll leave money on the table with the feature-driven customers.

The Good, The Better, and The Best

One way to serve all of the customers is with multiple products. As an example, imagine three versions of a washing machine. They all basically do the same thing: wash clothes. The manufacturers can put a stronger motor, fancier control panel, or better sound insulation on some versions of the same product, and sell them to different people for different prices.

Good Better Best

Most of the engineering costs apply to all three versions of the same product. The same is even more applicable to software. An easy way to do it would be to write the “best” software, and then disable some features to create the “better” and “good” versions. Many small software companies do this today, offering free and paid versions of their software. The free versions usually are missing features of the paid versions.

Microsoft has presumably identified several user profiles, and tailored a specific version of the software for each profile. Each version has different capabilities, and a different price.

Product Differentiation

Unit testing support, within the Visual Studio development environment, is absolutely a valuable capability. The growing response to the petition proves it. This is a great example of a surprise and delight feature (in Kano terms). In fact, some users find it to be so compelling that they want all users to get it “for free” as part of purchasing any version of Visual Studio.

This is one way that Microsoft is providing differentiation of the Team System versions of Visual Studio. There are other tools that may provide even more compelling reasons to get the Team System version.

Opportunity Cost

The odd thing is that NUnit is an open-source unit testing tool that can be plugged in to all versions of Visual Studio. This means that there is a free tool for doing exactly what the petition is asking Microsoft to do. The cost of using NUnit is the time spent setting it up – I would imagine a few hours to figure it out and create an install document for the rest of the team. This is a surprisingly low-cost alternative. And it may even be the better alternative, as NUnit has a very active community, and there are many areas to find free support and help. The opportunity-cost logic applies to this situation (but in reverse). There is a low-cost alternative, so why spend the money on the extra features?

The other capabilities available in Team System provide much better differentiation, as they don’t have low-cost alternatives like NUnit.

Conclusion

This is a great example of using market segmentation to sell more software for more profit. Feature-driven people who want unit testing will pay more for it. People who are more price sensitive will still buy the versions without unit testing baked in, and will hopefully know about NUnit and bolt it on.

Good job Microsoft marketing.

Learn to Fly with Software Process Automation

flying squirrel

We can reach the next step in our software process evolution by automating much of our process. Flying squirrels evolved a technique* to quickly move from one tree to another without all the tedious climbing and dangerous running. Software teams that automate their processes achieve similar benefits. Automation allows us to increase efficiency while improving quality. And we spend less time on tedious and mundane tasks.

Benefits of process automation

Tim Kitchens has a great article at developer.* where he highlights the benefits of process automation. Here are our thoughts on the benefits he lists:

  • Repeatability. The first step in debugging software is the isolation of variables. A repeatable build process eliminates many variables, and likely many hours of wasted effort.
  • Reliability. A repeatable process eliminates the possibility of us introducing errors into our software by messing up a step in the build.
  • Efficiency. An automated task is faster than a manual task.
  • Testing. Reductions in overhead of building and testing allow us to test more frequently.
  • Versioning. The scripts that drive our build process are essentially self-documenting process documents. And tracking versions of the scripts provides us with precise records of the process used for prior builds. This documentation, and the re-use of it, can reduce the cost of running our projects at any CMMI level.
  • Leverage. We get much more efficient use of our experts’ time – they spend less effort on turn-the-crank processes and more effort on writing great software.

What and when should we automate?

The short answer is automate everything, unless there’s not enough ROI. We have to examine each process that we use to make a final decision – some automation will not make sense due to uncommon situations. Also, if we’re nearing the end of an existing project, there is less time to enjoy the benefits of automation, so we may not be able to justify the costs. We may be under pressure to deliver ROI in a short payback period. We would suggest exploring the automation of the following activities:

Automate the build process

Most people underestimate the benefits of an automated build. The obvious benefit is time savings during the normal build cycle. Imagine the build takes an hour and, while it is scheduled monthly, usually ends up happening twice per month. Two hours per month doesn’t seem like a lot of savings. However, chasing down a bug caused by the build process is at best expensive, and at worst nightmarishly expensive (because we aren’t looking in the right place to find the problem). Factor an estimate of the probability of this happening into the expected-value calculation of the savings – for example, if chasing down a build-induced bug costs 40 hours and happens in one build out of twenty, it adds two expected hours per build.

The largest potential benefit of an automated build is changing the way we support our customers. Monthly builds aren’t scheduled because the business only wants updates once per month. They are scheduled at a monthly rate because that’s a balance someone has achieved between the cost-of-delivering and the cost-of-delaying a delivery. When we automate our delivery process, we dramatically reduce the cost of delivery, and can explore more frequent release schedules.

Automate unit testing

We significantly improve the efficiency of our team at delivering by shortening the feedback loop for developers. On a Utopian dev team, we would run our test suite as often as we compiled our code. Realistically, developers should run relevant automated whitebox tests every time they compile. They should run the suite of whitebox tests every time they promote code. And an automated process should run the full suite against the latest tip on a nightly basis (to catch oversights). It would be great if the check-in process initiated an automated test run and only allowed a promotion if all the tests passed.
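
A minimal sketch of that kind of gate, assuming a Python project tested with pytest – the test runner and the promotion command are stand-ins for whatever your team actually uses:

    import subprocess, sys

    # Run the whitebox suite; only allow promotion when everything passes.
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    if result.returncode != 0:
        print("Tests failed - promotion blocked.")
        sys.exit(1)

    # Placeholder promotion step - substitute your version control system's command.
    subprocess.run(["svn", "commit", "-m", "promote verified change"], check=True)

Wired into the team’s promote script or a pre-commit hook, this makes the nightly full run a safety net rather than the first line of defense.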

Automate system and functional testing

End to end and blackbox tests should be automated next. These are the big picture tests, and should be run nightly on a dedicated box against the latest code base. We’ve had the most success with teams that used a nightly testing process, which sent an email with test results to the entire team whenever results changed. We’ve had the pleasure of working with a team that included performance testing on the nightly runs, and reported statistically significant improvement or degradation of performance.

Documentation

Generate tactical documentation whenever possible. Use javadoc or the equivalent to automatically generate well-formatted and organized reference materials for future developers.
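
Most languages have an equivalent; as a sketch, Python docstrings can be turned into browsable reference pages with the standard pydoc module. The function and module names below are invented for illustration.

    def calculate_shipping(order, destination):
        """Return the shipping charge for an order.

        Args:
            order: the order being shipped (a list of line items).
            destination: the shipping address.
        Returns:
            The shipping charge as a decimal dollar amount.
        """
        ...

Running python -m pydoc -w ordermodule as part of the automated build writes an HTML reference page generated from those docstrings.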

Marginally relevant reporting

If our team is asked to report metrics like lines of code, cyclomatic complexity, code coverage, etc., we should automate this. This work is the definition of tedium, while presenting tenuous value to the manager who requested it. If we can’t convince someone that they don’t want this data, we should at least eliminate the pain of creating it.

Code coverage statistics can provide better-than-nothing insight into how much testing is being done, or how much functionality is exercised by the test suite. But code coverage metrics carry the danger of false precision. There’s no way to say that a project with 90% code coverage has higher quality than a project with 80% coverage.
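
If the metric is going to be reported anyway, a couple of lines in the nightly script remove the tedium. For example, with Python’s coverage.py, assuming the suite runs under pytest:

    import subprocess

    # Run the suite under coverage.py and capture the summary for the nightly report.
    subprocess.run(["coverage", "run", "-m", "pytest", "-q"])
    with open("coverage_summary.txt", "w") as report:
        subprocess.run(["coverage", "report"], stdout=report)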

Conclusion

Automation makes sense. We save time, increase quality, and ensure a more robust process. We also spend less time on turn-the-crank activities and more time creating differentiated software.

*Technically, they don’t fly – they fall. With style.

Software testing series: Pairwise testing

testing equipment
Before we explain pairwise testing, let’s describe the problem it solves.

Very large and complex systems can be very difficult and expensive to test. We inherit legacy systems with multiple man-years of development effort already in place. These systems are in the field and of unknown quality, and there are frequently huge gaps in their requirements documentation. On many projects, we’re called in precisely because there is a quality problem. Pairwise testing provides a way to test these large, existing systems.

We are faced with the challenge of quickly improving, or at least quickly demonstrating momentum and improvement in the quality of this existing software. We may not have the time to go re-gather the requirements, document them, and validate them through testing before our sponsor pulls the plug (or gets fired). We’re therefore faced with the need to approach the problem with blackbox (or black box) testing techniques.

For a complex system, the amount of testing required can be overwhelming. Imagine a product with 20 controls in the user interface, each of which has 5 possible values. We would have to test 5^20 different combinations (95,367,431,640,625) to cover every possible set of user inputs.

The power of pairwise

With pairwise testing, we can achieve on the order of 90% coverage of our code in this example with 54 tests! The exact amount of coverage will vary from application to application, but analysis consistently puts the value in the neighborhood of 90%. The following are some results from pairwise.org.

We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc. […] The pairwise tests gave over 90 percent block coverage.

Our initial trial of this was on a subset of Nortel’s internal e-mail system where we were able to cover 97% of branches with less than 100 valid and invalid testcases, as opposed to 27 trillion exhaustive testcases.

[…] a set of 29 pair-wise AETG tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

Got our attention!

How does pairwise testing work?

Pairwise testing builds upon an understanding of the way bugs manifest in software. Usually, a bug is caused not by a single variable, but by the unique combination of two variables. For example, imagine a control that calculates and displays shipping charges in an eCommerce website. The website also calculates taxes for shipped products (when there is a store in the same state as the recipient, sales taxes are charged; otherwise, they are not). Both controls were implemented and tested and work great. However, when shipping to a customer in a state that charges taxes, the shipping calculation is incorrect. It is the interplay of the two variables that causes the bug to manifest.

If we test every unique combination of every pair of variables in the application, we will uncover all of these bugs. Studies have shown that the overwhelming majority of bugs are caused by the interplay of two variables. We can increase the number of combinations to look at every three, four, or more variables as well – this is called N-wise testing. Pairwise testing is N-wise testing where N=2.
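
To make “every unique combination of every pair of variables” concrete, here is a small sketch in Python; the variable names and values are invented for illustration.

    from itertools import combinations, product

    # Hypothetical application variables and their possible values.
    variables = {
        "ship_to_state": ["TX", "CA", "NY"],
        "store_in_state": [True, False],
        "first_time_customer": [True, False],
    }

    # Every pair of variables, and every value combination for that pair.
    # A pairwise suite must cover each of these in at least one test case.
    required = set()
    for name_a, name_b in combinations(variables, 2):
        for value_a, value_b in product(variables[name_a], variables[name_b]):
            required.add(((name_a, value_a), (name_b, value_b)))

    print(len(required), "pair combinations to cover")  # 16 in this tiny example

Each test case covers one value combination for every pair of variables simultaneously, which is why a well-chosen handful of cases can cover all of the required pairs – and why the savings grow dramatically as the number of variables grows.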

How do we determine the set of tests to run?

There are several commercial and free software packages that will calculate the required pairwise test suite for a given set of variables, and some that will calculate N-wise tests as well. Our favorite is a public domain (free) software package called jenny, written by Bob Jenkins. jenny will calculate N-wise test suites, and its default mode is to calculate pairwise tests. jenny is a command line tool, written in C, and is very easy to use. To calculate the pairwise tests for our example (20 controls, each with 5 possible inputs), we simply type the following:

jenny 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 > output.txt

And jenny generates results that look like the following:

1a 2d 3c 4d 5c 6b 7c 8c 9a 10c 11b 12e 13b 14d 15a 16c 17a 18d 19a 20e
1b 2e 3a 4a 5d 6c 7b 8e 9d 10a 11e 12d 13c 14c 15c 16e 17c 18a 19d 20d
1c 2b 3e 4b 5e 6a 7a 8d 9e 10d 11d 12a 13e 14e 15b 16b 17e 18e 19b 20c
1d 2a 3d 4c 5a 6d 7d 8b 9b 10e 11c 12b 13d 14b 15d 16d 17d 18b 19e 20a
1e 2c 3b 4e 5b 6e 7e 8a 9c 10b 11a 12c 13a 14a 15e 16a 17b 18c 19c 20b
1a 2a 3c 4e 5e 6a 7b 8c 9d 10b 11b 12b 13e 14a 15d 16d 17c 18c 19b 20d […]

Where the numbers represent each of the 20 controls, and the letters represent each of the five possible selections.
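
As a sketch of how we might turn that output into concrete test cases – the control names and values below are invented; jenny itself only knows about positions and letters:

    # Map jenny's tokens ("12e") back to concrete control values.
    # controls[i] holds the five possible values for control i+1 (names invented).
    controls = [["value_" + letter for letter in "abcde"] for _ in range(20)]

    def parse_jenny_line(line):
        """Convert one line of jenny output into a control -> value mapping."""
        case = {}
        for token in line.split():            # e.g. "12e"
            index = int(token[:-1]) - 1       # jenny numbers controls from 1
            letter = token[-1]                # 'a' through 'e'
            case["control_" + str(index + 1)] = controls[index]["abcde".index(letter)]
        return case

    with open("output.txt") as results:
        test_cases = [parse_jenny_line(line) for line in results if line.strip()]
    print(len(test_cases), "pairwise test cases to run")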

What’s the catch?

There are two obvious catches. First, when we use a tool like jenny, we must run all of the tests that it identifies – we can’t pick and choose. Second, pairwise testing doesn’t find everything. What if our earlier example bug about taxes and shipping only manifested when the user is a first-time customer? Pairwise testing would not catch it. We would need to use N-wise testing with N >= 3. Our experience has been that N=3 is effective for almost all bugs.

There is also a sneaky catch – test generators like jenny assume that the order of variables is irrelevant. Sometimes we are testing dynamic user interfaces, where the order of value selection in controls is relevant. There is a solution to this, and we will update this post with a link to that solution when it is available.

– – –

Check out the index of software testing series posts for more testing articles.

Measuring the Cost of Quality: Software Testing Series

scale

Should we test our software? Should we test it more?

The answer to the first question is almost invariably yes. The answer to the second question is usually “I don’t know.”

We write a lot about the importance of testing. We have several other posts in our series on software testing. How do we know when we should do more automated testing?

Answering that question is an ROI analysis. Kent Beck has a great position –

If testing costs more than not testing, then don’t test.

At first glance, the statement sounds trite, but it really is the right answer. If we don’t increase our profits by adding more testing, we shouldn’t do it. Kent is suggesting that we only increase the costs and overhead of testing to the point that there are offsetting benefits.

We need to compare the costs and benefits on both sides of the equation. We’ll start with a baseline of the status quo (keeping our current level of testing), and identify the benefits and costs of additional testing, relative to our current levels.

We should do more automated testing when the benefits outweigh the costs

We’ll limit our analysis to increasing the amount of automated testing, and exclude manual testing from our analysis. We will use the assumption that more testing now will reduce the number of introduced bugs in the future. This assumption will only hold true when developers have the ability to run the automated tests as part of their personal development process. We’ve written before about the sources of bugs in the software development process, and in other posts in this series we show how automated testing can prevent future bugs (unlike manual testing, which can only identify current bugs).

We are also assuming that developers are running whitebox unit tests and the testing team is running blackbox tests. We don’t believe that has an impact on this analysis, but it may be skewing our perspective.

Benefits

  • Reduced costs of bugs in the field. Bugs in the field can force us into “emergency releases” to fix them. They can increase the costs for internal teams using our software, who must work around the bugs. They can delay sales. Bugs cause lost customers.
  • Reduced costs of catching future bugs. When developers can run a regression suite to validate that their code didn’t break anything before asking the testing team to test it, they can prevent introducing regression bugs. And thereby prevent the costs of finding, triaging, and managing those bugs.
  • Reduced costs of developing around existing bugs. Developers can debug new code faster when they can isolate its effects from other (buggy) code.
  • Reduced costs of testing around existing bugs. There is a saying we’ve heard when testers are trying to validate a release – “What’s the bug behind the bug?” A bug is discovered, the slack time in the schedule is spent fixing it, and the code is resubmitted to test to confirm that the bug was fixed. But another bug was hiding behind the first, untestable because the first bug obscured it. Addressing the second bug introduces unplanned testing costs. Preventing the first bug reduces the cost of testing for the latent one.

Costs

Most of these increased costs are easy to measure once they are identified – they are straightforward tasks that can be measured as labor costs.

  • Cost of time spent creating additional tests.
  • Cost of time spent waiting for test results.
  • Cost of time spent analyzing test results.
  • Cost of time spent fixing discovered bugs.
  • Cost of incremental testing infrastructure. If we are in a situation where we have to increase our level of assets dedicated to testing (new server, database license, testing software licenses, etc) in order to increase the amount of automated testing, then this cost should be captured.
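
A back-of-the-envelope sketch of the comparison in Python – every number below is invented, and the point is only the structure of the calculation:

    # Hypothetical monthly estimates in dollars - substitute your own project's numbers.
    benefits = {
        "fewer bugs reaching the field": 12000,
        "regression bugs caught before the testing team sees them": 6000,
        "faster debugging against known-good code": 3000,
        "less re-testing around latent bugs": 2000,
    }
    costs = {
        "creating additional tests": 8000,
        "waiting for and analyzing test results": 1500,
        "fixing the bugs the new tests discover": 4000,
        "incremental testing infrastructure": 1000,
    }

    net = sum(benefits.values()) - sum(costs.values())
    print("Net monthly benefit of additional automated testing:", net)
    # Positive: test more. Negative: don't - Kent Beck's rule in arithmetic form.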

Conclusion

This is a good framework for making the decision to increase automated testing. By focusing on the efficiencies of our testing approaches and tools, we can reduce the costs of automated testing. This ultimately allows us to do more automated testing – shifting the Pareto-optimal point so that we can increase our incremental benefits by reducing our incremental costs.

Software development process example

double figure eight knot

We’ve presented an example of the software development process across several posts over the last two weeks. In this post we tie them all together, showing the steps in process order.

  1. A discussion of the concept of tagging. Context and background on tagging as a technology, with pros and cons.
  2. The top five problems with test automation suites. We’ve talked repeatedly about how test automation suites are better than purely manual testing. Here we look at the “second order” problems. What are the main problems with unit test automation? These represent our market requirements for the creation of a software product designed to improve unit test automation suite usage.
  3. Converting from MRD requirements to PRD requirements. The ideation / triage process of determining which requirements should be addressed in software.
  4. Creating informal use cases to support our requirements. We define the use cases that support the high-level requirements.
  5. Writing functional requirements to support the use cases. With a user-centric approach to software development, it is imperative that we build out our functional requirements in the context of use cases – we keep our eye on the ball with this focus on the user.
  6. Design elements that support our functional requirements. Without going into esoteric details about how to design test automation software, we discuss the elements of the design that relate to the application of tagging to addressing some of the larger market opportunities.
  7. Iterating and prototyping. We show the iterative process from PRD to design to users and back again. [Update 21 Feb – added this step. Thanks again, Deepak]

Let us know if you’d like to see a discussion of any of these or other steps in more detail by leaving a comment on this post. Thanks in advance!

Software Testing Series: Organizing a Test Suite with Tags Part Three

organizing into bins

Organizing a test suite with tags (part 3)

This is the third in a three-part post about using tags as a means to organize an automated unit test suite.

Part 3 of this post can be read as a standalone article. If it were, it would be titled Design elements of an automated unit test framework using tags. If you’re only reading this post and not parts 1 and 2, pretend that this is the title.

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will consider the key design elements associated with using tags as a mechanism for organization.

Setting expectations

We designed a custom unit test automation system based on the use of tags to organize automated tests for a client. That system isn’t the subject of this post, but it did provide us with context and insight into the problem we’ve addressed. In this post we won’t be presenting a completed design – there are many good tools out there already for automating unit tests. We will be talking about a subset of the design decisions that are associated with the use of tags as a mechanism to organize the unit testing within the suite. In Marc Clifton’s advanced unit testing articles, he walks readers through the creation of a test automation tool for C#. The concepts presented here can be incorporated into a design relatively easily, but the details of doing that are a little too off-topic for most of our readers.

We are writing about designing software that tests other software. To keep our language consistent and easy to follow in this post, we will use two terms. Tool represents the test automation software that we are designing. Application represents software being tested with the tool, or more specifically, with tests maintained within the tool.

Design approach

After identifying the use cases and functional requirements for the tool, we began iterating on screen designs, business (requirement) object models and architectural (implementation) object models. We created an object oriented analysis (OOA) diagram to represent the concept concisely.

OOA diagram

An object oriented analysis diagram of the key relationships between scripts, inspections and tags.

A script is the embodiment of a user session in the application – it represents a set of actions that a user of the application would take. The user of the tool will create a script (as a separate file), and will create a reference to that script in the tool.

An inspection is a unit test of the application. The inspection evaluates a particular condition or makes a specific assertion about the state of the application (a properly filled out order is submitted when the user clicks “submit”). The code that executes the inspection is maintained outside of the tool. The user of the tool will create a reference to the inspection within the tool.

Inspections can be associated explicitly with any number of scripts (including none). An association between inspection 1 and script A is an instruction to the tool to run script A within the application, and evaluate inspection 1 against the script.

The processing of scripts and inspections is outside the scope of this document, but is covered in many other references, including Marc Clifton’s.

Any number of tags can be associated with each script. Tags could be used to represent different user actions (like deleting items from a shopping cart), different specific selections (user adds 1000 of an item to the shopping cart), different situations (shipping address does not match billing address), or any other relevant descriptor of the user session. A single script could have multiple tags.

Each inspection can be transitively associated with a set of scripts by explicitly associating it with one or more tags. By associating an inspection with a tag, we are instructing the tool to dynamically associate the inspection with all scripts that are associated with all of the identified tags. There are two benefits to this approach. First, it reduces the amount of time that a user of the tool must spend associating an inspection with the set of relevant existing scripts. Second, the indirect mapping ensures that an inspection mapped to a tag automatically becomes associated with any future scripts that are added – as long as those scripts carry the same tag or tags as the inspection. This reduces the cost of creating new mappings when scripts are added to the suite in the future.
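
A minimal sketch of the association logic in Python – the names are invented, and this is an illustration of the concept rather than the design of any particular tool:

    # Scripts and the tags that describe them (names invented for illustration).
    script_tags = {
        "add_1000_items_to_cart": {"shopping_cart", "large_quantity"},
        "delete_item_from_cart": {"shopping_cart"},
        "ship_to_mismatched_address": {"shipping"},
    }

    # Explicit inspection-to-script associations, and inspection-to-tag associations.
    explicit_scripts = {"order_submits_on_click": {"add_1000_items_to_cart"}}
    inspection_tags = {"cart_total_is_correct": {"shopping_cart"}}

    def scripts_for(inspection):
        """Every script an inspection runs against: explicit plus tag-derived."""
        result = set(explicit_scripts.get(inspection, set()))
        tags = inspection_tags.get(inspection, set())
        for script, its_tags in script_tags.items():
            if tags and tags <= its_tags:  # script carries all of the inspection's tags
                result.add(script)
        return result

    # A script added later with the "shopping_cart" tag is picked up automatically.
    script_tags["checkout_with_empty_cart"] = {"shopping_cart"}
    print(scripts_for("cart_total_is_correct"))

When a new script is tagged, every inspection mapped to those tags picks it up with no extra mapping work – which is where the labor savings come from.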

We expect this design approach to provide significant labor savings in maintaining a test suite. We built our business case for this project upon that assumption. We also expect that this design approach will result in better testing coverage of the application by users of the tool. We did not incorporate that expectation into our cost benefit analysis when calculating the ROI of this project.

Followup

We will follow-up some months from now when we can evaluate data and draw conclusions from use of a tool built along similar lines for a client. Until then, we have confidence that it will work very well, but no tangible data.

Summary

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we considered the key design elements associated with using tags as a mechanism for organization.

– – –

Check out the index of software testing series posts for more articles.

Sample Use Case Examples

glasses

Sample informal use case examples

We talked about informal use cases a while ago in our use case series. Over a series of posts, we are demonstrating the process of defining a software product. The next step, and subject of this post, is the creation of informal use cases to support the defined goals for the software.

In part two of our post on using tags to organize automated tests, we identified several market requirements, and selected the ones that we would incorporate as software requirements.

These then become our goals in the MRD to PRD requirements conversion – and in this post, we will show how we can create informal use cases to support those goals (aka high-level requirements).

Where do use cases belong?

In our introduction to structured requirements, we showed a basic model for describing requirements. Once goals are identified, we articulate the use cases that enable those goals.

Structured requirements diagram

Our product requirements (goals) were defined to be

  1. Minimize time spent identifying broken and obsolete scripts.
  2. Minimize time spent removing obsolete scripts from the suite.
  3. Minimize the time spent managing the mapping of inspections to scripts.

The first step in creating informal use cases

The easiest way to start writing the use cases is to write the names of the use cases. Just write out the names (on paper, whiteboard, OneNote – whatever works for you), using semi-descriptive prose. For example, for the third goal, we could identify the following two use cases.

  • Developer adds a new script and maps to existing inspections
  • Developer adds a new inspection and maps to existing scripts

Notice that in the names we have an actor and an action – this is a good consistent naming convention that makes it easy to understand exactly what we’re talking about. Now we can write the informal use case details for each of these.

Developer adds a new script and maps to existing inspections

The developer’s goal is to add her recently created script to the suite and associate it with a set of specific inspections, producing a set of test outputs. She may already have a set of inspections that she knows she wants to associate with the script. She is also opportunistic about leveraging other existing scripts to create new tests when they are relevant. She creates a reference to the script in the test framework. She then identifies the set of inspections that she wants to associate with the script and creates the associations. She reviews the set of associations, and either needs to change them or is satisfied with the mappings. If she needs to add or remove some of her newly created mappings, she will identify the inspections to be modified and update those associations as well.

Developer adds a new inspection and maps to existing scripts

The developer’s goal is to add his recently created inspection to the suite and associate it with a set of existing scripts, producing a set of test outputs. He already knows, generally, what types of scripts he wants to map his inspection to. He does not have a specific set of scripts in mind for the mapping. The developer creates a reference to the inspection in the test framework. Then he reviews the existing scripts, identifies the scripts to be mapped to his new inspection, and creates the associations. He reviews the set of associations and decides whether he has too many, too few, or the right number. If he needs to modify the associations he does so and re-reviews, repeating until satisfied.

Two things to note about the process of creating the informal use cases above:

  1. This is very fast – these took about 10 minutes each to create. It is a very small amount of time that has to be invested prior to validating the use cases with the users. During and after validation, these use cases can be modified, eliminated or replaced with other use cases. Having something concrete with a minimum investment of time both helps drive good decisions and saves costs.
  2. These use cases are the result of iteration. We did not just type for ten minutes and move on. While writing the second use case, we noticed that we were not handling the possibility that the developer would initially create too many associations and need to go back and remove some of them. We updated both use cases to reflect this possibility.

More Example Use Cases

We occasionally add other example use cases, usually focused on a particular element of use case writing. Here are some that we’ve added since this article was originally published:

Summary

Our next step is to validate these use cases with the actors identified in the use cases (developers). Once that is complete, we will define the functional specs for the software that enable these use cases.

Software Testing Series: Organizing a Test Suite with Tags Part Two

organized typesetting letters

Organizing a test suite with tags (part 2)

This is the second in a three-part post about using tags as a means to organize an automated test suite.

Part 2 of this post can be read as a standalone article. If it were, it would be titled Top five problems with test automation suites. If you’re only reading this post and not parts 1 and 3, pretend that this is the title.

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we will define the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

What are the problem areas inherent in managing automated tests?

We start with identification of the problems or opportunities, before defining what the requirements will be. This is the same process we discussed in From MRD to PRD, applied to the test-automation space. The following are the top five problem areas we can identify for test automation suites.

  1. Maintaining the suite becomes too expensive. Once we have a suite in place, we have to maintain it. As the size of the suite grows, the amount of maintenance of existing tests grows. It grows in proportion to the number of tests in the suite and the rate of change of the underlying software being tested. As the amount of maintenance grows, so does its expense.
  2. Developers will never start using the suite. Change is bad. Well, for many people it is. Asking someone with a full time, salaried job to take on additional responsibilities has to be done correctly. There is absolutely a risk that people won’t start using the suite. Since this project is focusing on iterative development of an already deployed tool, already in use, this problem really doesn’t apply.
  3. Developers will stop using the suite. Developers avoid tedium. They’re smart. They want to avoid unnecessary work, menial work, and irrelevant work. If the developers perceive the test suite in any of these ways, we’re doomed – they will stop using it.
  4. Not testing the right stuff. A test suite that doesn’t test the right areas of the software is worse than not having one at all – because it gives you a false sense of confidence.
  5. Test suite becomes less effective over time. An initially effective suite can grow less effective over time as the underlying software changes. Individual tests become irrelevant as they become impossible to reproduce within the application – perhaps the user interface has changed. If test design was linked to the heaviest usage patterns, and those patterns change, then coverage of the newly heavily-used parts of the application will be reduced – and the effectiveness of the suite will be reduced.

Which problems should we address with software?

With limited resources, we need to make sure that we focus our software efforts on those problems where software can have the most impact. We’ll take each of the five problems in turn:

  1. Maintaining the suite becomes too expensive. There are three approaches to solving this problem – reduce the required maintenance, make the required maintenance more efficient, and reduce the cost of the labor that maintains the solution. Labor cost reductions may very well be the most effective general way to solve this problem, but given the real world project constraints for the project behind this post, we aren’t exploring that option. This is a candidate for the software solution.
  2. Developers will never start using the suite. Make them want to use it, or make them use it. We believe you want to make them want to use it – both by evangelizing the benefits and by quickly crossing the suck threshold so that users get positive feedback. For this project, we have taken that approach, although it’s true that there is also a mandate from the dev team’s managers that we must make sure they use it. With process and education approaches that have proven effective, this is not a target of the current software solution.
  3. Developers will stop using the suite. The looming mandate will ensure that developers won’t go AWOL on the suite. But if they can present a compelling reason to their managers, there is a risk that they will decide to stop using it. This is a candidate for the software solution.
  4. Not testing the right stuff. Test suite planning is a science unto itself. We will keep in mind “ways to make test suite planning easier” as a candidate for the software solution, but we aren’t otherwise targeting this for the current software solution.
  5. Test suite becomes less effective over time. Tests can grow irrelevant over time when the software they test is constantly changing (as in this project). This problem has been addressed to a large extent by using whitebox unit tests in the test suite. We are not targeting this as part of the current software solution.

Reminder

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

– – –

Check out the index of software testing series posts for more articles.

Software Testing Series: Organizing a Test Suite with Tags Part One

organized typeset letters

Organizing a test suite with tags

Tagging is a method of organizing information that is pushing its way into the mainstream through the success of sites like Flickr and Del.icio.us, and blogging software like WordPress. We can apply this idea to managing our automated test suites. An automated test suite is a critical component of any continuous integration process.

First steps first…

This post is a follow-up to our previous case study on incorporating unit testing into an existing team’s development environment. The case study is based on a real solution that has already started reaping rewards for our client, and is gaining momentum. We’re now looking at making it easier for the development team to maintain this test suite, and proposing some extensions – including a form of tagging.

  • In part one of this post we develop an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we will define the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

Understanding tagging

Tagging allows users to define their own categories for describing the items that they care about. The technical term for a free-form approach to labeling items is folksonomy as opposed to taxonomy, which is a classical categorization approach.

Wikipedia presents the following definitions for folksonomy and taxonomy:

Folksonomy, a portmanteau word combining “folk” and “taxonomy,” refers to the collaborative but unsophisticated way in which information is being categorized on the web. Instead of using a centralized form of classification, users are encouraged to assign freely chosen keywords (called tags) to pieces of information or data, a process known as tagging. Examples of web services that use tagging include those designed to allow users to publish and share photographs, personal libraries, bookmarks, social software generally, and most blog software, which permits authors to assign tags to each entry.

Taxonomy (from Greek verb tassein = “to classify” and nomos = law, science, cf “economy”) may refer to:

  • the science of classifying living things (see alpha taxonomy)
  • a classification

Initially taxonomy was only the science of classifying living organisms, but later the word was applied in a wider sense, and may also refer to either a classification of things, or the principles underlying the classification. Almost anything, animate objects, inanimate objects, places, and events, may be classified according to some taxonomic scheme.

There is debate about the value of tags

Several people have voiced concerns that tagging is simply a bad idea, with some compelling arguments. The Net Takeaway has a post and links to previous posts.

Look, if I am looking for something specific, then I type those terms in. Say I use a search engine. If I am looking for a phrase, I use quotes and type in all the words (up to 10 for most engines) and I get hits with that phrase.

But usually, I want stuff “like” or “similar” to my words. […]

But that’s not how tagging systems work. Instead, you have to know the terms up front to find anything.[…] Note that every popular “tagging” system, to date, has been for consumer fun stuff (flickr, etc.) and not for real knowledge management.

There are some good rebuttals in the comment thread as well, providing insight into what is intended by tagging, suggestions on how to use it, and alternative comparisons with search. If you only read one, read comment #3. Then go back and read the rest of them.

Benjamin Booth writes, in The Present Failure of Tagging, about the challenges. Benjamin’s approach is “I like tagging, how do we make it work the way we want?”

The general problem can be seen as the task of 1) externalizing knowledge-retrieval ‘landmarks’ when encountering information you want to store (in some context) and then, 2) being able to quickly find these landmarks when trying to recall the information later on, potentially in a completely different context from the one in which you created the landmark in the first place.

Near the end of his post he begins:

We need refactoring for tagging.[…]

Benjamin makes reference to the concept mapping tool from IHMC that we talked about previously.

Rashmi Sinha has an outstanding article where she applies the science of cognitive psychology to the issues with tagging usability.

Cognitively, we are equipped to handle making category decisions. So, why do we find this so difficult, especially in the digital realm – to put our email into folders, categorize our bookmarks, sort our documents. Here are some factors that lead to what I call the “post-activation analysis paralysis”.

Our conclusions about tagging

After reading the linked posts and their discussion threads, we are pretty well versed about the pros and cons of tagging.

Pros

  • Tagging eliminates the “How do I organize this?” analysis paralysis that happens when trying to start organizing
  • Tagging allows for a dynamic classification system that grows over time. If we make a bad decision early, we can grow out of it.
  • The “free association” approach to tagging that exists in the digital world today is consistent with the way our brains function when storing information.
  • There are a bunch of smart people working on tagging right now, so there is plenty of opportunity to leverage their good ideas.

Cons

  • Tagging makes retrieval of information difficult. If we don’t know how we previously tagged something, it can be hard to find it later.
  • The existing (digital) approaches to tagging don’t provide an analog for the way our brains function when continuously updating and refining our “free associations” as we learn.
  • People rooted in traditional taxonomy-based classification systems struggle with the concept of tagging. This probably characterizes 90% of people, so gaining mindshare outside of the technorati will be difficult, and user adoption could be a challenge.

Reminder:

  • In part one of this post we develop an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we will define the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

– – –

Check out the index of software testing series posts for more articles.