Category Archives: Lists

Many articles at Tyner Blain are lists of tips, techniques, or good ideas related to software development and requirements management.

The 8 Goals of Use Cases

Climber with goal

Why do we write use cases?

We write use cases for the same reasons that people use our software – to achieve goals. In our case, we want to assure that we are creating the right software. By looking at this high level goal in more detail, we can make decisions that drive the best possible use case creation. Let’s apply our product management skills to writing better use cases by writing an MRD for use cases.

This article can be used as a guide to develop a process for defining, documenting and gathering use cases. It can be used to define a template for use cases, and it can be used to define specifications for a use case management system. We will start with a market analysis and a vision statement, and then create our market requirements.

Market Analysis

Bad requirements fall into the following four categories:

  • Requirements that are overlooked cause us to fail to meet expectations and fail to deliver value.
  • Requirements that are incorrect cause us to incorrectly address problems and fail to deliver value.
  • Requirements that are poorly communicated cause us to implement incorrectly, failing to address problems and deliver value.
  • Requirements that are low-value cause us to spend time and money on problems that don’t maximize value.

Through the course of any long term project, requirements will change. This happens more when we use iteration and prototyping to accelerate stakeholder feedback cycles. But that’s a good thing, because the changes result in better requirements. Agile development methodologies like feature driven development supercharge this phenomenon.

Vision Statement

We will improve our ability to write and manage use cases so that we may maximize their impact on

  • writing and maintaining great requirements,
  • improving our ability to deliver the right functionality,
  • and ultimately achieving software product success.

Market Requirements

With an understanding of the market problems and a guiding vision, we will document the market requirements for writing better use cases. The market requirements have an explicit scope – they specify which and how much of the market problems we intend to address.

When we use the phrase ‘our use cases’, we are really saying ‘our use cases and our approach to managing the use cases.’ We’re using shorthand to improve the readability.

Prioritization of the requirements is denoted with (H), (M), or (L) prefixed to each requirement, representing high, medium, and low priority, respectively.

Requirements that are overlooked

Requirements that are incorrect

Requirements that are poorly communicated

Requirements that are low-value

Conclusion

These goals define why we write use cases as part of software development. We do it to improve our ability to write the right software to solve our customer’s problems. We also write use cases to help us manage requirements changes and set delivery expectations with our stakeholders.

Writing Good Requirements – The Big Ten Rules

logo

Pragmatic Marketing has a training seminar called Requirements That Work. In support of that, they provide a list of 8 characteristics of good requirements. We change one and add two more to round it out to The Big Ten Rules – and two further rules (Correct and Stylish) have since been added in later updates. Combine this with Michael’s ten tips for writing MRDs, and we’ve got a good handle on how to create a great MRD.

Pragmatic’s List (1-8) + Four More

  1. Valuable (replacing Necessary)
  2. Concise
  3. Design Free
  4. Attainable
  5. Complete
  6. Consistent
  7. Unambiguous
  8. Verifiable
  9. Atomic
  10. Passionate
  11. Correct
  12. Stylish

Looking at each rule of writing good requirements…

1. Valuable

Updated for 2009

Writing valuable requirements is important.  It doesn’t matter how well your teams execute if they are off building the wrong products / capabilities / features.  The right products and capabilities are the ones that have relevant value.

  • Valuable requirements solve problems in your market.
  • Valuable requirements support your business strategy.
  • Valuable requirements solve problems for your users.
  • Valuable requirements meet your buyers’ criteria.
  • Valuable requirements don’t over-solve the problems.

Valuable Requirements

Pragmatic uses necessary as a criterion of good requirements. We believe that valuable requirements are good requirements. When prioritizing requirements, we should do the must-have requirements first. But other valuable requirements are critically important – even if they aren’t mandatory. Prioritization and release scheduling should stress necessary requirements first, then the most valuable requirements.

Requirements that can differentiate our product from the competition are by definition not necessary – or the competition would have done it already.

[Update: Our detailed article on Writing Valuable Requirements ]

2. Concise

Updated for 2009

Concise requirements give your team a useful, easy-to-read, and easy-to-change understanding of what must be done. Great requirements exist to do three things:

  1. Identify the problems that need to be solved.
  2. Explain why those problems are worth solving.
  3. Define when those problems are solved.

Concise Requirements

Easy to read and understand. If only it were that easy. For whom is it easy to read? A market requirements document (MRD) is written for several different people on the team. It provides a vision of what problems our product solves. It provides clarification to the implementation team. It also sets expectations with stakeholders. Different people on the team have different domains of expertise – we have to write requirements that are easily understood by all of them.

[Update: Our detailed article on Writing Concise Requirements]

3. Design Free

Updated for 2009

Design-Free requirements are important for two reasons, and hard for two other reasons.

Design-free requirements are hard because you “know what you want” when you should be documenting “why you want it.” They are also hard when you don’t trust your development team to “do the right thing” – even though designing the solution is not your job.

Design-Free Requirements

Generally, a requirement should not specify any of the implementation choices. From a product manager’s perspective, the requirement is the ‘what’ and the spec is the ‘how’. To a system designer, architect, or lead developer, the requirement serves as a ‘why’.

[Update: Our detailed article on Writing Design-Free Requirements]

4. Attainable

Updated for 2009

Unless you live in a world filled with unicorns and rainbows, writing realistic requirements is critical. When you set unattainable goals, the best result you can hope for is a frustrated engineering team. Write requirements that are attainable, and your team will surprise you with what they can achieve.

Attainable Requirements

The requirement must be realistically achievable. Barbara and Steve make great points about understanding the cost of implementing something as expressed in the requirements. As we pointed out in our thoughts about good requirements management, there is an optimal tradeoff between costs and benefits for any given company or project. We can approach this formally using the techniques identified for ‘more is better’ requirements within a Kano framework. In short, the investment’s ROI must clear the hurdle rate, which reflects the opportunity cost of the capital.

Looking at cost-benefit tradeoffs also supports the argument that valuable should replace necessary.

[Update: Our detailed article on Writing Attainable Requirements]

5. Complete

Updated for 2010

You give your requirements to the engineering team, and they look complete. The team builds your product, you launch it and the market soundly rejects it. Why? Because your requirements weren’t complete – they didn’t actually solve the problem that needed to be solved.

Complete Requirements

Simply put, if the requirement is implemented as written, the market need is completely addressed – no additional requirements are needed. When writing a specification, we may use decomposition to break individual requirements into more manageable, less abstract criteria.

[Update: Our detailed article on Writing Complete Requirements]

6. Consistent

Updated for 2010

Consistency in writing requirements is important on two levels – strategic and tactical. Tactically, you need to write your requirements with grammatical consistency, so that potentially ambiguous statements will be interpreted similarly. You also need to write requirements that are logically consistent, so that you avoid “impossible” requirements and gaps of unspecified meaning. Strategically, your requirements need to reflect a focus on markets and problems that are consistent with your business objectives and the vision your company is manifesting.

Consistent Requirements

Pragmatic Marketing highlights that the requirement must be logically consistent with the other requirements in the document – no overlaps, no contradictions, no duplications. This is certainly the most important point of consistency.

There is also benefit to consistent writing in an MRD. We can use templates to provide a consistent framework, but more importantly the prose needs to be consistent. This consistency makes it easier on the readers.

[Update: Our detailed article on Writing Consistent Requirements]

7. Unambiguous

Updated for 2010

Writing unambiguous requirements is about understanding what is written, and what is read. Without a clear understanding of your market, you can’t write unambiguously. Even when you understand your market, you risk writing something that is ambiguous to your readers. Documenting requirements is about communication. Don’t break this rule, or you’ve wasted all the energy you spent understanding your requirements.

Writing Unambiguous Requirements

A great requirement has a single interpretation. A good requirement has a single reasonable interpretation. As part of our development process, we will use listening skills like active listening to make sure that our engineering team understands the requirement we intended to write. The better the requirements, the less expensive and risky this communication process will be. Writing unambiguously is critically important when using outsourcing models that limit our interactions with other team members.

[Update: Our detailed article on Writing Unambiguous Requirements]

8. Verifiable

Updated for 2010

Writing Verifiable Requirements should be a rule that does not need to be written. Everyone reading this has seen or created requirements that cannot be verified. The primary reason for writing requirements is to communicate to the team what they need to accomplish. If you can’t verify that what the team delivered is acceptable, neither can the team. This may be the most obvious of the rules of writing requirements – but it is ignored every day.

Verifiable Requirements

We use a process that starts with market requirements and then decomposes them into software requirements specifications. The market requirements must be written in a way that lets us verify that the associated requirements specification will meet the market need.
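
One way to make verifiability concrete is to phrase each specification item as an acceptance criterion that can be checked automatically. The sketch below is a hypothetical illustration only – the lockout requirement and the is_locked_out() helper are invented for the example, not taken from any real specification.

    # A hypothetical spec item decomposed from a market requirement:
    # "After three consecutive failed logins, the account is locked."
    import unittest

    def is_locked_out(failed_attempts: int) -> bool:
        # Stand-in for the real system under test; invented for this sketch.
        return failed_attempts >= 3

    class LockoutRequirementTest(unittest.TestCase):
        def test_two_failures_do_not_lock_the_account(self):
            self.assertFalse(is_locked_out(2))

        def test_three_failures_lock_the_account(self):
            self.assertTrue(is_locked_out(3))

    if __name__ == "__main__":
        unittest.main()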

[Update: Our detailed article on Writing Verifiable Requirements]

9. Atomic

Updated for 2010

Each requirement you write represents a single market need that you either satisfy or fail to satisfy. A well-written requirement is independently deliverable and represents an incremental increase in the value of your software. That is the definition of an atomic requirement. Read on to see why atomic requirements are important.

Atomic Requirements

Every requirement should be a single requirement. If we can say “half of this requirement is implemented,” then it needs to be two or more requirements. If a requirement reads “Sales reps can manage their client list and generate custom reports,” it expresses two atomic ideas (list management and report generation), and those ideas need to be separated.

[Update: Our detailed article on Writing Atomic Requirements]

10. Passionate

Nothing great has been born from complacency, lethargy or mediocrity. When we are defining requirements, we must be passionate about what we’re doing. If we’re just going through the motions, it shows up in the writing. If we aren’t excited about a requirement, the problem is either with us or with the requirement. Either way, it won’t inspire the rest of the team to do something great.

[Update: Our detailed article on Writing Passionate Requirements]

11. Correct

[Update: Added 30 Oct 2006]

In addition to all of the analyses above, a requirement must actually further the objective it supports, and must be necessary to meeting that objective, given a particular approach to solving the problem.

Our detailed article on Writing Correct Requirements.

12. Stylish

[Update: Added 5 Jan 2007]

Style can also be the difference between a well-crafted requirement and one that is hard to read. And style, applied consistently across requirements, makes it easier to view them as a whole and to identify gaps and inconsistencies. This ease of comprehension matters when trying to achieve correct, consistent, complete requirements.

Our detailed article on Writing Stylish Requirements.

Summary

There isn’t really anything to summarize – this article is one big summary.

I would like to take a moment to thank everyone who has been subscribing, reading, commenting, and sharing Tyner Blain. We started this site six months ago and I am continually surprised, flattered, and thankful for all of the readers and support we have.

Thank you very much!

Scott

MRD Writing Tips – Ten from Michael Shrivathsan

handoff

Michael wrote five (and another five) tips on writing a market requirements document (MRD). Michael has written a good set of tips with detailed explanations and anecdotes. We’ll extend the conversation here…

Michael’s list:

  1. Write From User’s Perspective
  2. Use Screen Shots
  3. Write Using Simple Language
  4. Use Templates – But With Care
  5. Prioritize Requirements
  6. Specify What & Why – But Not How
  7. Cover Non-Functional Requirements
  8. Review & Update
  9. Define Target Market & Positioning
  10. Include a Glossary

We have re-organized these tips into three general areas of guidance and provide our thoughts.

  1. Good communication (tips: 1,2,3,4,10)
  2. Good requirements (tips: 5,6,7,9)
  3. Good requirements management (tips: 8)

Good communication

The purpose of the document is to communicate. Writing for our audience is essential to communication. Using templates to create a predictable structure and flow makes the content easier to absorb. Using clear, unambiguous language (no contrived terms or jargon), makes the ideas easier to grasp and remember. Providing a glossary of terms to provide domain context for the readers is essential for many projects.

A software developer who is an expert at load-balancing a web server may not have any idea how to calculate incremental margin for sales beyond forecast. “Incremental margin” may seem like jargon, but it is a standard term (within the management accounting domain). Use that as your rule of thumb – is the term company-specific, or would anyone in the domain understand it? If the former, it’s jargon. If the latter, it should have a glossary entry.

A picture is worth a thousand words, certainly. It isn’t clear that using screenshots or mockups is the best way to capture requirements in an MRD. Because they are so powerful, it is impossible to get the intent without being influenced by the form. This is an area where a lot of smart people agree to disagree. We’ve written about the dangers of using screenshots or other implementation cues that can be interpreted as ‘how’. As Michael points out, often ‘how’ is the appropriate way to communicate with other members of the team – it depends on the team. Barbara Nelson, a product management instructor for Pragmatic Marketing, stresses that the product management role is strategic, and that getting into details isn’t. It may be required, but if the product manager is doing it, who’s doing the strategic work?

On the flip side, Alan Cooper is promoting Interaction Design. In an interaction design process, someone is responsible, at a level analogous to an MRD, for determining both what and how. This is basically an interpretation that the how is an element of the what, not just an implementation detail. Combining interaction design with classical structured requirements might look like this amalgam.

Good requirements

Prioritization is the most important element in the document (other than the ideas being prioritized). One goal of an MRD is to communicate the vision of the software. Understanding what and why, with the context of which is more important, is the mechanism of communication to the engineering team. This message provides the framework in which they operate.

MRDs are also the right focal point for driving much of the outbound communication. The vision in an MRD identifies which white papers should be created. The (market) positioning provides guidance to sales people in individual account positioning. (Sales) demos should highlight the most valuable capabilities of the product. Roadmaps and the multi-release schedule are driven by prioritization. Schedules may be managed in timeboxes and delivered in use-case-sized chunks, but the driving prioritization is in the MRD.

Good requirements management

MRDs are not carved in stone and aren’t found in a cave high in the desert. They are not an end, they are a means – a means to communicate a vision. One thing that the Agile proponents clearly have right is that change happens. The market is a moving target, and a vision for dominating that market had better move with it. And the resulting MRD is going to change.

Product Managers are also not infallible. Left to our own devices, we will write some horrible requirements. We require feedback from the implementation teams in order to write great requirements. A great example is documenting what Kano would define as a “more is better” requirement.

Profit maximizing

The optimal point is not the revenue-maximizing feature, it is the profit-maximizing feature. We’re driven by ROI. We need an understanding of the cost function to combine with the price function to result in a profit function. We must solicit and respond to that feedback from our engineers.
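
As a toy illustration of the profit-versus-revenue point for a “more is better” capability – the revenue and cost functions below are invented for the example, not drawn from any real product:

    # Pick the level of a scalable capability (say, supported concurrent users)
    # that maximizes profit rather than revenue. Both functions are made up.
    def revenue(level: int) -> float:
        return 50_000 * (1 - 0.9 ** level)   # price function: diminishing returns

    def cost(level: int) -> float:
        return 2_000 * level                 # cost function: roughly linear effort

    levels = range(1, 31)
    revenue_maximizing = max(levels, key=revenue)
    profit_maximizing = max(levels, key=lambda l: revenue(l) - cost(l))

    print("revenue-maximizing level:", revenue_maximizing)  # always "more" (30)
    print("profit-maximizing level:", profit_maximizing)    # stops around level 9,
    # where the next increment's marginal cost exceeds its marginal revenue

The cost function is exactly what the engineering feedback described above supplies.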

This is the level of execution expertise that can differentiate our teams, and our products.

Summary

An MRD is critical to capturing product strategy information. Capturing the right information is critical. The goals of capturing that information are to disseminate the ideas, and for the team to collaborate on them. Techniques that make it easier for our target audience to read the MRD are important. And having a good approach to managing the document as it evolves is what can set us apart from other teams, or make us more competitive.

Ten Essential Practices of Continuous Integration

Rubber chicken

Martin Fowler has identified the key process elements of making Continuous Integration work. You could even argue that they are the elements that define Continuous Integration (done correctly). We include his list and our thoughts below:

  1. Maintain a Single Source Repository
  2. Automate the Build
  3. Make Your Build Self-Testing
  4. Everyone Commits Every Day
  5. Every Commit Should Build the Mainline on an Integration Machine
  6. Keep the Build Fast
  7. Test in a Clone of the Production Environment
  8. Make it Easy for Anyone to Get the Latest Executable
  9. Everyone can see what’s happening
  10. Automate Deployment

For background information, check out the Foundation Series on Continuous Integration article.

1. Maintain a Single Source Repository

The smartest people I know use Subversion when they have been able to make the choice themselves. Aside from being open source, it provides two key differentiated benefits relative to “everything else”.

  • Atomic commits: The ability to check in everything or nothing, so you don’t risk breaking the build with a partial check-in.
  • Overall project versioning: Allows you to track changes in source file directory hierarchies, file renaming, etc. Each version is of the entire project, not of a single file.

2. Automate the Build

Fowler sums it up perfectly:

“…anyone should be able to bring in a virgin machine, check the sources out of the repository, issue a single command, and have a running system on their machine”
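
In that spirit, the whole build can be driven by one small script. The sketch below is a hypothetical Python driver – the step commands, the src/ layout, and requirements.txt are assumptions for illustration; a real project would typically use its native build tool (Ant, Maven, make, and so on).

    # build.py -- a minimal "one command builds everything" sketch.
    import pathlib
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "pip", "install", "-r", "requirements.txt"],  # fetch dependencies
        ["python", "-m", "compileall", "src"],                         # sanity-check the sources
        ["python", "-m", "zipfile", "-c", "dist/app.zip", "src"],      # package a deployable artifact
    ]

    def main() -> int:
        pathlib.Path("dist").mkdir(exist_ok=True)
        for step in STEPS:
            print("running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                print("BUILD FAILED at:", " ".join(step))
                return 1
        print("BUILD OK")
        return 0

    if __name__ == "__main__":
        sys.exit(main())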

3. Make Your Build Self-Testing

Include automated testing as part of the build.
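
To make the hypothetical build driver sketched above self-testing, the test run becomes one more scripted step whose failure fails the whole build. This sketch assumes the tests live under tests/ and use the standard unittest runner.

    # self_test_step.py -- the test run as a build step: a failing suite
    # fails the build by passing a non-zero exit code back to the caller.
    import subprocess
    import sys

    result = subprocess.run(
        ["python", "-m", "unittest", "discover", "-s", "tests", "-v"]
    )
    sys.exit(result.returncode)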

4. Everyone Commits Every Day

More frequently when possible. This is the minimum.

5. Every Commit Should Build the Mainline on an Integration Machine

A separate, dedicated machine does a daily (or more frequent) build and full test suite run autonomously. The “build and test” model above relies on people to kick off the build when they commit. A scheduled task on a separate machine provides a safety net for human error (oversight). Alternately, companies like Calavista can make this foolproof by automatically triggering an automated build as part of every commit. With Calavista’s devEdge, if the developer “commits”, what really happens is that the automated build/test cycle is triggered with the new code, and it gets promoted only if all the tests pass.
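
Here is a bare-bones sketch of the safety-net idea: a dedicated machine that notices new mainline revisions and rebuilds. The Subversion commands are real, but the polling interval and the build.py driver are assumptions carried over from the earlier sketch – a real team would use a CI server rather than this hand-rolled poller.

    # integration_poller.py -- stand-in for an integration machine: poll the
    # repository and rebuild whenever the mainline revision changes.
    import subprocess
    import time

    def current_revision() -> str:
        # 'svnversion' prints the revision of the working copy.
        return subprocess.run(["svnversion", "."],
                              capture_output=True, text=True).stdout.strip()

    def main() -> None:
        last_built = None
        while True:
            subprocess.run(["svn", "update"])                 # pull the latest mainline
            revision = current_revision()
            if revision != last_built:
                ok = subprocess.run(["python", "build.py"]).returncode == 0
                print(f"revision {revision}: {'PASSED' if ok else 'FAILED'}")
                last_built = revision
            time.sleep(60)                                    # assumed polling interval

    if __name__ == "__main__":
        main()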

6. Keep the Build Fast

Tests can take a long time, but the build itself should take only about ten minutes. Fowler suggests a strategy of staged builds to address the 10-minute threshold. Run unit tests against the 10-minute build, and run the full suite in parallel or in series.

Another option is to use statistical sampling of tests to get a “10 minute answer” while the full suite is kicked off in parallel.
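
A minimal sketch of the staged approach – answer quickly with the fast tests, then run the slow suite. The test directory names and the two-stage split are assumptions for illustration.

    # staged_build.py -- fast "commit build" first, slower suite second.
    import subprocess
    import sys

    def run_stage(label: str, test_dir: str) -> bool:
        print(f"--- {label} ---")
        cmd = ["python", "-m", "unittest", "discover", "-s", test_dir]
        return subprocess.run(cmd).returncode == 0

    def main() -> int:
        # Stage 1: the ten-minute answer -- fast unit tests only.
        if not run_stage("commit build (fast unit tests)", "tests/unit"):
            return 1                      # fail fast; developers hear back quickly
        # Stage 2: the secondary build -- slower integration and end-to-end tests.
        return 0 if run_stage("secondary build (slow suite)", "tests/slow") else 2

    if __name__ == "__main__":
        sys.exit(main())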

7. Test in a Clone of the Production Environment

Eliminate even more variables. Make sure the tests are running against a clone of the production environment. Teams that are pushing the envelope today use virtual machines (VMs) to quickly create cloned production environments, install the software and run the tests.

8. Make it Easy for Anyone to Get the Latest Executable

Make sure everyone knows where the latest build can be found. It’s probably a good idea to keep recent builds in the same place too, in case a problem sneaks through the process temporarily (like a memory leak or another obscure, not-yet-tested situation).

9. Everyone can see what’s happening

Visibility! Email the team when builds start and finish, including success/failure information. Put a rubber chicken on the desk of the person currently running the build (don’t ask – just read Fowler’s post). Ring a desk bell when the build passes. Have fun with it.

10. Automate Deployment

Make deployment into production as easy as running the build. Since tests are run against a production clone and are already automated, this requires only minor incremental effort.
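
As a sketch of “deployment is just one more scripted step” – ship the same artifact the build produced and smoke-test it. The artifact name, target directory, and tests.smoke module are all placeholders, not a real deployment recipe.

    # deploy.py -- copy the tested artifact into place, then smoke-test it.
    import shutil
    import subprocess
    import sys

    ARTIFACT = "dist/app.zip"            # produced by the hypothetical build.py
    TARGET_DIR = "/srv/app/releases"     # assumed deployment location

    def main() -> int:
        shutil.copy(ARTIFACT, TARGET_DIR)
        smoke = subprocess.run(["python", "-m", "unittest", "tests.smoke"])
        if smoke.returncode != 0:
            print("smoke test failed -- roll back to the previous release")
            return 1
        print("deployed and smoke-tested")
        return 0

    if __name__ == "__main__":
        sys.exit(main())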

Conclusion

Martin presents a great list. In addition to the above, we would suggest

  • Generate test-results documents per requirement. For each build, identify which requirements pass, fail, or are untested. The most relevant information for communicating outside of the team is status of previous requirements (did our regression tests pass?) and current requirements (are we almost done with this timebox?).
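
A sketch of that per-requirement roll-up: map each automated test to a requirement ID and report pass / fail / untested for every requirement. All of the IDs, test names, and results below are invented sample data.

    # requirement_report.py -- roll individual test results up to requirements.
    TEST_TO_REQUIREMENT = {
        "test_login_with_valid_password": "REQ-001",
        "test_lockout_after_three_failures": "REQ-001",
        "test_export_monthly_report": "REQ-002",
    }
    ALL_REQUIREMENTS = ["REQ-001", "REQ-002", "REQ-003"]

    # In practice these outcomes would come from the test runner's output.
    RESULTS = {
        "test_login_with_valid_password": "pass",
        "test_lockout_after_three_failures": "fail",
        "test_export_monthly_report": "pass",
    }

    def requirement_status(requirement: str) -> str:
        outcomes = [RESULTS[test]
                    for test, req in TEST_TO_REQUIREMENT.items()
                    if req == requirement]
        if not outcomes:
            return "untested"
        return "fail" if "fail" in outcomes else "pass"

    for req in ALL_REQUIREMENTS:
        print(req, requirement_status(req))   # REQ-001 fail, REQ-002 pass, REQ-003 untested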

Targeted Communication – Three Tips

cliff notes

Most guides to writing an executive summary miss the key point: The job of the executive summary is to sell, not to describe.

This from Guy Kawasaki’s recent post, The Art of the Executive Summary. Guy’s article is structured towards pitching an idea to a potential investor. We’re going to apply the same rationale to the communication that is key to successful product development – communication from the team, to stakeholders and sponsors.

Two types of communication – tactical and strategic

For this post, we’ll assume that we are part of a team delivering a new software product for our company. We have users, marketing, product management, development, quality, and SMEs (subject matter experts). We are all working together to deliver great software. We communicate with each other as part of executing on our tasks. This is tactical execution.

horse with blinders

James Shore posted Two Kinds of Documentation last month, where he presents an Agile perspective on documentation. The two types of documentation he identifies are ‘get work done’ documentation and ‘enable future work’ documentation. The purpose of these documents (or more precisely, the communication they represent) is to help our team execute either now or in the future. Internal communication is tactical communication.

We also communicate with people outside of our team. We communicate to set expectations with customers, users, and clients. We communicate with sponsors, customers, and others who fund our software development. Without these channels of strategic communication, we won’t have a project – or worse, won’t have a customer when we’re done. External communication is strategic communication.

Tailoring strategic communication

Guy’s quote is very insightful – there is only one reason for presenting to the executive: to get funding. Why? Because the executive has only one reason for listening to us – to decide if we should get funding. All of our strategic communication should focus on one goal – the goal of our audience.

Here are three tips on providing targeted communication to people external to the team.

    1. It’s the economy, stupid! When President Clinton ran for office the first time, this was one of his slogans. He capitalized on the fact that his opponent was busy talking about what he (the opponent) thought was important, while then-Governor Clinton talked about what his audience thought was important. This is the hardest thing to remember, especially when we’re passionate about what we’re creating. We need to run our communication through the “so what” filter. If it isn’t important to our audience, don’t make them listen to it (or read it).
    2. No habla Ingles. Once we identify what our audience cares about, we have to make sure that we can communicate that information in a language that they understand. In one of our earliest posts, Intimate Domains, we talk about how people are so rooted in their areas of expertise that they almost speak different languages. If we can’t modulate our signal to a band-passed frequency*, we might as well be speaking jargon gibberish.
    3. Brevity. Clear. Concise. Contextual.

    [Update 25 Apr 06]

    We have added a post, Targeted Communication – Status Reporting as a detailed example of targeted communication.
    – –

    *If we can’t get the message across in terms our listener understands,...

Top ten tips for preventing innovation

image of lock on door

At a recent presentation in Austin by Seilevel about the goals and methods of requirements gathering, a member of the audience asked “What can we do with our requirements to assure innovation?” That’s a tough question with an easy answer – nothing.

What if the question had been “What can we do to prevent innovation?” That’s a better question with a lot of answers.

Struggling with too much innovation?

Yes, people have been innovating since fire and the wheel – it’s a curse we’ve inherited. In modern times, much of that innovation has happened inside companies. 3M had the Post-it note; Lockheed had the Skunk Works that created the SR-71. Google allows their employees to dedicate 20% of their time to whatever interests them – and Google’s employees innovate a lot.

Most companies do a good job of providing incremental improvements to existing products and processes. What are those few who struggle with innovation doing wrong?

Companies with track records of innovation have flawed processes.

  • They fail to screen out likely innovators in their hiring process.
  • They mismanage their employees, who end up innovating when they should be toeing the line.
  • They inadvertently reward innovation instead of mediocrity with recognition and compensation.
  • They create opportunities to innovate and their employees drive Mack trucks through these loopholes.

Here is some guidance about how to fix those problems:

Top ten tips for preventing innovation

  1. Hire employees looking for safety in their roles. Innovation happens when people stretch outside their comfort zones – don’t let them stretch! Find people who primarily want security and a nine-to-five role, and stay away from those troublemakers who want to “change the world.”
  2. Hire incompetent employees. What better way to prevent innovation than to have people who have to focus just to do the bare minimum? For extra safety, try and find someone who can take credit for other people’s work and hide their own incompetence – these people are easier to promote, which will become important later. If we are forced to hire someone who is competent, it’s critical that we make sure that they only have one area of expertise. People with more than one area of expertise, switch-hitters, just cause trouble by talking to people on other teams.
  3. Keep salaries below the 75th percentile. Innovators know their value – and when they aren’t applying for jobs with intrinsic utility to them, they are commanding higher salaries. If we keep our salaries low, there’s much less risk of one of these innovators sneaking into our organization. As a bonus, we’ll save a fortune!
  4. Read The Ten Faces of Innovation by Tom Kelley of IDEO. He focuses on the types of people and organizational behavior that encourage innovation. The writing style is very clever – Mr. Kelley writes as if he were trying to encourage innovation – what a riot! He identifies ten personas that contribute to innovation. Put those ten faces on the wall in HR like an FBI most-wanted poster and coach HR to screen those people out.
  5. Treat employees like garbage. Yell at them. Whenever possible, call them at midnight to yell at them some more. They work for us. If they get uppity, make them work on the weekends. Make them dig holes and fill them back up again. Threaten them – especially when they need the job. If you can’t yell, at least be condescending in public forums. Remember we are smarter than they are. Punks.
  6. Reward conservative and marginal successes. The old rule of thumb for office politics was “It takes ten good projects to recover from one bad project.” Stick to it! If we punish people for mistakes when they ‘swing for the fences’, and reward them for marginal and safe projects, they will quickly get the idea. This is the most subtle of all the tips – but don’t worry – people will figure out the reward system and shy away from those risky projects. This technique has the added benefit of propagating itself up and down the management hierarchy. Many organizations get lucky, and do this one accidentally. Wish we were all so lucky!
  7. Micromanage. We’ve been promoted because we understand their jobs so well that we could do them in our sleep. Whatever those pesky little people think, it’s wrong. We know what we want, we know how we want it (not like that, you fool!). Every day we should make sure they do things exactly like we want. Even things like using the right font in their emails can be important. If anything slips through unmanaged, then we aren’t doing our jobs. Of course, if we have a good boss, he’ll tell us exactly how to manage them.
  8. Only create customer-requested features. Let our customers tell us what to do. Lucky for us – customers don’t have big ideas, they keep us focused on what we’re doing. Don’t let them whine about their other problems – that’s not why we’re talking to them. We just want to know if they like the idea of animated buttons on all the dialogs. Stay away from the unhappy customers – if we aren’t getting the job done now, well, we don’t really care what they say (they are existing customers, we need new customers). We’re here to solve our problems. Oh – and don’t second guess the customer. If they say they want the menu items in alphabetical order, well, that’s what they want. The customer is always right. If Henry Ford had listened, think of how fast horses would be today. Even better, appoint a user-representative, then we don’t have to talk to the customers at all.
  9. Make performance reviews easy. Create some easy-to-measure metrics (like # of sick-days taken, # of powerpoint slides created, # of meetings attended), and use those for performance reviews. People always gravitate toward the metric. We can run the reviews with a minimum of effort, giving us more time to tell them how to do their jobs. Just an hour a year. Some managers can give feedback in 15 minutes.
  10. Build a kingdom. When we have information, that means we have power. With that power, we can grow our organization. The more people we have, the more important we are. We need to make sure that those other teams don’t get our information. They might apply it in ways that we didn’t intend. While we’re at it – make sure our people don’t find out what we know. Not only will it protect us from them, but it will keep them from accidentally discovering a more important problem, or an alternate way to apply what they already know to a new problem domain.

Software Testing Series: Organizing a Test Suite with Tags Part Two

organized typesetting letters

Organizing a test suite with tags (part 2)

This is the second in a three-part post about using tags as a means to organize an automated test suite.

Part 2 of this post can be read as a standalone article. If it were, it would be titled Top five problems with test automation suites. If you’re only reading this post and not parts 1 and 3, pretend that this is the title.

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we will define the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

What are the problem areas inherent in managing automated tests?

We start with identification of the problems or opportunities, before defining what the requirements will be. This is the same process we discussed in From MRD to PRD, applied to the test-automation space. The following are the top five problem areas we can identify about test automation suites.

  1. Maintaining the suite becomes too expensive. Once we have a suite in place, we have to maintain it. As the size of the suite grows, the amount of maintenance of existing tests grows. It grows in proportion to the number of tests in the suite and the rate of change of the underlying software being tested. As the amount of maintenance grows, so does its expense.
  2. Developers will never start using the suite. Change is bad. Well, for many people it is. Asking someone with a full time, salaried job to take on additional responsibilities has to be done correctly. There is absolutely a risk that people won’t start using the suite. Since this project is focusing on iterative development of an already deployed tool, already in use, this problem really doesn’t apply.
  3. Developers will stop using the suite. Developers avoid tedium. They’re smart. They want to avoid unnecessary work, menial work, and irrelevant work. If the developers perceive the test suite in any of these ways, we’re doomed – they will stop using it.
  4. Not testing the right stuff. A test suite that doesn’t test the right areas of the software is worse than not having one at all – because it gives you a false sense of confidence.
  5. Test suite becomes less effective over time. An initially effective suite can grow less effective over time as the underlying software changes. Individual tests become irrelevant as they become impossible to reproduce with the application – perhaps the user interface has changed. If test design was linked to the heaviest usage patterns, and those patterns change, then coverage of the new heavy usage parts of the suite will be reduced – and the effectiveness of the suite will be reduced.
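
To make the discussion concrete, here is a minimal, home-grown sketch of tagging tests and running only a tagged slice of the suite – it bears directly on problems 1 and 3 above, because developers can rerun just the tests relevant to the code they touched. This is an invented illustration, not the tagging mechanism described in part one of the series.

    # tag_selection.py -- attach tags to tests and build a suite from a tag.
    import unittest

    def tags(*labels):
        """Decorator that attaches tag labels to a test method."""
        def decorate(fn):
            fn._tags = set(labels)
            return fn
        return decorate

    class OrderTests(unittest.TestCase):
        @tags("checkout", "fast")
        def test_empty_cart_total_is_zero(self):
            self.assertEqual(0, 0)           # placeholder assertion for the sketch

        @tags("checkout", "slow", "gui")
        def test_full_checkout_through_the_ui(self):
            self.assertTrue(True)            # placeholder assertion for the sketch

    def suite_for_tag(tag: str) -> unittest.TestSuite:
        """Collect only the tests carrying the given tag."""
        suite = unittest.TestSuite()
        for name in unittest.defaultTestLoader.getTestCaseNames(OrderTests):
            if tag in getattr(getattr(OrderTests, name), "_tags", set()):
                suite.addTest(OrderTests(name))
        return suite

    if __name__ == "__main__":
        # A developer touching checkout code reruns just that slice of the suite.
        unittest.TextTestRunner(verbosity=2).run(suite_for_tag("checkout"))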

Which problems should we address with software?

With limited resources, we need to make sure that we focus our software efforts on those problems where software can have the most impact. We’ll start by identifying which of the five problems are good candidates for a software solution:

  1. Maintaining the suite becomes too expensive. There are three approaches to solving this problem – reduce the required maintenance, make the required maintenance more efficient, and reduce the cost of the labor that maintains the solution. Labor cost reductions may very well be the most effective general way to solve this problem, but given the real world project constraints for the project behind this post, we aren’t exploring that option. This is a candidate for the software solution.
  2. Developers will never start using the suite. Make them want to use it, or make them use it. We believe you want to make them want to use it – both by evangelizing the benefits and by quickly crossing the suck threshold so that users get positive feedback. For this project, we have taken that approach, although it’s true that there is also a mandate from the dev team’s managers that we must make sure they use it. With process and education approaches that have proven effective, this is not a target of the current software solution.
  3. Developers will stop using the suite. The looming mandate will assure that developers won’t go AWOL on the suite. But if they can present a compelling reason to their managers, there is a risk that they will decide to stop using it. This is a candidate for the software solution.
  4. Not testing the right stuff. Test suite planning is a science unto itself. We will keep in mind “ways to make test suite planning easier” as a candidate for the software solution, but we aren’t otherwise targeting this for the current software solution.
  5. Test suite becomes less effective over time. Tests can grow irrelevant over time when the software they test is constantly changing (as in this project). This problem has been addressed to a large extent by using whitebox unit tests in the test suite. We are not targeting this as part of the current software solution.

Reminder

  • In part one of this post we developed an understanding of tagging as a technology, including its pros and cons.
  • In part two of this post we defined the top five opportunities and problems inherent in the organization of automated test suites.
  • In part three of this post we will explore an approach to combining tagging with test suite organization.

– – –

Check out the index of software testing series posts for more articles.

Five Measures of Product Manager Performance

American idol judges

Joy posted a really good article last week at Seilevel’s requirements defined blog, Measuring product manager performance on internal system products. Her post is a followup to an extensive and heated debate that happened last fall on the Austin PMM forum. It’s a great forum to subscribe to – a lot of experienced people with strong opinions and steamer trunks full of anecdotal data – and they don’t all agree. You get to see the coin from all three sides* with this group – it’s awesome.


Top Ten Use Case Mistakes

broken glasses

The top ten use case mistakes

We’re reiterating the top five use case mistakes from Top five use case blunders and adding five more. For details on the first five, go back to that post.

There’s also a poll at the end of this post – vote for the worst mistake.

  1. Inconsistency.
  2. Incorrectness.
  3. Wrong priorities.
  4. Implementation cues.
  5. Broken traceability.
  6. Unanticipated error conditions. The error conditions are explicitly called out in a formal use case as exception courses. When we fail to think about how things can go wrong, we take a bad situation (an error) and make it worse by leaving our users with no reasonable way to deal with the errors.
  7. Overlooking system responses. When people use computers, the computers respond. It is a cause and effect relationship – and ideally one that is predictable and comfortable to the user. Reading a use case should be like watching a tennis match, with activities being performed alternately by the user and the system. “The user does X, the system does Y, the user does Z…”
  8. Undefined actors. Novice and expert users have different ways of using an application. Different design tradeoffs will be made to accommodate these users. Understanding the domain of the user can also be important. Imagine a calculator application – the use case of “get a quick answer to a calculation while doing something else” will be very different for a loan application officer than it will be for a research scientist.
  9. Impractical use cases. We have to remember to validate with our developers that they can implement the use cases, given the current project constraints. As a former co-worker is fond of saying – “It’s software – we can do anything” which is true. But considering the skills of the currently staffed team, the budget and timeline for the project, and the relative priority of the use case is prudent.
  10. Out of scope use cases. If we don’t define the system boundaries, or scope of our effort, we risk wasting a lot of time and money documenting irrelevant processes. To start with a deliberately absurd example – although our user has to drive to the office in order to perform her job, we don’t include her commute in the scope of our solution. An online fantasy sports league application would certainly include a use case for picking players for individual teams – it may or may not include researching player statistics. Knowing where the boundary is will prevent us from defining and building undesired or unnecessary functionality.

More discussion on common use case mistakes

I liked this article by Susan Lily on use case pitfalls. Susan goes into more detail on out of scope use cases (#10 above), where she talks about defining the system boundary in UML use case diagrams as a means of helping to avoid out of scope use cases. She also encourages using a standard template for use cases (Inconsistency – #1) and proposes a minimum set of criteria for creating your own templates. She provides a good argument against CRUD use cases – in a nutshell, they do not represent primary user goals (but rather tertiary goals).

At one point she proposes a compromise of including low-fidelity screen mockups in use cases as a means to make them easier to understand and more efficient to communicate. I disagree with her here – this is at best a slippery slope, and more likely the use case equivalent of my requirements documentation mistake. Because images can be so powerful – even the simplest screen design elements will provide design guidance (Implementation cues – #4) to the developers – IMHO, it is unavoidable.

We’ve added a new feature to Tyner Blain – polls on individual posts! We’re going back and adding polls to the most popular posts, and including them in many of the new ones. Each poll can have up to 7 entries – if an item isn’t displayed, hover over the up or down arrows and the list will scroll. If the text for an entry appears truncated, hover over it with the mouse and the text will scroll. Vote early and vote often, and thanks for your vote!

Poll: The worst use case mistake is

If you selected ‘Other – not on the list’ please add a comment and tell us why!

Top five presentation tips

microphone

From Start to End has a great post, Some tips on presentations. Very little we can add here – check it out.

Our top five presentation tips (our first four picks are from the list behind the link)

  1. Know your audience. A key preparation – you have to have a goal for a presentation. Are you convincing, educating, or inspiring people? What do those people care about (and what do they already know)? Also – do you actually know the people in the audience?
  2. Revise and rewrite. Editing is the best thing ever. When we first put ideas down, it’s generally from our point of view. Validate that the content is targeted at the audience.
  3. Minimize the text on the slide. Eyecharts distract from the presenter. People read ahead – the slide content should provide cues for you to speak, and for your audience to remember. If we need a bunch of text to support our point, we include it in a handout.
  4. One idea per slide. Focus!
  5. Include supporting slides. We’re already simplifying the content we present to maximize the impact of the ideas, which means there is more content somewhere that we haven’t shown. Often someone in the audience (a genuinely interested person, a micro-manager, a dude trying to look smart) will ask drill-down questions – “Where did you get that data?” “Isn’t that diagram overly simplified?” Add those supporting slides (created in previous presentations, or prior to revision) to the deck after a blank slide titled “End of presentation”. Don’t plan on showing these slides, just have them at the ready.

The best advice I know about preparing content for a presentation: Plan the formal part of the presentation to share 2/3 of what you want to tell the audience. Draw that last third out through engaging conversation and informal asides during the formal presentation.