Monthly Archives: December 2005

Readability and Requirements


Thanks to Download Squad for pointing me at the Juicy Studio: Readability Test! You can go to Juicy Studio’s site and calculate the reading level of any URL.

You can also try the Readability Grader at Jellymetrics, for a modern take on it.

Of the multiple analyses provided, the Gunning Fog index is the easiest result to read – it is a proxy for the number of years of schooling required to read something.

Some Gunning Fog index results…
Tyner Blain: 11.38 (The Wall Street Journal is 11, The Guardian is 14)
The Dilbert blog: 8.77
Joel on Software: 11.05

What makes this interesting is that you can apply the same algorithm ((avg # of words per sentence) + (% of words with 3 or more syllables)) * 0.4 to any document. For example, Tyner Blain’s blog (prior to this post) had 11.17 words per sentence, and 17.27% “hard” words, yielding 11.376 as a Gunning Fog index. There are other indices as well, designed to provide different insights into the writing.
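
As a sketch of the formula above, here is a small Python function that approximates the Gunning Fog index. The syllable counter is a crude vowel-group heuristic (not a dictionary lookup), so its scores will differ somewhat from Juicy Studio’s:

```python
import re

def gunning_fog(text: str) -> float:
    """Approximate the Gunning Fog index of a passage.

    Fog = 0.4 * (average words per sentence + percent of 'hard' words),
    where a hard word has three or more syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        # Rough heuristic: each run of consecutive vowels is one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    hard = sum(1 for w in words if syllables(w) >= 3)
    avg_sentence_len = len(words) / len(sentences)
    pct_hard = 100.0 * hard / len(words)
    return 0.4 * (avg_sentence_len + pct_hard)
```

Running this over a draft before and after editing gives a quick, if superficial, signal of whether revision shortened sentences or simplified vocabulary.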

It’s important to note that these are mathematical analyses, and provide no insight into comprehensibility. If you follow the link to everything you ever wanted to know about readability tests, you will find that this sort of analysis is generally discouraged today. These formulaic studies are superficial measures of the text. They do not provide any insight into the difficulty of the vocabulary, ease of interpretation by non-native speakers of the language, or comprehensibility in general.

If this is a bad test, why are you telling me about it?

Good question. The statistics can identify if a draft is wordy or dense – raising a red flag that content should be considered for revision. While we can’t use this test to say “writing is good”, we can use it to say “writing might be bad”. When we are receiving feedback that our writing is too hard to read, this type of analysis can give us feedback about how effective our editing has been.

The goal of writing a requirement is not pedantic accuracy, it’s effective communication. In addition to crossing domain boundaries with the different audiences that consume our requirements, we often are crossing language barriers and varying educational levels. It’s hard enough conveying concepts that presume contextual knowledge; our readers shouldn’t have to parse the text repeatedly.

CRUDdy use cases and Shakespeare


CRUD (Create, Retrieve, Update, Delete) is an acronym used to refer to a set of mundane, important, and indirect (if not implicit) requirements or use cases. To create a report on orders, you have to first create the orders and retrieve them. Further, the ability to update (edit) and delete the orders is probably also important. Another description of the CRUD pattern is here.
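
As an illustration of the pattern, the four CRUD operations map onto a minimal interface like the in-memory sketch below (illustrative only – a real system would add persistence, validation, and permissions):

```python
from typing import Any, Dict, Optional

class OrderStore:
    """Minimal in-memory sketch of the four CRUD operations for orders."""

    def __init__(self) -> None:
        self._orders: Dict[int, Dict[str, Any]] = {}
        self._next_id = 1

    def create(self, data: Dict[str, Any]) -> int:
        # Create: store a new order and return its id.
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = dict(data)
        return order_id

    def retrieve(self, order_id: int) -> Optional[Dict[str, Any]]:
        # Retrieve: fetch an order, or None if it doesn't exist.
        return self._orders.get(order_id)

    def update(self, order_id: int, changes: Dict[str, Any]) -> bool:
        # Update: merge changes into an existing order.
        if order_id not in self._orders:
            return False
        self._orders[order_id].update(changes)
        return True

    def delete(self, order_id: int) -> bool:
        # Delete: remove the order if present.
        return self._orders.pop(order_id, None) is not None
```

The point of the pattern is that a use case like “create a report on orders” implicitly requires all four of these operations to exist first, even though none of them appears in the report use case itself.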

Continue reading CRUDdy use cases and Shakespeare

Why We Should Invest in Requirements Management


Need to convince someone in your management chain why they should invest in managing requirements? There are some great arguments in this post by sudhakar –

Agile RUP for Product Development: Best Practices of Requirements Collection.

There is a lot more in the post, but the key high level “Why should we invest in managing our software projects?” answers are summarized from several research reports:

  • The Standish Group reports that over 80% of projects are unsuccessful
    because they are over budget, late, missing functionality, or some
    combination of the three.
    (http://www.standishgroup.com/sample_research/chaos_1994_1.php)
  • 53 percent of projects cost 189 percent of their original estimate.
  • Average schedule overruns may be as high as 100%.
  • Between 40% and 60% of software defects are caused by poor requirements
    definition.
  • About one-quarter of all projects are cancelled.
  • Customers do not use 20% of their product features.

And the key lower-level reasons why requirements affect the success of a project:

  1. Better requirements enable better design and architecture decisions.
  2. Better requirements reduce iterations in the implementation, because the dev team isn’t operating on incomplete information.
  3. Better requirements reduce (bad) assumptions by the developers, resulting in higher quality.
  4. Better requirements improve the testability of the application, both mitigating risk and enabling automated testing (reducing maintenance costs).

There’s more stuff in the post, and it’s a good read. Check it out.

Managing requirements conversations


In Documents vs. Conversations, on the Pyre blog, Greg Wilson does that thing that we so rarely do – he takes a step back, and thinks from an entirely different perspective about managing requirements. He proposes the idea of managing requirements as conversations, instead of as documents.

Greg makes the point that the reason for managing requirements is not to create a document, but rather to communicate. He points out that no one can have an omniscient awareness of the entire spec, and therefore people operate based on an understanding of a subset of the spec at a given time. And since the evolution of the spec happens by conversation, he proposes a conversation-centric view of the requirements. Greg points to the successful examples of open source software development teams driving feature delivery from conversations.

Jerry Aubin of Seilevel, posts about context in their blog, Requirements Defined. The focus of Jerry’s post is on the conversations that drive the creation of use cases.

Conversations are most relevant while a spec is being defined. After the spec is defined, the conversation is less relevant – determining the genesis of a requirement is less important than determining the relevance of a requirement (i.e. which use case does it support, how does it help to achieve ROI, when will it be delivered). The primary beneficiary of a conversational analysis is the requirements manager, who benefits only in that it provides organizational context, which helps when defining a subset of the specification.

As Greg pointed out, we interpret subsets of the specification at a point in time. Writing of the specification also happens piecemeal. Once a portion of the spec has been written, the conversation behind it immediately starts to lose its relevance. When internal teams are responsible for distilling the inputs from multiple conversations with multiple customers, conversational models will help the product managers – but only until the spec is written; then the conversation becomes an (infrequent) reference.

When managing requirements for customer-specific software, a conversational model will have even less benefit. A discrete set of customers must approve a set of requirements or a statement of work. In these cases a document-centric model will be more effective.

There is definitely value in helping a product or program manager to manage and mine multiple conversation threads as part of formulating a spec. Changing the tools we use to manage the resultant specs would be a step backwards, imho. Once the evolving spec is actionable, the consumers of the spec will still benefit from a contextual presentation. I would suggest using a tool like OneNote to collect and organize the information that comes from multiple sources (email, IM, hallway conversations, whiteboard discussions, etc). The custom tagging and search capabilities due to be introduced in Windows Vista could also be leveraged to make conversation management more effective.

Use case series: UML 2.0 use case diagrams

The UML way to organize and manage use cases.

Pros

  • Provides a high level view of the use cases in a system, solution, or application.
  • Clearly shows which actors perform which use cases, and how use cases combine to form business processes

Cons

  • Presents an “inside-out” view of the system. This description reflects “what it is” not “why it is” – and it is easy to lose sight of why a particular use case is important.
  • Poor communication tool when speaking to users and stakeholders about why and when the system will do what it will do.
  • Time consuming to create and maintain

Instead of duplicating the explanation and summary work already done by Chris at grillcheese.blogspot.com, I’ll point you to his post, Introduction to UML-2 use case diagrams. Agile Modeling also has a detailed post on UML-2 use case diagrams.

There are ultimately four pieces of information you want to know about use cases. UML diagrams will show you two of them.

  1. Which actors perform a particular use case? UML diagrams show this.
  2. Which use cases are combined to create a business process? UML diagrams show this.
  3. When is a use case scheduled for availability? UML diagrams do not show this.
  4. Why are we doing a particular use case? UML diagrams do not show this.

Knowing that we can’t answer all 4 questions with a single communication tool, here’s what we should do:
(1&3) Create a matrix view of use cases versus actors to show which actors perform each use case, and when they will be available.
[Image: use case matrix]
(2) Create a UML 2.0 use case diagram if you find that the benefits for your communication outweigh the costs of maintaining the diagrams. On projects I’ve worked on in the past, a simple flow chart of use case names has been used instead – it can be made in a fraction of the time, is more easily scanned, and presents the information more densely (and just as effectively). If you are managing requirements with a tool that automatically generates the diagrams, then use them – but don’t spend a lot of time on them. Suggestion – use the flow chart.
(4) Ultimately, UML diagrams (often referred to as “use case cartoons”) focus your attention on what you are building, at the expense of losing focus on why you are building it. Create a mapping or maintain links (traceability) from use cases to goals.

The why of the use case is the most important information. Don’t let use case cartoons distract you from it.

Poll: Which use case format do you use?

If you answered ‘Other’ please comment and let us know what you use!
Quick links to posts in this series

Top Five Use Case Blunders

We all make mistakes. When we mess up use cases, it is most likely one of these five things.

  1. Inconsistency. All the use cases we write need to be at the same level (macro-scopic, not microscopic or telescopic) and use the same format (narrative versus list, formal versus informal). Managing sub-use cases versus use cases is part of getting the level right.
  2. Incorrectness. We must communicate with the users and business analysts to validate that we’ve captured the right steps and information. My experience has been that the alternate course and preconditions are toughest to get right (see formal use case).
  3. Wrong priorities. We have to implement the right use cases, in the right order. The most important use cases need to be implemented first. Importance is driven by ROI. Balance this with implementation dependencies and change management constraints.
  4. Implementation cues. We should not specify the implementation in our use cases. Avoid using language that implies implementation choices.
  5. Broken traceability. Use cases enable goals (and therefore ROI). Functional requirements support use cases. We have to maintain the links from goals to use cases to functional requirements, or changes to any of them will not propagate their impacts.

We’ve added to this list – see Top ten use case mistakes, which now includes items 6-10.
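
Blunder 5, broken traceability, lends itself to simple automated checks. The sketch below (with hypothetical goal and use case names) flags use cases that no goal traces to, and use cases with no functional requirements behind them:

```python
# Hypothetical traceability data: goal -> use cases it depends on,
# and use case -> functional requirements that support it.
goals = {
    "increase order throughput": ["create order", "approve order"],
}
use_cases = {
    "create order": ["REQ-1", "REQ-2"],
    "approve order": ["REQ-3"],
    "export report": [],  # traces to no requirements yet
}

def orphaned_use_cases(goals, use_cases):
    """Use cases that no goal traces to - they may not contribute to ROI."""
    traced = {uc for ucs in goals.values() for uc in ucs}
    return sorted(set(use_cases) - traced)

def unsupported_use_cases(use_cases):
    """Use cases with no functional requirements behind them."""
    return sorted(uc for uc, reqs in use_cases.items() if not reqs)
```

Running checks like these whenever the spec changes makes it much harder for an impacted link to go unnoticed.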

On the bright side – each of these blunders is very easy to identify, and easy to correct.

If you’re new to this blog, you may want to look at the use case series.

ObRelatedLink:
Also, here’s a nice article from IBM with some lower level techniques to improve your use case writing. These tips are along the lines of “write in the active voice” and “use a template” and other tactical guidelines.

[Update] Be sure to check out the full list for details of the other five blunders.

Communicating a delivery schedule with use cases

Use cases are a great tool for establishing the scope of a project. They provide a framework for defining what needs to be implemented (if it doesn’t support a use case, we don’t need to implement it). They also set expectations with stakeholders (here’s what you can do with the system).

Many teams fail to leverage use cases to communicate a delivery schedule.

Quick tangent on why incremental delivery is good

When we deliver a project incrementally, we have to think about it in terms of delivering the features needed to support atomic use cases. If we deliver three of the four features required for a particular use case, then users can’t perform the use case. If users don’t perform the use case, then we won’t generate the return on the investment. Use cases are the key to scheduling incremental delivery. Note that a product release roadmap may be driving a high level schedule of functionality, and that roadmap will drive the schedule of completed (enabled) use cases.

Communicating schedule with use cases

Setting expectations with users and other stakeholders requires that we communicate the schedule of delivery for our software. Stakeholders don’t think or care directly about when feature X is enabled. Feature X is implicitly worthless. When a user can perform a particular use case, which depends upon feature X – that’s what they care about. The executive who funded the project cares about when the business will see a financial benefit from using the software, which is achieved through a set of use cases, which in turn depend upon feature X. Why do so many project, product, and program managers try to communicate scheduling to this audience in terms of features? We should speak the language of the listener.

When talking about the schedule to a user, we should talk about the schedule of particular use cases.

On a large project last year, we were managing requirements using Caliber RM, where we maintained use cases and functional specs with tracing between them. This project was for a very large enterprise software deployment, and it had over 50 use cases with about a dozen different primary actors (classes of users of the system, like regional salespeople, financial accounting people, etc). We were also managing the project across multiple quarterly releases. We needed to set expectations with multiple stakeholders at different levels of the organization, and we had a global development team working on the project from multiple locations.

We needed to present a clear and concise report that would show which use cases were enabled in each release of the software. We were maintaining all of the data in Caliber RM, and the dynamics of the project were such that we were regularly reprioritizing and rescheduling features for upcoming releases. We also wanted to spend our time managing the project, not the reporting on the project, so I built some automation to generate a table showing the schedule for use cases.

The information you want to show in the report is easily understood in a matrix, where each row represents a use case, each column represents a particular actor, and each cell represents the release in which the actor can begin performing a particular use case. This view of the information also helped us in planning rollout and user training schedules. A table of this information could look something like this:
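
A matrix like the one described is easy to generate from (use case, actor, release) triples exported from a requirements tool. The following sketch uses hypothetical data – actor and use case names are made up for illustration:

```python
# Hypothetical schedule data: (use case, actor, release) triples.
schedule = [
    ("Create order", "Regional sales", "Q1"),
    ("Approve order", "Regional sales", "Q2"),
    ("Approve order", "Corporate accounting", "Q3"),
    ("Run revenue report", "Corporate accounting", "Q3"),
]

def schedule_matrix(rows):
    """Pivot the triples into a text matrix: one row per use case,
    one column per actor, each cell the release (or '-' if never)."""
    use_cases = sorted({uc for uc, _, _ in rows})
    actors = sorted({a for _, a, _ in rows})
    cell = {(uc, a): rel for uc, a, rel in rows}
    width = max(len(uc) for uc in use_cases)
    lines = [" " * width + "  " + "  ".join(actors)]
    for uc in use_cases:
        cells = [cell.get((uc, a), "-").ljust(len(a)) for a in actors]
        lines.append(uc.ljust(width) + "  " + "  ".join(cells))
    return "\n".join(lines)
```

Regenerating the report from live data each time the project is reprioritized keeps the published schedule honest without any manual table maintenance.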

With the schedule, we were able to easily reconcile interdependence of use cases in particular workflows (for example, creating an order is valueless without the ability to approve an order). We were able to plan the rollout to users (corporate accounting would not use the system before Q3). Our development managers were able to view a high level schedule when planning deliveries and our consultants were able to plan for anticipated loading of the servers.

Best of all, we were able to set expectations and provide a roadmap for our stakeholders, who needed a simple way to grasp the very complex product roadmap for delivery. Our communication with executives was at a higher level – reviewing schedules at the goal level, but interacting with different subject matter experts and other mid-level managers often included this view, which was very helpful.

When creating packaged software for consumers or businesses, the same approach is invaluable for planning and can be used to generate excitement in the users. While developed for enterprise software delivery, it is easily applied to shrink-wrapped and hosted software deployments with multiple customers.

Use case series: Informal Use Case

The informal use case is the tool of the Agile Requirements Manager. It is a paragraph describing the user’s goals and steps. Also referred to as a basic use case.

Pros:

  • Easy to create – quick development, iteration, and collaboration. This enables a rapid approach to documenting use cases, and minimizes the cost of developing the use cases.
  • When done correctly, yields the most bang for the buck of any use case approach.

Cons:

  • Challenging to be rigorous – the short format makes it difficult to capture all the relevant information (and difficult to avoid capturing irrelevant information).
  • Lack of consistent structure – content can vary from use case to use case, since the format is free-form.
  • Capturing the right level of content for your team can be tricky.

Note that the paragraph format can also be replaced by a numbered series of steps – the key differentiator of this form relative to a formal use case is the lack of structured fields for “everything else” about a use case (preconditions, assumptions, etc).

An example of the informal use case format in the wild, in direct contrast to a formal format for the same use case.

[Update 2007/01/20: Download our free informal use case template today]

Rosenberg and Scott published a series of articles about incorporating use cases into their ICONIX software development process – the first article is Driving Design with Use Cases (free subscription required). They describe a “semi-formal” use case format, which sits between informal and formal. They also describe ICONIX as a process that lives in the space between RUP (Rational Unified Process) and XP (Extreme Programming). Their process is a UML-centric approach to system representation, which incorporates the use case information into a larger, structured framework.

The rest of the articles in the series are:

Driving Design: The Process Domain

Top Ten Use Case Mistakes

Successful Robustness Analysis

Sequence Diagrams, One Step at a Time

The goal in this agile approach is to be “just barely good enough.”

That does raise an interesting question – is good enough good enough? And how do we define it? There are several factors that weigh into making this decision.

  • Domain expertise of the current team – and are there any switch-hitters?
  • Amount of time the current team has spent working together (and how well they know each other).
  • Geographic and temporal displacement of team members (are we working through emails and document repositories, or are we scribbling on white-boards together?).
  • Language barriers, pedants, and mavens – the personalities on our team.

The bottom line is that it all comes down to communication. If brevity is inhibiting our ability to be unambiguous, we should use a semi-formal or formal format for our use cases. If project schedule requires, and our team enables rapid iteration, we should use informal structure for our use cases.
Quick links to posts in this series

Stay away from my users!

“You don’t need to talk to the users – I’m the user representative.”

“But talking to the users will help us make the software better!”

“Talk to the hand…”

When we hear this, we cringe. And thus begins a discussion on which the fate of our project depends…

We’ve dealt with user representatives who believed that they knew better than the users. We’ve dealt with people afraid to let consultants talk to the users, because the consultants might mis-set expectations and create bad will when the development team fails to deliver. We’ve dealt with over-protective information-hogs, who don’t want to telegraph their moves, for fear that they might lose control of the project, or lose credit for the project to someone else. How do we get past these barriers?

The problem is lack of trust

All of these archetypes have something in common – a lack of trust. In themselves, their organizations, and/or with us. This is where good consulting makes all the difference in the world. If we fail to get past the user representative, we will never deliver what the users truly need.

Occasionally, we run into the “not enough time” and “it’s too expensive” arguments against getting time with the users. In our post about why incremental delivery is good, we touch on the cost dynamics of developing software.  In our post, where bugs come from, we talk about how there are orders-of-magnitude benefits to avoiding mistakes while gathering requirements relative to discovering them after deploying software. When we can’t get input from the users we increase the likelihood of gathering the wrong requirements. And we won’t know that they are wrong until we get feedback from the users. The later this happens in the process, the more it costs to fix. When half of all enterprise software projects fail, the last thing we want to do is make it more likely that we’ll build the wrong stuff.

Building trust

Building trust is the key to getting around the roadblocks and getting to the users. Louis Rosenfeld wrote a great post last year – Why Should We Be Trusted?. Make sure to check out the comment thread too – especially the “four step plan” proposed by Jason on July 5th. He adds more detail in the thread, but in brief his four steps are

  • Be clear about what [we] do
  • Be critical of ourselves
  • Do it
  • Share

Dale Carnegie teaches that we can never win an argument. He also teaches that we can persuade. Depending on who we’re talking to, that may require logos, pathos, or ethos. We can present logical arguments or anecdotal data. We can apply our gravitas to be compelling.

Every conversation will be different, but always with the same goal – engender sufficient trust to get access to the users.

Use Case Series: Formal Use Case

This is the classic use case as described by someone who talks about Software Engineering. All of the training classes (other than Agile classes) that I’ve been to teach formal use case development as a component in a system of requirements management. Here’s an introductory article on how to read a formal use case.

Pros:

  • Detailed information about use cases, making it easy to estimate the cost of implementation.
  • Thorough coverage of the use cases is encouraged by the use of a template.
  • Easy for readers to absorb. The structure of the document makes scanning easy, and also helps targeted lookups when a reader needs a specific piece of information.
  • Consistency with other use cases is much easier to assure when using a template.

Cons:

  • Expensive to maintain. Mapping a “use case” to the template requires some effort. Since formal use cases contain more (explicit) information than other use cases, there is more content to document, validate and modify. More content equals more cost (of maintenance).

The formal, or classic use case, is a tool used to gather structured information about how users will use the software. Formal use cases are gathered in a template, which structures the information. While this structure can vary from company to company, or even project to project, there are a few common and critical sections to the structure. The two main benefits of having the structure are that it helps with thoroughness (much harder to leave a field blank than it is to forget about it), and it helps with scanning by readers of the document.

Some common elements in a use case template:

  • Actor – The person or people who will perform the steps of this use case.
  • Preconditions – A description of the relevant and non-trivial state(s) of the system prior to the use case starting.
  • Normal course – A description of the use case itself. This description can either be in narrative form, or a numbered list (1..N) of specific user steps. When a use case (such as “User approves/rejects customer requests”) has more than one way that a user can accomplish the needed steps, the most common way is shown here – only a single path is shown.
  • Alternate courses – Descriptions of alternatives to, or deviations from the normal course. For example, the most common course might be to view the oldest unaddressed customer requests. An alternate course may be to view the unaddressed requests from the largest customers.
  • Exception courses – Descriptions of what the user will experience when something goes wrong.
  • Post-conditions – Description of the affected portions of the state of the system after the use case has completed.
  • Frequency of use – An estimate of how often a particular use case will be exercised.
  • Assumptions – Any assumptions that are implicit in the definition of the use case.
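
To make the template concrete, the common elements above can be captured in a small data structure. This is one possible sketch – field names follow the list above, but any real template will vary by company or project:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FormalUseCase:
    """One possible structure for the template fields listed above."""
    name: str
    actor: str
    preconditions: List[str] = field(default_factory=list)
    normal_course: List[str] = field(default_factory=list)   # numbered steps
    alternate_courses: List[str] = field(default_factory=list)
    exception_courses: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)
    frequency_of_use: str = ""
    assumptions: List[str] = field(default_factory=list)

    def missing_sections(self) -> List[str]:
        """Template discipline: flag required sections left empty.

        It is much harder to leave a field visibly blank than to
        forget about it entirely - this check makes blanks visible.
        """
        required = {
            "preconditions": self.preconditions,
            "normal_course": self.normal_course,
            "postconditions": self.postconditions,
        }
        return [name for name, value in required.items() if not value]
```

A structure like this also makes the “contract” reading of a formal use case (preconditions in, post-conditions out) directly checkable during review.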

The formal use case can be considered as a contract, in that if the preconditions are met prior to initiating the use case, the post-conditions will be met after its completion. This contract provides a great framework for defining the functional requirements required to support the use case. Rodolfo Ruiz posts a good approach and insights on pre and post-conditions in Some thoughts on use cases.

Quick links to posts in this series

Some references

Here are some examples of use case templates in use in the real world

From Process Impact.

From Alistair Cockburn.
Very detailed explanation from Alistair Cockburn on how to develop a formal use case.

A detailed document from the HL7 Working group on how to approach the process of writing formal use cases.

From Harry Nieboer.

Related blog post

Why use use cases? from the Carnegie Quality blog