Monthly Archives: January 2007

CMMI Levels and RMM Level 4 – Traced Requirements

Book 4 in a stack

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 4. We also look at the mapping from RMM level 4 to various CMMI levels.

CMMI to RMM Mapping


In the previous article in the series, we looked at how RMM level 3 (structured requirements) processes map to CMMI levels.

RMM Level 4 – Traced Requirements

RMM level 4 builds upon the previous three levels – structured, organized, and documented requirements. With an organization of structured requirements, we can overlay the notion of tracing between requirements.

Consider the structured requirements approach we adapted from Karl Wiegers.

structured requirements relationships

In this structured approach, there is a notion of dependence.

  • Goals depend upon use cases in order to be achieved.
  • Goals depend upon non-functional requirements to define their achievability.
  • Use cases depend upon functional requirements to enable their execution.
  • Use cases depend upon non-functional requirements to characterize their effectiveness.
  • Functional requirements depend upon designs, which depend upon implementation.

This structure of dependency represents the real-world reliance of one artifact on another. In an ongoing software development project, we can be making changes to any of these elements. Those changes can impact other elements.

As an example, we could change a use case. The goal that depends upon that use case might be affected. Our changes may affect the functional and non-functional requirements upon which the use case depends.

Traceability allows us to say “this use case relies on those requirements.” It represents relationships between specific artifacts. We can use traceability to reduce the effort (and errors) associated with propagating changes through the dependency network.

We can also use traceability to enable interesting aggregations of reporting information. For example, we could identify the percentage of completion of a given use case – by looking at the percentage completion of all implementation elements that support all design elements upon which the use case depends. Other analogous relationships can be created to meet other reporting objectives.
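
To make those two uses of traceability concrete, here is a minimal sketch (our illustration, not something from the IBM article) of trace links recorded as a simple dependency map, with a change-impact query and a completion rollup. All of the artifact IDs, links, and completion numbers are hypothetical; in practice these links would live in a requirements management tool rather than a script, but the reasoning is the same.

```python
# Minimal sketch: trace links recorded as artifact -> artifacts it depends upon.
# All names, links, and completion numbers are hypothetical.
trace_links = {
    "UC-1 Collect Late Payments": ["FR-1", "FR-2"],  # use case -> functional reqs
    "FR-1": ["D-1"],                                 # functional reqs -> designs
    "FR-2": ["D-2"],
    "D-1": ["IMPL-1", "IMPL-2"],                     # designs -> implementation
    "D-2": ["IMPL-3"],
}

# Completion is tracked only on implementation elements (0.0 to 1.0).
completion = {"IMPL-1": 1.0, "IMPL-2": 0.5, "IMPL-3": 0.0}


def leaf_elements(artifact):
    """Follow trace links down to the implementation elements an artifact relies on."""
    children = trace_links.get(artifact)
    if not children:  # no outgoing links: treat as an implementation element
        return [artifact]
    leaves = []
    for child in children:
        leaves.extend(leaf_elements(child))
    return leaves


def percent_complete(artifact):
    """Roll implementation completion up to any artifact in the trace network."""
    leaves = leaf_elements(artifact)
    return 100.0 * sum(completion.get(leaf, 0.0) for leaf in leaves) / len(leaves)


def impacted_by(changed):
    """Walk trace links 'upward' to find every artifact that depends on a changed one."""
    impacted = {parent for parent, deps in trace_links.items() if changed in deps}
    for parent in list(impacted):
        impacted |= impacted_by(parent)
    return impacted


print(percent_complete("UC-1 Collect Late Payments"))  # 50.0
print(impacted_by("D-2"))  # {'FR-2', 'UC-1 Collect Late Payments'}
```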

We can also use traceability to validate completeness (IBM uses the word “coverage” in their article) of our specification. We can review a goal, and ask the question: “Are all of the use cases required to achieve this goal defined?” We can also validate in the other direction: “Are all of these use cases required to achieve that goal?” We covered this specific example in our article, Completeness Validation With Use Cases. This also applies to the completeness validation of other artifacts in the requirements hierarchy.
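
Here is a similarly minimal sketch of coverage validation over goal-to-use-case trace links. A trace check like this can only flag the obvious gaps – a goal with no use cases traced to it, or a use case that supports no goal – while judging whether the traced use cases are actually sufficient still requires human review. The names are hypothetical.

```python
# Minimal sketch of coverage validation over trace links.
# The goal and use case names are hypothetical.
goals = {"G-1 Reduce collection costs", "G-2 Improve cash flow"}

# Each use case is traced to the goals it helps achieve.
use_case_traces = {
    "UC-1 Collect Late Payments": {"G-1 Reduce collection costs"},
    "UC-2 Forecast Receivables": set(),  # traced to no goal: suspect scope
}

# "Are the use cases required to achieve this goal defined?" -- flag goals
# with no supporting use cases at all.
uncovered_goals = {
    goal for goal in goals
    if not any(goal in traced for traced in use_case_traces.values())
}

# "Are all of these use cases required to achieve a goal?" -- flag use cases
# that trace to nothing.
unjustified_use_cases = {
    use_case for use_case, traced in use_case_traces.items() if not traced
}

print(uncovered_goals)        # {'G-2 Improve cash flow'}
print(unjustified_use_cases)  # {'UC-2 Forecast Receivables'}
```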

Mapping CMMI Levels to RMM Level 4

In our diagram, we show the following mappings for RMM level 4:

  • CMMI level 0 – No Entry
  • CMMI level 1 – No Entry
  • CMMI level 2 – Requirements Should Be Traced
  • CMMI level 3 – Requirements Should Be Traced
  • CMMI level 4 – Requirements Must Be Traced
  • CMMI level 5 – Requirements Must Be Traced

For CMMI Level 0 and CMMI Level 1 – when our process is unmanaged and unstructured, traceability does not provide value – it creates confusion.

For CMMI Level 2 and CMMI Level 3 – A valuable process must include organization of documented requirements. Those documents should also be structured and traced.

For CMMI Level 4 and CMMI Level 5 – Being able to quantify the performance of our process, and improve our process based on that feedback, both require an element of instrumentation and insight into our techniques and tools. Attempting to do that meaningfully without additional structure and traceability will provide limited benefit.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 4” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we don’t address traceability. We would focus on reaching CMMI level 2 before reaching RMM level 4.

At CMMI level 2 and CMMI level 3, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process. We also suggest that an RMM level of at least 3, and ideally 4 be adopted.

At CMMI levels 4 and 5 we are measuring and improving on our process. We require traceability as a key component to our quantified analysis and instrumentation.

Summary

  • RMM level 4 specifies that requirements documents are organized, structured, and traceable.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements, and it should involve structure and traceability as components that simplify that management.
  • A process must be at RMM level 4 before it can reach CMMI level 4.

Check out the next article, CMMI Levels and RMM Level 5 or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and RMM Level 3 – Structured Requirements

3rd book in a stack of 5 books

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 3. We also question the language used and reinterpret some of what IBM suggests. Finally, we look at the mapping from RMM level 3 to various CMMI levels.

CMMI to RMM Mapping


In the previous article in the series, we looked at how RMM level 2 (organized requirements documents) processes map to CMMI levels.

RMM Level 3 – Structured Requirements

RMM level 1 requires us to document our requirements. RMM level 2 requires us to organize that documentation and use consistent formatting for the documents. RMM level 3 introduces the concept of using structured requirements, as well as the idea of requirements having attributes. The first notion relates to the relationships between requirements, and the second is a way of applying structure to the requirements so that we can reason about them more effectively.

Structured Requirements

The first thing the IBM team identifies is the need to distinguish different types of requirements. Avoid all of the naming bugaboos, and consider the notion of identifying different structures of artifacts in the software development process. We have a series of elements of information that we need to understand and articulate in order to travel from an identified market need to a delivered software product.

There are many different approaches to documenting requirements. We all struggle to agree on particular naming conventions. We use different requirements documents to represent different parts of the flow.

In Alphabet Soup – Requirements Documents, we use the following diagram to try and summarize the stages of decomposition.

requirements continuum

This is the first level of decomposition – requirements (MRD or PRD) versus specification (SRS or FRS).

There’s another level of detail in the structuring of requirements, built on the work that Karl Wiegers has done. It looks at the artifacts in more detail – and I believe it is what the IBM team had in mind when they defined RMM level 3. Here’s the version of the diagram that we developed in our article on non-functional requirements, and then referenced as part of our introduction to structured requirements. You can read more about this approach in those articles.

structured requirements framework

We’ve also done some exploration of how to marry interaction design with structured requirements. The approach of starting with a user-centric perspective has a lot of benefits, and we believe there is a way to combine those benefits with the benefits inherent in a structured approach to requirements documentation. Here’s the diagram we created that shows how we adapt our structured approach to an interaction design context.

interaction design and structured requirements framework

Regardless of the approach you take, the element that is relevant to having an RMM level 3 requirements process is the notion that different documents represent different types of requirements/constraints/designs.

Requirement Structures

The IBM team also talks about having attributes as part of requirements. Unfortunately, this is a little bit of the “if you have a hammer, everything looks like a nail” syndrome. What their suggestion implies is:

  1. You have a notion of objects, and you use objects to represent requirements artifacts.
  2. You apply the concept of attributes to structure the elements of information within those artifacts.
  3. You have some means (human or machine) to reason about those attributes in a way that provides distinct value relative to reasoning about the artifacts.

Their approach is unfortunate, if only because it appears presumptive, and perhaps biased. I believe we can restructure their language into something design-agnostic that achieves the same objectives.

We propose that there are two relevant benefits that could be addressed with the attributes-approach they suggest:

  1. Being able to manage and reason about the meta-data of a requirement artifact has value. Meta-data are pieces of data that describe the data. For example, who is the author of the document? When was it last edited? What is its priority? To which other requirements is it related? Being able to track, edit, and view this information allows us to make decisions about how and when to use the document. It helps us plan activities and investments that are looking at the process as a whole – combining information about all of the requirements to make high level decisions.
  2. Structuring information within a requirement artifact has value. Artifacts can be free-form text. That text can be organized into sections and lists and tables. That type of organization is helpful to humans who read it. It allows us to organize our content so that it is easier to read the requirements. Most good business writing has these elements – where the organization of information is suited to the content and its intended use. To be at RMM level 3, we must also be at RMM level 2, which requires consistent formatting. Combining that consistency with structure makes it easier for people to read (and saves time when writing) requirements. There is also benefit to using a structure that can be read by machines as well as humans. When information has structure, it introduces the possibility of machine-reasoning, just as it improves human-consumption. While machine-reasoning about elements of requirements documents is not a criterion of achieving RMM level 3, the IBM article implies that this benefit exists. And it does exist. Without going off on a tangent, we can at least easily envision the generation of a report based upon the status of all requirements scheduled for a given release. This report can be created without formally structuring the information, but it is easier to create when we can reason about the structure of the information.

I think this is exactly what IBM intended, and they just used an unfortunately symbolic word – attributes. The same criticism has been applied to much of our writing about the way we use the word requirement.
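
To illustrate the meta-data benefit described above, here is a minimal sketch of requirement artifacts carrying attributes, and a release status report generated by reasoning about those attributes. The fields, names, and statuses are hypothetical – they are our illustration, not something prescribed by the IBM article.

```python
# Minimal sketch: requirement artifacts with meta-data attributes, and a
# release status report generated by reasoning about those attributes.
# The fields, names, and statuses are hypothetical.
from dataclasses import dataclass


@dataclass
class Requirement:
    identifier: str
    title: str
    author: str          # meta-data: who wrote it
    last_edited: str     # meta-data: when it last changed
    priority: str        # meta-data: relative importance
    release: str         # meta-data: scheduled release
    status: str          # e.g. "draft", "approved", "implemented"


requirements = [
    Requirement("FR-1", "Validate ordered items", "pat", "2007-01-10", "high", "1.0", "approved"),
    Requirement("FR-2", "Calculate late fees", "sam", "2007-01-15", "medium", "1.0", "draft"),
    Requirement("FR-3", "Export audit log", "pat", "2007-01-20", "low", "1.1", "draft"),
]


def release_status_report(reqs, release):
    """Summarize the status of every requirement scheduled for a release."""
    scheduled = [r for r in reqs if r.release == release]
    by_status = {}
    for r in scheduled:
        by_status.setdefault(r.status, []).append(r.identifier)
    return by_status


print(release_status_report(requirements, "1.0"))
# {'approved': ['FR-1'], 'draft': ['FR-2']}
```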

Mapping CMMI Levels to RMM Level 3

In our diagram, we show the following mappings for RMM level 3:

  • CMMI level 0 – No Entry
  • CMMI level 1 – Requirements Should Be Structured
  • CMMI level 2 – Requirements Should Be Structured
  • CMMI level 3 – Requirements Should Be Structured
  • CMMI level 4 – Requirements Must Be Structured
  • CMMI level 5 – Requirements Must Be Structured

For CMMI Level 0 – when our process is so ad-hoc that documentation of requirements is questionable, discussions about how we organize and structure the requirements documents are irrelevant.

For CMMI Level 1 through CMMI Level 3 – A valuable process must include documentation of requirements. Those documents really should be organized and structured. Structure is essentially organization at the next level of detail, and it is worth doing.

For CMMI Level 4 and CMMI Level 5 – Being able to quantify the performance of our process, and improve our process based on that feedback both require an element of instrumentation and insight into our techniques and tools. Attempting to do that meaningfully without additional structure will provide limited benefit.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 3” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2 and CMMI level 3, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process. We also suggest that an RMM level of at least 3, and ideally 4 be adopted.

At CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 3 specifies that requirements documents are organized, and structured.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements.
  • A process must be at RMM level 2 and should be at level 3 or 4 to be at CMMI level 2 or CMMI level 3.
  • A process should be at RMM level 3 if it is at CMMI level 1.
  • A process should be at RMM level 4 if it is at CMMI level 2.

Note that this implies that we would spend the extra effort to get to CMMI 3 before we would try and reach RMM level 5.

Check out the next article, CMMI Levels and RMM Level 4 or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and RMM Level 2 – Organized Requirements

Second in a stack of five books

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 2. We also cover the tradeoffs and benefits of the practices it requires. Finally, we look at the mapping from RMM level 2 to various CMMI levels.

CMMI to RMM Mapping


In the previous article in the series, we looked at how RMM level 1 (written requirements) processes map to CMMI levels.

RMM Level 2 – Organized Requirements

RMM level 1 requires us to document our requirements, but doesn’t talk about how we document them. We can use emails, and store information in databases, spreadsheets, and screenshots – but without an over-arching organization. At RMM level 2, we have to organize our requirements documents, we have to use consistent formatting, and we have to deal with administrative issues like security and version control.

The Case For Organizing Requirements

There are two main drivers for organizing our requirements:

  • The need to consume those requirements.
  • The need to change those requirements over time.

Consuming Requirements

Requirements are written not just to organize our thoughts, but to provide direction for the team. They are a form of targeted communication. They set the scope for software delivery, provide guidance in making prioritization decisions, and provide insight into what we will deliver – helping manage the expectations of our customers.

Organizing our requirements makes it easier to consume them. When we ask people to review our requirements, they will have more confidence, and experience less frustration, if they are consistently looking for documents in the same location.

This location could be a document repository, a shared drive on the network, a website, a portal site, or even a file cabinet (?!). The point is that the documents are always in the same place. As the quantity of our requirements grows, they should also be organized in a logical way within that location. At RMM level 2, any organization is valid – as long as it is consistent, it will provide value.

Changing Requirements Over Time

While documenting the requirements provides benefit, leaving that documentation disorganized comes at a cost. Requirements change over time. Our requirements documentation should change over time as well.

The biggest complaint with waterfall projects is that our understanding of the requirements does not change over time. Requirements are a moving target. With a waterfall project, we define a set of requirements, and then kick off the project – sort of a fire and forget model. Months or years later, we deliver a product – and it will probably match our documented requirements. While we were happily developing against the requirements we documented, the actual requirements have changed. There’s a very high risk that what we deliver will not meet the evolved needs.

  • The Standish Group reports that over 80% of projects are unsuccessful, either because they are over budget, late, missing function, or a combination. (http://www.standishgroup.com/sample_research/chaos_1994_1.php)
  • 53 percent of projects cost 189 percent of their original estimate.
  • Average schedule overruns may be as high as 100%.
  • Between 40% and 60% of defects in software are caused by poor requirements definition.
  • About one-quarter of all projects are cancelled.
  • Customers do not use 20% of their product features.

Why We Should Invest in Requirements Management

While poorly documented requirements are certainly a factor in the statistics above, another factor is no-longer-relevant requirements. These likely play out as features that are missing, features that are not used, and project cancellation (due to lack of relevance, or lack of ROI).

When we don’t organize our requirements, changing them becomes more expensive – we have to find them, modify them, and notify people that they’ve been changed. It also becomes difficult to know whether a given document is the latest version. Organization addresses these problems.

Why Avoid Organization?

cluttered inbox

Organization does come at a cost. We have to spend the time (and possibly money) to set up the repository. We have to spend time to determine how we want to organize our requirements. And we have to spend some time putting the documents into our organized repository.

We identified the benefits of getting data from an organized location. What if we don’t do that very often? If our requirements approvers never have to review the documents, then they don’t benefit from the effort we spent organizing the documents. Perhaps we just have a meeting where we get verbal approvals, or route them all with an email (Microsoft Outlook lets you put voting buttons on emails) for approval.

If we’re using a waterfall style project where we document the requirements once, and never change them, then each person on the implementation team can just print out a copy and refer to it when they need it. Again, no benefit from organization.

We all recognize the costs of both of these approaches, but they do avoid a little bit of busy work. It’s possible, however unwise, that some teams will take this approach, and thereby not benefit from organization. Those teams might operate best at RMM level 1.

The Case For Consistently Formatting Requirements

By using consistent formatting, we make it easier for someone to read multiple requirements. They can more easily compare and contrast the documented requirements. They don’t have to spend cycles re-learning how to read each requirements document. Once they become familiar with the format, they can ignore it, and spend time on the content of the document.

When we talk about consistent requirements, we are generally talking about the logical consistency of the statements within and across requirements – but the consistency of formatting also has value. This formatting consistency is what RMM level 2 requires.

Avoiding Consistency

We save some time in training by not requiring people to write consistently. However, the time we save is probably completely absorbed by the time people spend thinking about how to structure the requirements while writing them. And we lose the benefits that come from reading requirements that use a consistent format.

Requirements Administration

The final element identified in RMM level 2 is the administrative perspective. A focus on security, access, and version control is what the IBM team identifies as the relevant administrative issues.

Security and access are identified as elements that engender trust in the documentation. We may be too agile or too trusting, but we don’t see those factors as being particularly relevant to trust. They are certainly valuable when it comes to protecting against unwanted distribution of the information – but we are not generally concerned with people modifying the documents in unacceptable ways. We’ll grudgingly admit that it is possible that a developer will open a requirements document and delete a requirement that he feels is inappropriate, or rewrite it so that it matches his implementation. We just don’t think that it is a practical concern.

Version control, however, is very important. The biggest trust issue we have is in being able to trust that we are reviewing the latest version of a requirements document. Version control provides us with that benefit. It also allows us to undo any untoward modifications of the document. At a minimum, version control should consist of the persistence of previous versions of files. This can be handled by using unique names for each version of the file, by storing copies of the file on a regular basis as backups, or by using a version control system.
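
As a minimal sketch of the “unique names for each version” fallback – not a substitute for a real version control system – here is how previous versions could be preserved with timestamped copies. The paths are hypothetical.

```python
# Minimal sketch of the "unique names for each version" fallback described
# above -- not a substitute for a real version control system.
# The paths are hypothetical.
import shutil
from datetime import datetime
from pathlib import Path


def archive_version(document: Path, archive_dir: Path) -> Path:
    """Copy the current document into an archive under a timestamped name,
    so every previous version stays retrievable."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    versioned = archive_dir / f"{document.stem}-{stamp}{document.suffix}"
    shutil.copy2(document, versioned)
    return versioned


# Usage (hypothetical paths): run before each round of edits.
# archive_version(Path("requirements/srs.doc"), Path("requirements/archive"))
```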

Subversion is the best version control system (VCS) we know of. If implementing a new VCS, we suggest using Subversion. It is open source, easy to administer, and best-of-breed.

Mapping CMMI Levels to RMM Level 2

In our diagram, we show the following mappings for RMM Level 2:

  • CMMI level 0 – No Entry
  • CMMI level 1 – Requirements Should Be Organized
  • CMMI level 2 – Requirements Must Be Organized
  • CMMI level 3 – Requirements Must Be Organized
  • CMMI level 4 – Requirements Must Be Organized
  • CMMI level 5 – Requirements Must Be Organized

For CMMI Level 0 – when our process is so ad-hoc that documentation of requirements is questionable, discussions about how we organize the requirements documents are irrelevant. We’re talking about icing and candles when we don’t even know if we have a cake.

For CMMI Level 1 – A valuable process must include documentation of requirements. Those documents really should be organized. The benefits of versioning alone should make this an easy decision. Placing the documents in known locations, and having them be written in a consistent format is valuable too.

For CMMI Level 2 and higher – When we talk about a managed process, we are talking about bringing order to the chaos. Centralizing the requirements in a repository, versioning the documents, and using consistent formatting all bring order.

Imagine a managed requirements process that does everything with the exception of applying consistent formatting to our documents. Perhaps we have various authors of our requirements documents, and they write inconsistently. There’s value in doing all of this, but it would be CMMI level 2, RMM level 1. Only with all three elements (consistent location, consistent formatting, and versioned documents) would the process be both CMMI level 2 and RMM level 2.

We would definitely focus on moving from RMM level 1 to RMM level 2 before we would try and standardize our process across our company. That standardization would be the move from CMMI level 2 to CMMI level 3. Based on that perspective, we believe that an RMM level 2 process rating is a mandatory element of all CMMI levels above CMMI level 1.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 2” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process.

At CMMI level 3, we are standardizing our approach across our company. And at CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 2 specifies that requirements documents are organized.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements.
  • A process must be at RMM level 2 to be at CMMI level 2.
  • A process should be at RMM level 3 if it is at CMMI level 1.

Note that this implies that we would spend the extra effort to get to CMMI 3 before we would try and reach RMM level 5.

Check out the next article, CMMI Levels and RMM Level 3 or take our One Minute Survey on CMMI and RMM Levels.

Flashback: A Year Ago This Week on Tyner Blain [2006-01-27]

mirror

From MRD to PRD – The Key To Defining A Spec

The key to writing a great spec is knowing how to specify software that meets our customers’ needs.
It can be a daunting task. First, we have to define what our customer needs. High level requirements are just requirements that are too vague or high-level to be directly actionable. “We must reduce our cost […]

Top 10 Use Case Mistakes

The top ten use case mistakes
We’re reiterating the top five use case mistakes from Top five use case blunders and adding five more. For details on the first five, go back to that post.
There’s also a poll at the end of this post – vote for the worst mistake.

  • Inconsistency.
  • Incorrectness.
  • Wrong priorities.
  • Implementation cues.
  • Broken traceability.
  • Unanticipated error […]

Top 5 Ways To Be A Better Listener

Great listening skills yield great requirements
The better you are at listening, the more people will want to tell you.
If you’ve ever watched The Actor’s Studio, you’ve heard over and over that the most important skill in acting is listening. A marriage counselor will tell you that step one in solving your problems is to […]

CMMI Levels and RMM Level 1 – Written Requirements

First book in stack

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 1. We also cover the tradeoffs and benefits of the practices it requires. Finally, we look at the mapping from RMM level 1 to various CMMI levels.

CMMI to RMM Mapping

RMM Level 1 – Written Requirements

Level 1 of the requirements maturity model is defined at a high level as simply having written requirements. IBM defines written requirements as persistent documentation. They point out that post-it notes and whiteboards don’t count. Email discussions, Word documents, spreadsheets, and presentations all count.

IBM presents an argument of tradeoffs – as long as the cost of documenting requirements is exceeded by the benefits, it makes sense to write the requirements. They point out three benefits of having written requirements:

  1. A contract is explicit or implicit in the requirements. The documented requirements can be used to manage the customer’s expectations, and can also be used to validate that what was promised was delivered.
  2. Clear direction for the implementation team can be found in the requirements documents.
  3. New team members can rely on the documented requirements as a means to get up to speed.

While we strongly agree with the first two points, we think the third one is a bit of a stretch. While having requirements documentation does help people get up to speed, it isn’t a first-order benefit. Videotape a couple of one-hour presentations: one discussing the goals of the project, and one discussing (and whiteboarding) the architectural approach of the solution. Put these on the server and let new people watch them – much more cost-effective at helping people get up to speed. [Note – I’m pretty sure that I heard Alistair Cockburn suggest this approach or something like it in a podcast interview, so the credit for the idea is his, not ours.]

We would also add that documenting requirements is all but worthless if we don’t use the documents as tools to support conversation with our team members. Incremental delivery is a process that is dependent upon feedback. We must get feedback from stakeholders, and from the implementation team.

Stakeholders will verify the correctness and validate the completeness of the requirements.

The implementation team will provide feedback about the clarity, verifiability, and feasibility of the requirements as written.
Requirements need to be written to support verification. The QA team and stakeholders are responsible for verifying that what was delivered is what was expected. Technically, the delivery must match the requirements – but the requirements should match the expectations of the customer.

One Step Above Chaos

I like that the IBM guys name level zero as “Chaos.” I’ve worked as a developer on projects without requirements. It is chaos. There’s a reason we write requirements. They set expectations. And there’s a reason why we review and approve the requirements. It’s essentially a form of structured active listening.

Mapping to the CMMI Levels

In our diagram, we show the following mappings for RMM Level 1:

  • CMMI level 0 – Requirements Should Be Written
  • CMMI level 1 – Requirements Must Be Written
  • CMMI level 2 – Requirements Must Be Written
  • CMMI level 3 – Requirements Must Be Written
  • CMMI level 4 – Requirements Must Be Written
  • CMMI level 5 – Requirements Must Be Written

For CMMI level 0 – even if we don’t have a formal process, we really should be writing our requirements – and using those documents to manage expectations, provide feedback (that we’re doing the right stuff), and scope and focus our efforts.

For CMMI levels 1 and higher – all of the measured CMMI levels require that we have a defined process. Even with the disorganization of a team operating at CMMI level 1, we still need to have a process defined. And a requirements management process that doesn’t involve documenting the requirements isn’t worth very much at all.

Note that documentation might be in the form of prototypes, wireframes, and JAD Session notes. No one is saying that they have to be documented in any particular way. In fact, at RMM level 1, they aren’t in a consistent format, and don’t use a structured requirements framework. Consistent formatting is an element of RMM level 2. And RMM level 3 is focused on structured requirements.

The requirements documents may be scattered through a series of email debates, collaborative databases, and files on network share drives. That’s fine for RMM level 1 – in fact, it is part of the definition of RMM level 1. Organized requirements are a characteristic of RMM level 2.

Remember – CMMI Levels only represent how a process is implemented – they don’t characterize the effectiveness of any one process.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 1” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process.

At CMMI level 3, we are standardizing our approach across our company. And at CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 1 specifies that requirements are documented.
  • CMMI level 1 specifies that there is a process – in our case, one for managing requirements.
  • A process must be at RMM level 1 to be at CMMI level 1.
  • A process should be at RMM level 2 or 3 if it is at CMMI level 1.

Note that this implies that we would spend the extra effort to get to CMMI 2 before we would try and reach RMM level 4.

Check out the next article, CMMI Levels and RMM Level 2 or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and Requirements Management Maturity Introduction

Five Levels

Welcome Readers of the Carnival of Enterprise Architecture! We hope you enjoy this series of articles!

CMMI (Capability Maturity Model Integration) is a description of the level of enlightenment of a process. It is essentially a measure of the quality and capability of a process. There are five categories, into one of which every process will fall. IBM took a similar approach to defining the requirements management process. In this series of posts, we will marry the two frameworks.

Background on CMMI Levels

We wrote an introduction to CMMI levels last March. In our article, we identified that there are five CMMI levels. Technically, there are six CMMI levels, when you include level zero. Level 0 is “undefined” by the CMMI, and represents an ad hoc process, or a lack of process.

CMMI Levels

  • CMMI Level 0. Undefined. No real process.
  • CMMI Level 1. Performed. A process is defined, but disorganized.
  • CMMI Level 2. Managed. A defined process is managed.
  • CMMI Level 3. Defined. A managed process is standardized across the company.
  • CMMI Level 4. Quantitatively Measured. The performance of a standardized process is measured.
  • CMMI Level 5. Optimizing. Performance measurement is used as a feedback loop to improve the process.

Take CMMI Levels With A Grain of Salt

Salt Shaker

Just knowing the CMMI Level of a process is not enough to know if the process is any good. By the same token, choosing a particular CMMI level, and meeting the technical requirements of that level are not enough to assure a good process.

Background on RMM Levels

The folks at IBM wrote an article in 2003, where they defined five levels of maturity for requirements management processes. Each requirements management maturity (RMM) level builds on the previous level, with increasing capability.

  • RMM Level 0. Chaos. No persistent documentation of requirements.
  • RMM Level 1. Written Requirements. Writing requirements documents (not emails and whiteboards).
  • RMM Level 2. Organized Requirements. Colocation, versioning, consistent formatting.
  • RMM Level 3. Structured Requirements. Defining types of requirements and their relationships.
  • RMM Level 4. Traced Requirements. Explicitly mapping the support-network of requirements.
  • RMM Level 5. Integrated Requirements. Integrating with the development environment and change management.

What IBM Didn’t Do

They didn’t map their framework back into the CMMI framework (known as CMM at the time) except for the following comment in the introduction of their article:

Those familiar with the CMM (Capability Maturity Model) from the Software Engineering Institute (SEI) will note some similarities to our parallel model, which has no direct relationship to the CMM save one: Achieving Level Five of the RMM will assuredly help an organization get to at least Level Three of the CMM.

IBM put together a great framework for describing elements of increasingly capable requirements management processes.

That is what the SEI tried to do when they developed the CMMI. Why couldn’t the IBM team just map their framework into the CMMI framework?

The problem is there is a mismatch between the two frameworks.

  • The RMM framework describes steps and elements of a requirements management process. Each step adds a level of capability to the process. It might be more aptly named the requirements management capability framework.
  • The CMMI framework describes the strategic capabilities (maturity) of how a process is applied, without assessing the tactical capabilities of the process itself.

The SEI recognized that the analysis of the tactical capabilities of any process would be different for every process, and left it to others to perform that work. This is almost what the IBM team did. We’re going to take a crack at it here.

Mapping RMM Levels to CMMI Levels

This is the first in a series of articles that will present a mapping of RMM levels to CMMI levels. We like using CMMI as a means to evaluate our internal processes, notwithstanding the challenges we mentioned earlier. We also like the framework that IBM presented for describing requirements management processes.

Shoot First, Ask Questions Later

There’s a lot more to write about this than we can put into a single article. We’re going to tackle this as a series. Even so, we put together an initial draft of how we think this will ultimately work out. We’ll share that here now. But we reserve the right to fix it when we find problems as we (and you!) put more effort into it.

CMMI to RMM Mapping

Articles In This Series

Failing to Plan is Planning to Fail

basketball
From Bobby Knight, paraphrased by Mark Cuban, via Marcus Ting-A-Kee:

Everyone has got the will to win, it’s only those with the will to prepare, that do win.

Great Advice!

Great Advice Applied

  • Make Your Meetings 60% More Effective. A major component of more effective meetings is the up-front preparation. By planning the meeting in advance, we can make sure we know our goals and how we will meet them.
  • Crossing The Desert With Bad Project Planning. When teams over-work to meet an interim milestone, they risk burnout before the project is done. Better planning will prevent this burnout – and prevention is a lot easier than recovery.
  • Rolling-Wave Project Planning. With rolling-wave project planning, you maintain a detailed plan for a few weeks out from wherever you are, and a high-level plan beyond that point. The plan is revisited as each incremental delivery is completed.
  • Preparing For Requirements Gathering Interviews. Identify the unknowns and the questions and approaches that you’ll use in an interview to gather requirements. It is ok (even good) for the session to feel unscripted and free-flowing. But it better be flowing in the intended direction.

Differentiate Your Product – Circumvent Comparisons

pencils

Look Ma! Me Too! The temptation to compete against a checklist can be overwhelming. When we have a competitor who provides 100 of this or 200 of that, it might seem smart to offer 200 of this and 300 of that. We’ll be better off if we focus instead on creating the other thing. The best way to compete is to valuably differentiate our product, not outdo our competition.

“More is better” features are just that – more is better. But more of the same old thing is worth a whole lot less than some of something else.

Continue reading Differentiate Your Product – Circumvent Comparisons

How to Write Good Use Case Names – 7 Tips

seven

The first step in writing the use cases for a project is to define the scope of the project. One way to do that is to list the use case names that define all of the user goals that are in scope. To do that, you need to know how to write good use case names. Good use case names also serve as a great reference and provide context and understanding throughout the life of the project.

Goals of Use Case Naming

Use case names are also known as use case titles. When creating names, we have a set of goals:

  • Clearly indicate the user goal represented by the use case.
  • Avoid specifying the design of the system.
  • Make people want to read the use case, not dread reading it.
  • Allow for evolution of use cases across releases.
  • Define the scope of the project.
  • Write consistently.

Common Use Case Mistakes

We identified the top ten use case mistakes in a couple of articles about a year ago. They still hold true today:

From Top Five Use Case Blunders:

  • Inconsistency
  • Incorrectness
  • Wrong Priorities
  • Implementation Cues
  • Broken Traceability

From Top Ten Use Case Mistakes:

  • Unanticipated Error Conditions
  • Overlooking System Responses
  • Undefined Actors
  • Impractical Use Cases
  • Out of Scope Use Cases

Writing good use case names will help avoid errors in consistency, implementation cues, scope management, and traceability. They will also help us make people want to read the use cases. Think of the use case name as the headline of a magazine article – does it make you want to read it, or avoid it?

Good use case names also serve as reminders of what a particular use case does. Weeks after we’ve written a use case, a quick scan of the title will remind us of what the use case represents. On a large project with dozens of use cases, this is invaluable.

Tips For Writing Good Use Case Names

Here are the best practices we’ve adopted, and some we’ve collected from around the internet.

  • Good Use Case Names Reflect User Goals. A good use case name reflects the goal of the user (or external system). A name like “Process Invoices” doesn’t tell us what’s being done – is it collections, organization, auditing, or some other function? A more insightful name would be “Collect Late Payments From Customers.” The goal in this example is to collect payments from delinquent customers. The second name does a much better job of defining what the user is trying to do when they perform the use case.
  • Good Use Case Names are As Short As Possible. Some people suggest 5 words, or even two words. There are just too many examples that make setting specific word-count limits impractical. In the previous example, “Collect Late Payments From Customers,” which words would you remove without losing meaning? This name is as short as we can make it without losing clarity. This short name is better than “Collect Late Payments From Customers Who Are Past-Due.”
  • Good Use Case Names Use Meaningful Verbs. Usually people will suggest that we should prefer strong verbs to weak verbs. That is effective advice for general writing. For writing use cases, we can be more specific. A meaningless verb is one that, while indicating action, does not specify the action with enough detail. “Process the Order” can be improved with a more meaningful verb. “Validate the Ordered Items” makes it much more clear what the user is trying to achieve.
  • Good Use Case Names Use An Active Voice. A call to action is a hallmark of good writing. Using an active voice will inspire action more than a passive voice. “Calculate Profitability” is more inspiring than “Profitability is Calculated.”
  • Good Use Case Names Use The Present Tense. “Create New Account” is in the present tense. “New Account Was Created” is in the past tense. The present tense implies what the user is trying to do, not something that has already been done.
  • Good Use Case Names Don’t Identify The Actor. Some people prefer to name the actor in the use case, because it is more specific. We like the idea of using evolutionary use cases to manage the delivery of functionality across releases. When we do this, we are often releasing the first version of the use case for one actor, and the next version for another actor. For example, “Rank Employee Performance” might be our use case. In the first release, we want to enable the functionality for supervisors – who can rank their direct employees. In the second release, we want to add the ability for managers to rank the employees that report to multiple supervisors. We prefer having two versions of the same use case over having two use cases (Rank Direct/Indirect Employee Performance).
  • Good Use Case Names Are Consistent. We should always apply the same set of rules across all of our use case names. Inconsistent application of the names will create a sense of discord for our readers. Consistent names will make it more comfortable for readers, and provide a sense of cohesion for the overall project.

Other References

Summary

Good use case names take very little effort once we are used to writing them with a consistent style, following the tips listed above. Good names also provide benefits down the road when reviewing and reading the use cases. Good names are short, clear, and stylish. They also make us want to read the use cases, and easily jog our memories about the user’s goals.

Flashback: A Year Ago This Week on Tyner Blain [2006-01-20]

mirror

Brainstorming – Making Something Out of Everything

brainstorm

Here are some details about how to facilitate a general brainstorming session with a group of people in 5 easy steps (and then another 5 easy steps).

Prioritizing Requirements – Three Techniques

identical

The less we know about our client’s business, the more the requirements appear to be equivalent. We’ll talk about three different approaches to prioritizing requirements.

  1. Classical. Let stakeholders assign priority to the requirements.
  2. Exhaustive. Explore every nuance of prioritization and its application to requirements.
  3. Value-based. Let ROI drive the decisions. (hint: this is the best one – scroll down if you’re in a real hurry)
  4. [bonus]. A look at how 37signals prioritizes features for their products.

Foundation Series: Unit Testing Software

Unit Testing Class

Testing software is more than just manually banging around (also called monkey testing) and trying to break different parts of the software application. Unit testing is testing a subset of the functionality of a piece of software. A unit test is different from a system test in that it provides information only about a particular subset of the software.
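
For readers new to the idea, here is a minimal sketch of a unit test using Python’s unittest framework. The function under test is a made-up example, not code from the original article – the point is simply that each test exercises one small unit of functionality in isolation.

```python
# Minimal sketch of a unit test: it exercises one small unit of functionality
# in isolation. The function under test is a hypothetical example.
import unittest


def late_fee(balance: float, days_late: int) -> float:
    """Charge 1.5% of the balance once an invoice is more than 30 days late."""
    return round(balance * 0.015, 2) if days_late > 30 else 0.0


class LateFeeTest(unittest.TestCase):
    def test_no_fee_within_grace_period(self):
        self.assertEqual(late_fee(1000.0, 30), 0.0)

    def test_fee_applied_when_late(self):
        self.assertEqual(late_fee(1000.0, 45), 15.0)


if __name__ == "__main__":
    unittest.main()
```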