Category Archives: Process Improvement

Many articles at Tyner Blain focus on improving the software development process. These articles can address improvement of any aspect of the process, and often overlap with other categories in the site.

CMMI Levels and RMM Level 4 – Traced Requirements

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 4. We also look at the mapping from RMM level 4 to various CMMI levels.

CMMI to RMM Mapping

In the previous article in the series, we looked at how RMM level 3 – structured requirements – maps to CMMI levels.

RMM Level 4 – Traced Requirements

RMM level 4 builds upon the previous three levels – structured, organized, and documented requirements. With an organization of structured requirements, we can overlay the notion of tracing between requirements.

Consider the structured requirements approach we adapted from Karl Wiegers.

structured requirements relationships

In this structured approach, there is a notion of dependence.

  • Goals depend upon use cases in order to be achieved.
  • Goals depend upon non-functional requirements to define their achievability.
  • Use cases depend upon functional requirements to enable their execution.
  • Use cases depend upon non-functional requirements to characterize their effectiveness.
  • Functional requirements depend upon designs, which depend upon implementation.

This structure of dependency represents the real-world reliance of one artifact on another. In an ongoing software development project, we can be making changes to any of these elements. Those changes can impact other elements.

As an example, we could change a use case. The goal that depends upon that use case might be affected. Our changes may affect the functional and non-functional requirements upon which the use case depends.

Traceability allows us to say “this use case relies on those requirements.” It represents relationships between specific artifacts. We can use traceability to reduce the effort (and errors) associated with propagating changes through the dependency network.
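
To make this concrete, here is a minimal sketch (our illustration, not a prescribed tool) of a trace graph and the change-impact query it enables; the artifact names are hypothetical.

```python
# Each artifact lists the artifacts it depends upon.
traces = {
    "goal: faster checkout": ["use case: one-click purchase"],
    "use case: one-click purchase": [
        "req: store payment token",
        "req: respond in under 2 seconds",
    ],
    "req: store payment token": ["design: vault service"],
}

def impacted_by(artifact: str) -> set[str]:
    """Find everything that directly or transitively depends on
    a changed artifact by walking the trace links."""
    impacted = set()
    for dependent, dependencies in traces.items():
        if artifact in dependencies:
            impacted.add(dependent)
            impacted |= impacted_by(dependent)
    return impacted

# Changing the payment-token requirement flags the use case that
# relies on it, and the goal that relies on that use case.
print(impacted_by("req: store payment token"))
```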

We can also use traceability to enable interesting aggregations of reporting information. For example, we could identify the percentage of completion of a given use case – by looking at the percentage completion of all implementation elements that support all design elements upon which the use case depends. Other analogous relationships can be created to meet other reporting objectives.

We can also use traceability to validate completeness (IBM uses the word “coverage” in their article) of our specification. We can review a goal, and ask the question: “Are all of the use cases required to achieve this goal defined?” We can also validate in the other direction: “Are all of these use cases required to achieve that goal?” We covered this specific example in our article, Completeness Validation With Use Cases. This also applies to the completeness validation of other artifacts in the requirements hierarchy.
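
Continuing the sketch above (again with hypothetical names and made-up numbers), both the completion roll-up and the coverage check become simple queries over the trace data:

```python
# Completion of design elements, plus the trace links that tie
# use cases to designs and goals to use cases (all hypothetical).
design_completion = {"design: vault service": 1.0, "design: retry queue": 0.5}
use_case_designs = {
    "use case: one-click purchase": ["design: vault service", "design: retry queue"],
}
goal_use_cases = {"goal: faster checkout": []}  # nothing traced yet!

def use_case_completion(use_case: str) -> float:
    """A use case is as complete as the design elements it
    depends upon (a simple average, for illustration)."""
    designs = use_case_designs[use_case]
    return sum(design_completion[d] for d in designs) / len(designs)

def uncovered_goals() -> list[str]:
    """Coverage check: a goal that traces to no use cases cannot
    be achieved by anything we plan to build."""
    return [goal for goal, cases in goal_use_cases.items() if not cases]

print(use_case_completion("use case: one-click purchase"))  # 0.75
print(uncovered_goals())  # ['goal: faster checkout']
```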

Mapping CMMI Levels to RMM Level 4

In our diagram, we show the following mappings for RMM level 4:

  • CMMI level 0 – No Entry
  • CMMI level 1 – No Entry
  • CMMI level 2 – Requirements Should Be Traced
  • CMMI level 3 – Requirements Should Be Traced
  • CMMI level 4 – Requirements Must Be Traced
  • CMMI level 5 – Requirements Must Be Traced

For CMMI Level 0 and CMMI Level 1 – when our process is unmanaged and unstructured, traceability does not provide value – it creates confusion.

For CMMI Level 2 and CMMI Level 3 – A valuable process must include organized documentation of requirements. Those documents should also be structured and traced.

For CMMI Level 4 and CMMI Level 5 – Being able to quantify the performance of our process, and improve our process based on that feedback, both require an element of instrumentation and insight into our techniques and tools. Attempting to do that meaningfully without additional structure and traceability will provide limited benefit.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 4” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we don’t address traceability. We would focus on reaching CMMI level 2 before reaching RMM level 4.

At CMMI level 2 and CMMI level 3, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process. We also suggest that an RMM level of at least 3, and ideally 4 be adopted.

At CMMI levels 4 and 5 we are measuring and improving on our process. We require traceability as a key component to our quantified analysis and instrumentation.

Summary

  • RMM level 4 specifies that requirements documents are organized, structured, and traceable.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements, and it should involve structure and traceability as components that simplify that management.
  • A process must be at RMM level 4 before it can reach CMMI level 4.

Check out the next article, CMMI Levels and RMM Level 5, or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and RMM Level 3 – Structured Requirements

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 3. We also question the language used and reinterpret some of what IBM suggests. Finally, we look at the mapping from RMM level 3 to various CMMI levels.

CMMI to RMM Mapping

In the previous article in the series, we looked at how RMM level 2 – organized requirements documents – maps to CMMI levels.

RMM Level 3 – Structured Requirements

RMM level 1 requires us to document our requirements. RMM level 2 requires us to organize that documentation and use consistent formatting for the documents. RMM level 3 introduces the concept of using structured requirements, as well as the idea of requirements having attributes. The first notion relates to the relationships between requirements, and the second is a way of applying structure to the requirements so that we can reason about them more effectively.

Structured Requirements

The first thing that the IBM team identifies is the need to identify different types of requirements. Avoid all of the naming bugaboos, and consider the notion of identifying different structures of artifacts in the software development process. We have a series of elements of information that we need to understand and articulate in order to travel from an identified market need to a delivered software product.

There are many different approaches to documenting requirements. We all struggle to agree on particular naming conventions. We use different requirements documents to represent different parts of the flow.

In Alphabet Soup – Requirements Documents, we use the following diagram to try and summarize the stages of decomposition.

requirements continuum

This is the first level of decomposition – requirements (MRD or PRD) versus specification (SRS or FRS).

There’s another level of detail in the structuring of requirements, built on the work that Karl Wiegers has done. It looks at the artifacts in more detail – and I believe it is what the IBM team had in mind when they defined RMM level 3. Here’s the version of the diagram that we developed in our article on non-functional requirements, and then referenced as part of our introduction to structured requirements. You can read more about this approach in those articles.

structured requirements framework

We’ve also done some exploration of how to marry interaction design with structured requirements. The approach of starting with a user-centric perspective has a lot of benefits, and we believe there is a way to combine those benefits with the benefits inherent in a structured approach to requirements documentation. Here’s the diagram we created that shows how we adapt our structured approach to an interaction design context.

interaction design and structured requirements framework

Regardless of the approach you take, the element that is relevant to having an RMM level 3 requirements process is the notion that different documents represent different types of requirements/constraints/designs.

Requirement Structures

The IBM team also talks about having attributes as part of requirements. Unfortunately, this is a little bit of the “if you have a hammer, everything looks like a nail” syndrome. What their suggestion implies is:

  1. You have a notion of objects, and you use objects to represent requirements artifacts.
  2. You apply the concept of attributes to structure the elements of information within those artifacts.
  3. You have some means (human or machine) to reason about those attributes in a way that provides distinct value relative to reasoning about the artifacts.

Their approach is unfortunate, if only because it appears presumptive, and perhaps biased. I believe we can restructure their language into something design-agnostic that achieves the same objectives.

We propose that there are two relevant benefits that could be addressed with the attributes-approach they suggest:

  1. Being able to manage and reason about the meta-data of a requirement artifact has value. Meta-data are pieces of data that describe the data. For example, who is the author of the document? When was it last edited? What is its priority? To which other requirements is it related? Being able to track, edit, and view this information allows us to make decisions about how and when to use the document. It helps us plan activities and investments that are looking at the process as a whole – combining information about all of the requirements to make high level decisions.
  2. Structuring information within a requirement artifact has value. Artifacts can be free-form text. That text can be organized into sections and lists and tables. That type of organization is helpful to humans who read it. It allows us to organize our content so that it is easier to read the requirements. Most good business writing has these elements – where the organization of information is suited to the content and its intended use. To be at RMM level 3, we must also be at RMM level 2, which requires consistent formatting. Combining that consistency with structure makes it easier for people to read (and saves time when writing) requirements. There is also benefit to using a structure that can be read by machines as well as humans. When information has structure, it introduces the possibility of machine-reasoning, just as it improves human-consumption. While machine-reasoning about elements of requirements documents is not a criterion of achieving RMM level 3, the IBM article implies that this benefit exists. And it does exist. Without going off on a tangent, we can at least easily envision the generation of a report based upon the status of all requirements scheduled for a given release. This report can be created without formally structuring the information, but it is easier to create when we can reason about the structure of the information.
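
As a sketch of that second benefit (ours, with hypothetical attribute names), once artifacts carry structured attributes, the release-status report imagined above becomes a trivial query:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    # meta-data attributes that describe the artifact itself
    title: str
    author: str
    priority: str
    release: str
    status: str

requirements = [
    Requirement("Store payment token", "SB", "high", "1.0", "approved"),
    Requirement("Retry failed charges", "SB", "low", "1.1", "draft"),
    Requirement("One-click purchase", "SB", "high", "1.0", "in review"),
]

# Status of every requirement scheduled for a given release.
for r in requirements:
    if r.release == "1.0":
        print(f"{r.title}: {r.status}")
```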

I think this is exactly what IBM intended, and they just used an unfortunately symbolic word – “attributes.” The same criticism has been applied to much of our writing about the way we use the word requirement.

Mapping CMMI Levels to RMM Level 3

In our diagram, we show the following mappings for RMM level 3:

  • CMMI level 0 – No Entry
  • CMMI level 1 – Requirements Should Be Structured
  • CMMI level 2 – Requirements Should Be Structured
  • CMMI level 3 – Requirements Should Be Structured
  • CMMI level 4 – Requirements Must Be Structured
  • CMMI level 5 – Requirements Must Be Structured

For CMMI Level 0 – when our process is so ad-hoc that documentation of requirements is questionable, discussions about how we organize and structure the requirements documents are irrelevant.

For CMMI Level 1 through CMMI Level 3 – A valuable process must include documentation of requirements. Those documents really should be organized and structured. Structure is essentially organization at the next level of detail, and it is worth doing.

For CMMI Level 4 and CMMI Level 5 – Being able to quantify the performance of our process, and improve our process based on that feedback both require an element of instrumentation and insight into our techniques and tools. Attempting to do that meaningfully without additional structure will provide limited benefit.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 3” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2 and CMMI level 3, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process. We also suggest that an RMM level of at least 3, and ideally 4 be adopted.

At CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 3 specifies that requirements documents are organized and structured.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements.
  • A process must be at RMM level 2 and should be at level 3 or 4 to be at CMMI level 2 or CMMI level 3.
  • A process should be at RMM level 3 if it is at CMMI level 1.
  • A process should be at RMM level 4 if it is at CMMI level 2.

Note that this implies that we would spend the extra effort to get to CMMI 3 before we would try and reach RMM level 5.

Check out the next article, CMMI Levels and RMM Level 4, or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and RMM Level 2 – Organized Requirements

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 2. We also cover the tradeoffs and benefits of the practices it requires. Finally, we look at the mapping from RMM level 2 to various CMMI levels.

CMMI to RMM Mapping

In the previous article in the series, we looked at how RMM level 1 – written requirements – maps to CMMI levels.

RMM Level 2 – Organized Requirements

RMM level 1 requires us to document our requirements, but doesn’t talk about how we document them. We can use emails, store information in databases, spreadsheets, and screenshots – but without any over-arching organization. In RMM level 2, we have to organize our requirements documents, we have to use consistent formatting, and we have to deal with administrative issues like security and version control.

The Case For Organizing Requirements

There are two main drivers for organizing our requirements:

  • The need to consume those requirements.
  • The need to change those requirements over time.

Consuming Requirements

Requirements are written not just to organize our thoughts, but to provide direction for the team. They are a form of targeted communication. They set the scope for software delivery, provide guidance in making prioritization decisions, and provide insight into what we will deliver – helping manage the expectations of our customers.

Organizing our requirements makes it easier to consume them. When we ask people to review our requirements, they will have more confidence, and experience less frustration, if they are consistently looking for documents in the same location.

This location could be a document repository, a shared drive on the network, a website, a portal site, or even a file cabinet (?!). The point is that the documents are always in the same place. As the quantity of our requirements grows, they should also be organized in a logical way within that location. At RMM level 2, any organization is valid – as long as it is consistent, it will provide value.

Changing Requirements Over Time

While documenting the requirements provides benefit, disorganization comes at a cost. Requirements change over time. Our requirements documentation should change over time as well.

The biggest complaint with waterfall projects is that our understanding of the requirements does not change over time. Requirements are a moving target. With a waterfall project, we define a set of requirements, and then kick off the project – sort of a fire and forget model. Months or years later, we deliver a product – and it will probably match our documented requirements. While we were happily developing against the requirements we documented, the actual requirements have changed. There’s a very high risk that what we deliver will not meet the evolved needs.

  • The Standish Group reports that over 80% of projects are unsuccessful, either because they are over budget, late, missing function, or a combination. (http://www.standishgroup.com/sample_research/chaos_1994_1.php)
  • 53% of projects cost 189% of their original estimate.
  • Average schedule overruns may be as high as 100%.
  • 40% to 60% of defects in software are caused by poor requirements definition.
  • About one-quarter of all projects are cancelled.
  • Customers do not use 20% of their product features.

Why We Should Invest in Requirements Management

While poorly documented requirements are certainly a factor in the statistics above, another factor is no-longer-relevant requirements. These likely play out as features that are missing, features that are not used, and project cancellation (due to lack of relevance, or lack of ROI).

When we don’t organize our requirements, changing them becomes more expensive – we have to find them, modify them, and notify people that they’ve been changed. It also becomes difficult to know whether a given document is the latest version. Organization addresses this problem.

Why Avoid Organization?

Organization does come at a cost. We have to spend the time (and possibly money) to set up the repository. We have to spend time to determine how we want to organize our requirements. And we have to spend some time putting the documents into our organized repository.

We identified the benefits of getting data from an organized location. What if we don’t do that very often? If our requirements approvers never have to review the documents, then they don’t benefit from the effort we spent organizing the documents. Perhaps we just have a meeting where we get verbal approvals, or route them all with an email (Microsoft Outlook lets you put voting buttons on emails) for approval.

If we’re using a waterfall style project where we document the requirements once, and never change them, then each person on the implementation team can just print out a copy and refer to it when they need it. Again, no benefit from organization.

We all recognize the costs of both of these approaches, but they do avoid a little bit of busy work. It’s possible, however unwise, that some teams will take this approach, and thereby not benefit from organization. Those teams might operate best at RMM level 1.

The Case For Consistently Formatting Requirements

By using consistent formatting, we make it easier for someone to read multiple requirements. They can more easily compare and contrast the documented requirements. They don’t have to spend cycles re-learning how to read each requirements document. Once they become familiar with the format, they can ignore it, and spend time on the content of the document.

When we talk about consistent requirements, we are generally talking about the logical consistency of the statements within and across requirements – but the consistency of formatting also has value. This formatting consistency is what RMM level 2 requires.

Avoiding Consistency

We save some time in training by not requiring people to write consistently. However, the time we save is probably completely absorbed by the time people spend thinking about how to structure the requirements while writing them. And we lose the benefits that come from reading requirements that use a consistent format.

Requirements Administration

The final element identified in RMM level 2 is the administrative perspective. A focus on security, access, and version control is what the IBM team identifies as the relevant administrative issues.

Security and access are identified as elements that engender trust in the documentation. We may be too agile or too trusting, but we don’t see those factors as being particularly relevant to trust. They are certainly valuable when it comes to protecting against unwanted distribution of the information – but we are not generally concerned with people modifying the documents in unacceptable ways. We’ll grudgingly admit that it is possible that a developer will open a requirements document and delete a requirement that he feels is inappropriate, or rewrite it so that it matches his implementation. We just don’t think that it is a practical concern.

Version control, however, is very important. The biggest trust issue we have is in being able to trust that we are reviewing the latest version of a requirements document. Version control provides us with that benefit. It also allows us to undo any untoward modifications of the document. At a minimum, version control should consist of the persistence of previous versions of files. This can be handled by using unique names for each version of the file, by storing copies of the file on a regular basis as backups, or by using a version control system.
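
To make that minimum concrete, here is a small sketch (ours, not from the IBM article) of the “persist previous versions” floor; the function name and directory layout are illustrative:

```python
import shutil
import time
from pathlib import Path

def save_version(doc_path: str, archive_dir: str = "versions") -> Path:
    """Persist a timestamped copy of a requirements document
    before it changes, so no previous version is ever lost."""
    src = Path(doc_path)
    archive = Path(archive_dir)
    archive.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = archive / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    return dest
```

A real version control system does this and much more, with far less ceremony.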

Subversion is the best version control system (VCS) we know of. If implementing a new VCS, we suggest using Subversion. It is open source, easy to administer, and best-of-breed.

Mapping CMMI Levels to RMM Level 2

In our diagram, we show the following mappings for RMM Level 2:

  • CMMI level 0 – No Entry
  • CMMI level 1 – Requirements Should Be Organized
  • CMMI level 2 – Requirements Must Be Organized
  • CMMI level 3 – Requirements Must Be Organized
  • CMMI level 4 – Requirements Must Be Organized
  • CMMI level 5 – Requirements Must Be Organized

For CMMI Level 0 – when our process is so ad-hoc that documentation of requirements is questionable, discussions about how we organize the requirements documents are irrelevant. We’re talking about icing and candles when we don’t even know if we have a cake.

For CMMI Level 1 – A valuable process must include documentation of requirements. Those documents really should be organized. The benefits of versioning alone should make this an easy decision. Placing the documents in known locations, and having them be written in a consistent format is valuable too.

For CMMI Level 2 and higher – When we talk about a managed process, we are talking about bringing order to the chaos. Centralizing the requirements in a repository, versioning the documents, and using consistent formatting all bring order.

Imagine a managed requirements process that does everything with the exception of applying consistent formatting to our documents. Perhaps we have various authors of our requirements documents, and they write inconsistently. There’s value in doing all of this, but it would be CMMI level 2, RMM level 1. Only with all three elements (consistent location, consistent formatting, and versioned documents) would the process be both CMMI level 2 and RMM level 2.

We would definitely focus on moving from RMM level 1 to RMM level 2 before we would try and standardize our process across our company. That standardization would be the move from CMMI level 2 to CMMI level 3. Based on that perspective, we believe that an RMM level 2 process rating is a mandatory element of all CMMI levels above CMMI level 1.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 2” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process.

At CMMI level 3, we are standardizing our approach across our company. And at CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 2 specifies that requirements documents are organized.
  • CMMI level 2 specifies that there is a managed process – in our case, one for managing requirements.
  • A process must be at RMM level 2 to be at CMMI level 2.
  • A process should be at RMM level 3 if it is at CMMI level 1.

Note that this implies that we would spend the extra effort to get to CMMI 3 before we would try and reach RMM level 5.

Check out the next article, CMMI Levels and RMM Level 3, or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and RMM Level 1 – Written Requirements

Background

In our introduction to mapping RMM levels to CMMI levels, we presented background info on CMMI, introduced the IBM article on RMM levels, and posted an initial mapping structure. In this article, we will look at the definition of RMM level 1. We also cover the tradeoffs and benefits of the practices it requires. Finally, we look at the mapping from RMM level 1 to various CMMI levels.

CMMI to RMM Mapping

RMM Level 1 – Written Requirements

Level 1 of the requirements maturity model is defined at a high level as simply having written requirements. IBM defines written requirements as persistent documentation. They point out that post-it notes and whiteboards don’t count. Email discussions, Word documents, spreadsheets, and presentations all count.

IBM presents an argument of tradeoffs – as long as the cost of documenting requirements is exceeded by the benefits, it makes sense to write the requirements. They point out three benefits of having written requirements:

  1. A contract is explicit or implicit in the requirements. The documented requirements can be used to manage the customer’s expectations, and can also be used to validate that what was promised was delivered.
  2. Clear direction for the implementation team can be found in the requirements documents.
  3. New team members can rely on the documented requirements as a means to get up to speed.

While we strongly agree with the first two points – we think the third one is a bit of a stretch. While having requirements documentation does help people get up to speed, it isn’t a first-order benefit. Videotape a couple 1-hr presentations. One presentation discussing the goals of the project, and one discussing (and whiteboarding) the architectural approach of the solution. Put these on the server and let new people watch them. Much more cost-effective at helping people get up to speed. [Note – I’m pretty sure that I heard Alistair Cockburn suggest this approach or something like it in a podcast interview, so the credit for the idea is his, not ours.]

We would also add that documenting requirements is all but worthless if we don’t use the documents as tools to support conversation with our team members. Incremental delivery is a process that is dependent upon feedback. We must get feedback from stakeholders, and from the implementation team.

Stakeholders will verify the correctness and validate the completeness of the requirements.

The implementation team will provide feedback about the clarity, verifiability, and feasibility of the requirements as written.

Requirements need to be written to support verification. The QA team and stakeholders are responsible for verifying that what was delivered is what was expected. Technically, the delivery must match the requirements – but the requirements should match the expectations of the customer.

One Step Above Chaos

I like that the IBM guys name level zero as “Chaos.” I’ve worked as a developer on projects without requirements. It is chaos. There’s a reason we write requirements. They set expectations. And there’s a reason why we review and approve the requirements. It’s essentially a form of structured active listening.

Mapping to the CMMI Levels

In our diagram, we show the following mappings for RMM Level 1:

  • CMMI level 0 – Requirements Should Be Written
  • CMMI level 1 – Requirements Must Be Written
  • CMMI level 2 – Requirements Must Be Written
  • CMMI level 3 – Requirements Must Be Written
  • CMMI level 4 – Requirements Must Be Written
  • CMMI level 5 – Requirements Must Be Written

For CMMI level 0 – even if we don’t have a formal process, we really should be writing our requirements – and using those documents to manage expectations, provide feedback (that we’re doing the right stuff), and scope and focus our efforts.

For CMMI levels 1 and higher – all of the measured CMMI levels require that we have a defined process. Even with the disorganization of a team operating at CMMI level 1, we still need to have a process defined. And a requirements management process that doesn’t involve documenting the requirements isn’t worth very much at all.

Note that documentation might be in the form of prototypes, wireframes, and JAD Session notes. No one is saying that they have to be documented in any particular way. In fact, at RMM level 1, they aren’t in a consistent format, and don’t use a structured requirements framework. Consistent formatting is an element of RMM level 2. And RMM level 3 is focused on structured requirements.

The requirements documents may be scattered through a series of email debates, collaborative databases, and files on network share drives. That’s fine for RMM level 1 – in fact, it is part of the definition of RMM level 1. Organized requirements are a characteristic of RMM level 2.

Remember – CMMI Levels only represent how a process is implemented – they don’t characterize the effectiveness of any one process.

From a CMMI Level Perspective

The previous analysis basically looked at the “RMM level 1” column in our grid, and inspected the relative decisions for each “CMMI level” row. Now we will look at it by reviewing the CMMI levels, and seeing how they map to the RMM level.

A quick review of the same chart (so you don’t have to scroll up and down):

CMMI to RMM Mapping

At CMMI level 1, we require that requirements be written. We suggest that they be organized and structured.

At CMMI level 2, we require that the documentation be organized. A managed process without some form of organization and consistent documentation is a poorly managed process.

At CMMI level 3, we are standardizing our approach across our company. And at CMMI levels 4 and 5 we are measuring and improving on our process. We’ll address the higher CMMI levels in more detail as this series of articles continues.

Summary

  • RMM level 1 specifies that requirements are documented.
  • CMMI level 1 specifies that there is a process – in our case, one for managing requirements.
  • A process must be at RMM level 1 to be at CMMI level 1.
  • A process should be at RMM level 2 or 3 if it is at CMMI level 1.

Note that this implies that we would spend the extra effort to get to CMMI 2 before we would try and reach RMM level 4.

Check out the next article, CMMI Levels and RMM Level 2, or take our One Minute Survey on CMMI and RMM Levels.

CMMI Levels and Requirements Management Maturity Introduction

Welcome Readers of the Carnival of Enterprise Architecture! We hope you enjoy this series of articles!

CMMI (Capability Maturity Model Integration) is a description of the level of enlightenment of a process. It is essentially a measure of the quality and capability of a process. There are five categories, into one of which every process will fall. IBM took a similar approach to defining the requirements management process. In this series of posts, we will marry the two frameworks.

Background on CMMI Levels

We wrote an introduction to CMMI levels last March. In our article, we identified that there are five CMMI levels. Technically, there are six CMMI levels, when you include level zero. Level 0 is “undefined” by the CMMI, and represents an ad hoc process, or a lack of process.

CMMI Levels

  • CMMI Level 0. Undefined. No real process.
  • CMMI Level 1. Performed. A process is defined, but disorganized.
  • CMMI Level 2. Managed. A defined process is managed.
  • CMMI Level 3. Defined. A managed process is standardized across the company.
  • CMMI Level 4. Quantitatively Measured. The performance of a standardized process is measured.
  • CMMI Level 5. Optimizing. Performance measurement is used as a feedback loop to improve the process.

Take CMMI Levels With A Grain of Salt

Just knowing the CMMI Level of a process is not enough to know if the process is any good. By the same token, choosing a particular CMMI level, and meeting the technical requirements of that level are not enough to assure a good process.

Background on RMM Levels

The folks at IBM wrote an article in 2003, where they defined five levels of maturity for requirements management processes. Each of the requirements management maturity (RMM) levels builds on the previous level, with increasing capability.

  • RMM Level 0. Chaos. No persistent documentation of requirements.
  • RMM Level 1. Written Requirements. Writing requirements documents (not emails and whiteboards).
  • RMM Level 2. Organized Requirements. Colocation, versioning, consistent formatting.
  • RMM Level 3. Structured Requirements. Defining types of requirements and their relationships.
  • RMM Level 4. Traced Requirements. Explicitly mapping the support-network of requirements.
  • RMM Level 5. Integrated Requirements. Integrating with the development environment and change management.

What IBM Didn’t Do

They didn’t map their framework back into the CMMI framework (known as CMM at the time) except for the following comment in the introduction of their article:

Those familiar with the CMM (Capability Maturity Model) from the Software Engineering Institute (SEI) will note some similarities to our parallel model, which has no direct relationship to the CMM save one: Achieving Level Five of the RMM will assuredly help an organization get to at least Level Three of the CMM.

IBM put together a great framework for describing elements of increasingly capable requirements management processes.

Putting together a framework to describe increasingly capable processes is what the SEI tried to do when they developed the CMMI. So why couldn’t the IBM team just map their framework into the CMMI framework?

The problem is that there is a mismatch between the two frameworks.

  • The RMM framework describes steps and elements of a requirements management process. Each step adds a level of capability to the process. It might be more aptly named the requirements management capability framework.
  • The CMMI framework describes the strategic capabilities (maturity) of how a process is applied, without assessing the tactical capabilities of the process itself.

The SEI recognized that the analysis of the tactical capabilities of any process would be different for every process, and left it to others to perform that work. This is almost what the IBM team did. We’re going to take a crack at it here.

Mapping RMM Levels to CMMI Levels

This is the first in a series of articles that will present a mapping of RMM levels to CMMI levels. We like using CMMI as a means to evaluate our internal processes, notwithstanding the challenges we mentioned earlier. We also like the framework that IBM presented for describing requirements management processes.

Shoot First, Ask Questions Later

There’s a lot more to write about this than we can put into a single article. We’re going to tackle this as a series. Even so, we put together an initial draft of how we think this will ultimately work out. We’ll share that here now. But we reserve the right to fix it when we find problems as we (and you!) put more effort into it.

CMMI to RMM Mapping

Articles In This Series

  • CMMI Levels and RMM Level 1 – Written Requirements
  • CMMI Levels and RMM Level 2 – Organized Requirements
  • CMMI Levels and RMM Level 3 – Structured Requirements
  • CMMI Levels and RMM Level 4 – Traced Requirements
  • CMMI Levels and RMM Level 5 – Integrated Requirements

Code Debt: Neither A Borrower…

Code Debt is the debt we incur when we write sloppy code. We might do this to rush something out the door, with the plan to refactor later. Agile methodologies focus on delivering functionality quickly. They also invoke a mantra of refactoring – “make it better next release.” This can create pressure to “get it done” that overwhelms the objective of “get it done right.” Taking on code debt like this is about as smart as using one credit card to pay off another one.

Using Timeboxes to Visualize Pressure

In our timeboxing tutorial, we introduced a framework for managing releases and the amount of work we do within each release. We can apply the same framework to visualize the pressure that makes us consider borrowing against our future to take on a code debt.

Here’s a quick summary of the concepts from that article.

Each unit of functionality that we deliver in a product or release is made up of two elements – the time it takes to make it, and the time it takes to make it right. We can think of this as function and quality respectively. We’ve drawn them as puzzle pieces to show that they are (or at least should be) locked together. When a developer gives an estimate, it should be for the combination of function and quality – not function alone.

function and quality

A timebox represents the amount of time and money we allocate for a given release. It basically determines the size of the box that we can fill up with our units.

timebox
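
As a rough sketch (our illustration, not part of the original tutorial), we can model units and the timebox and watch what happens when the plan doesn’t fit:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    function_days: float  # time to make it
    quality_days: float   # time to make it right

    @property
    def size(self) -> float:
        # function and quality are locked together, so an honest
        # estimate is their sum
        return self.function_days + self.quality_days

def fill_timebox(capacity_days: float, units: list[Unit]):
    """Fill the timebox in priority order; whatever doesn't fit
    is pushed to a later release, quality still attached."""
    fits, pushed, used = [], [], 0.0
    for unit in units:
        if used + unit.size <= capacity_days:
            fits.append(unit)
            used += unit.size
        else:
            pushed.append(unit)
    return fits, pushed

backlog = [Unit("search", 6, 3), Unit("reports", 5, 2), Unit("export", 4, 2)]
fits, pushed = fill_timebox(17, backlog)
print([u.name for u in fits], [u.name for u in pushed])
# ['search', 'reports'] ['export'] – the export feature slips
```

Dropping quality_days from size is exactly the “decouple quality” move discussed next – more function appears to fit, at the price of a debt.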

In our article, we talked about the options we have to try and get more “stuff” delivered in a single timebox. One obviously bad option is to decouple the quality from the functionality in order to deliver more functionality.

plan for bad quality

Several people essentially said “No one would ever plan on bad quality – that’s a bad suggestion!”

We agree – it is a bad idea. We were merely pointing out that it is something people consider. Especially managers who’ve never read The Tipping Point, and don’t know about “broken windows.” “Broken windows” in this case is a reference to the downward pressure that is applied to all future development efforts by forcing developers to work on a code base that has a lot of “low quality code.” The idea is that if we skip the quality part enough times, we will eventually stop caring at all.

We also agree that rational people won’t make a habit out of using this approach. However, there is another way we could find ourselves in this situation.

What if our estimates were wrong? In the middle of the timebox, we suddenly find ourselves without enough time / people to finish.

missed estimates

In the highlighted region of the diagram, we can see that the feature on the left took longer than we expected – and it pushed out the red/orange feature. There isn’t enough room in our timebox to get it all done.

Guilty

We’ve even suggested that maybe the right thing to do is to take a loan against ourselves and incur a code debt.

There are times when incurring a temporary code-debt is pragmatically more useful than delaying a release or a feature in order to polish the code.

Software Product Delivery – 20 Rules?

I’ve also argued against code-debt as a long term strategy, in the comments on another post.

I definitely agree that code-debt builds over time and gets more and more expensive. Plus there’s the risk of the whole ‘broken windows’ problem that Malcolm Gladwell outlines in The Tipping Point. I visualize it like the graph for a cascading diode – you can build up more and more code-debt (or design-debt, as you describe in the link), with its associated performance penalty, until you reach the ‘critical voltage’ and flood the diode. At this point, it becomes impractical to refactor – rewriting becomes more cost effective.

The Agile Dragon

So, we’ve stated that it might be a pragmatically good idea in the short run, and definitely a bad idea in the long run. But how do we crystallize the message that it is risky? With another good analogy.

Another Good Analogy

We owe our thanks to Mishkin for this extension of the “code debt” analogy that brings perfect clarity to the issue.

Last night I was thinking more about the analogy of technical debt. In this analogy, design and quality flaws in a team’s work become a “debt” that must eventually be paid back.

[…]

In other words, every time someone asks a team to let quality slide, they are asking the team (and the organization) to take on debt with an unknown interest rate. Which is lunacy.

Technical Debt, Mishkin Berteig

Thanks Mishkin!

Conclusion

Technical debt is very risky. Maybe it is the right call in the short run (but assume it isn’t). It is never the right call in the long run.

Crossing The Desert With Bad Project Planning

Johanna Rothman recently wrote an article with a poignant introduction: “A project team focuses on an interim milestone, works like the devil to meet that milestone. They meet the milestone, look up, and realize they’re not at the end of the project–they still have to finish the darn thing. They’re living the Crossing the Desert syndrome.” Fixing it isn’t enough – how do we prevent it from happening?

Unrealistic Scheduling

Johanna points out that when a team has to put in overtime (what we might call heroic efforts) to achieve an interim milestone in a project, the remaining project schedule is suspect. Johanna recommends revisiting the remaining schedule, and we agree.

Recovery

Johanna also highlights the way to recover – give the people on the project a break. Have them move back to 40 hour weeks if they were working overtime. Force them to take the weekends off if they weren’t already. In short, restore a rational schedule. If people worked through vacations or holidays, require them to take the time off.

It’s been said that it takes two weeks of down time to recover from work burnout. Extending the desert analogy that Johanna credits to Jack Nevison of Oak Associates, we’ve reached the oasis in the middle of the desert, and we need to stay there long enough to rest and rehydrate.

Unfortunately, we’re still in the middle of the desert, and the rush to reach the oasis presumably had merit. If a project schedule is so intense that we have created the need for recovery in the middle of the project, the likelihood of having two weeks to recover is low. Whatever drove us to push to reach the oasis is still driving us, so resting for too long will only make it worse. And we’re still in the middle of the desert. We need to focus on prevention.

Prevention

We can’t always prevent the impetus to work long hours on a project. We can manage our execution in a way that reduces the chances of feeling like we’re crossing a desert.

  • Improved estimation of tasks. Entire books are written about this topic; we won’t try to summarize ways to do this in a single blog post.
  • Realistic effort allocation. When scheduling how many hours a day a developer can be “on task”, our experience has been that 5 to 6 hours is the maximum (when working full time or a little more). For requirements work, this might even be a little bit aggressive.
  • Writing verifiable requirements. We need requirements that specify (or at least allow us to identify) when we’re done. Scope creep isn’t always implementing more features, it can be over-implementing features. With test-driven development (TDD) processes, the tests are written first (they fail), and the developer writes code until the tests pass. Once the tests pass, the code is done. Doing this requires us to write testable requirements. Practically speaking, this may only be realistic when using a continuous integration approach to code development.
  • Managing smaller work chunks. Timeboxing (more details) is very effective for controlling the size of iterations. Iterations are important not only for the feedback cycle, but because they reduce the size of the “desert patches.”
  • Feedback into the estimation cycle. Timeboxes become even more effective when we keep track of our estimation accuracy, and refine those estimates on the fly to help in replanning future releases (see the sketch after this list). This is a key step in the maturation of a process or company from a CMMI perspective.
  • Better communication of release content. Planning releases based on use cases / use case versions is a great way to target communication for external (to the team) stakeholders. It can also be really effective for helping people avoid the desert-crossing syndrome. Essentially you’re providing a map (“The next watering hole is just over that sand dune”) that helps keep efforts in context. One reason the desert image is so powerful is that we imagine being lost in a sea of sand – the hopelessness is a function of both the reality and the perception. Better communication helps address perceptions, while other elements help with the reality.
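
Here is a minimal sketch of that estimation feedback loop (our illustration, with made-up numbers): track the ratio of actual to estimated effort from finished work, and scale raw estimates for the next timebox.

```python
def accuracy_factor(history: list[tuple[float, float]]) -> float:
    """history holds (estimated_days, actual_days) pairs from
    completed work; returns the average overrun multiplier."""
    if not history:
        return 1.0  # no data yet, so trust the raw estimates
    return sum(actual / estimate for estimate, actual in history) / len(history)

def calibrated(estimate_days: float, history: list[tuple[float, float]]) -> float:
    """Scale a raw estimate by what past timeboxes taught us."""
    return estimate_days * accuracy_factor(history)

# A team that habitually runs 30% over sees its raw 10-day
# estimate corrected to 13 days when planning the next timebox.
past = [(5, 6.5), (8, 10.4), (10, 13.0)]
print(calibrated(10, past))  # 13.0
```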

Conclusion

Overworking the team is bad. Burning them out in the middle of the project is worse. Prevention is the solution, through iterative development, better communication, and improved estimation/planning skills.

Software Product Delivery – 20 Rules?

Rishikesh Tembe shared twenty rules for software product delivery last month. His rules are from the perspective of a former software developer. Some we like. Some, not so much.

We Like

Rishikesh has a short post with 20 rules. Among those rules are some concepts that we feel like expanding upon:

8. Do the riskiest part of the project first.

This is a great idea (for developers and project managers) for a couple of reasons. Risk may be a function of technical feasibility or customer acceptance/value. Risk may even be an artifact of how we go about developing our product – for example, offshoring introduces risks. Risk is generally a nebulous but bad thing to have in a project. If a problem is known, a solution can be identified, addressed, and implemented.

Risks represent the unknown unknowns – those problems that we don’t understand well enough to address. Big risks can be very scary – they jeopardize the ROI of our project. Lack of user adoption can affect the expected value to our customers, which will directly or indirectly affect our bottom line.

Note: This is not meant to imply that prioritization of requirements by release should be subservient to risk-mitigation. The most important stuff should always be done first. Within a particular release, address the riskiest issues first. Prioritize first. Mitigate second.
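
As a toy sketch of that ordering (ours, with hypothetical data), release assignment stays the primary key, and risk only breaks ties within a release:

```python
# Hypothetical backlog: release reflects business priority,
# risk is a rough 0-to-1 guess at the unknowns involved.
backlog = [
    {"name": "audit log",     "release": 2, "risk": 0.7},
    {"name": "login page",    "release": 1, "risk": 0.2},
    {"name": "checkout flow", "release": 1, "risk": 0.9},
]

# Prioritize first (by release), mitigate second (riskiest
# items earliest within each release).
work_order = sorted(backlog, key=lambda item: (item["release"], -item["risk"]))
print([item["name"] for item in work_order])
# ['checkout flow', 'login page', 'audit log']
```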

10. Make sure you’re in total control of your toolset and improve it systematically

This alludes to the general benefits of operating at a higher CMMI level. As a CMMI level five organization, we would quantitatively measure and improve not only our tools but our processes.

16. Build regression testing into the build process.

We’ve talked about continuous integration in the past. In our opinion, no other approach should be used to develop software.

Not So Much

11. Do not take the clients’ deadlines literally – first accept the project, then renegotiate the deadline.

Clients have deadlines for reasons. They may be market-driven, or driven by internal politics or budget cycles. While it is possible that a deadline is arbitrary, it is usually associated with a compelling event of some sort. Any discussion of deadlines should be approached in the same way we manage scope creep as a relationship-building exercise.

Project constraints, such as budgets and deadlines, are things that should be addressed collaboratively. Our clients are our partners, not our opposition.

13. Document the interfaces perfectly, but don’t document code (see next point).
14. Be fanatical about the readability of code.

Absolutes are rarely rational end-points, although presenting goals as absolutes can be effective in motivating directional change in an organization (with no expectation of actually achieving the goal). More on that some other time.

Broken Windows, as Gladwell describes them, are absolute detractors from any environment, and serve to degrade the performance of those who operate in the environment. Code readability, micro kitchens, flexible schedules. There are many soft-ROI elements that make up the environment of a software developer. Readability of code, like readability of requirements, is important. A friend of mine used to joke that Perl is a “write-only” language. Notwithstanding his joke, code needs to be readable. Even in Perl.

Perl serves as a great example – elegance of code does not always coincide with syntactic simplicity. Refusing to comment code is just a bad idea. People read code. Make it readable.

There are times when incurring a temporary code-debt is pragmatically more useful than delaying a release or a feature in order to polish the code.

Conclusion

There are some good ideas and some bad ones in the list. Most of them are thought provoking, regardless. We may eventually find a list of absolute rules we should all follow, and it would overlap with this list somewhat. But for now, we’re still looking.

Software Silver Bullet

“I believe the hard part of building software to be the specification, design, and testing of this conceptual construct,[…] If this is true, building software will always be hard. There is inherently no silver bullet.”
Frederick P. Brooks, Jr. Computer Magazine, Apr 1987

Hat Tip To Joel

Joel Spolsky wrote an article, Lego Programming, about how mainstream media gets it wrong. And he ended the article with an interesting quote:

None of them believed Frederick P. Brooks, in 1987: “Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any—no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware[…]”

Spolsky quoting Brooks

I used to have a department manager who regularly asked for silver bullets – not metaphorically, literally. So I just had to check out Mr. Brooks’ article.

Brooks in 1987

Any article about software that still rings true after almost 20 years is amazing. Brooks starts us off right away with his conjecture that the big problem of complexity in software will never be solved. He proposes that an order of magnitude improvement might be found, but that the problem will never go away. Never? We still have the problem today.

Brooks identifies elements of the essence of software that make it forever a werewolf (no silver bullet).

  • Complexity. Brooks cites the non-linear increase in complexity of software with the size of the software. Even with OO constructs and abstraction layers, this is still true today. With a decade of enterprise software experience, I will vouch for this, in terms of complexity versus “size of the software.” The recent thread on Vista’s shutdown options is another good example – not because there are so many ways to do it, but rather because the team spent over a year implementing it. As to complexity vs. “value of the software,” I’m not certain it is non-linear. Applications are getting larger (generally), but the amount of value they provide is growing much faster. Not cause-and-effect, but correlated. Businesses are re-engineering to utilize software in their processes. Those savings are large and, inferring from the increases in spending on software, more valuable than in the past. Is achieving “the same value” easier today than twenty years ago? Yes. So much so that we don’t even try to do it – we try to do harder, more valuable stuff.
  • Conformity. Brooks argues that there are no guiding principles that cause software engineers to conform (stylistically, representationally, etc). Therefore, they will spend some cycles dealing with “arbitrary complexity.” Fair point.
  • Changeability. Software becomes complex because it can. Brooks points out that cars don’t change often, because it isn’t easy to change them. Software has no such barrier. If you’ve ever spent time on a working farm, and seen the myriad of wonderful things that can be accomplished with baling wire, twine, and duct tape, you know that the impetus to change things is not limited to software. Without the natural barriers enjoyed by manufactured goods, software will be changeable, and therefore hard.
  • Invisibility. Brooks argues that because of the representational power of software, we represent constructs that we inherently cannot visualize. Some smart physicists at Nicolaus Copernicus University agree, and spend considerable effort “simplifying” multi-dimensional data into a framework we can accept.

Brooks discusses several advances in software, and approaches that people hope will be a silver bullet. He posits that none of them have been or will be (as of 1987). We’re approaching two decades later, and none of them have. There are some prescient quotes in his article.

Effective Strategies

Brooks also discusses some effective strategies to change the essence of software complexity and find a silver bullet.

1. Buy Vs. Build

He proposes a very outside-the-box solution – don’t build the software, buy it. Buying customized software is not what Brooks proposes. That would be no more effective than Kanban was (shifting the overhead burden onto suppliers who subsequently raised prices). He proposes using non-customized software, and adapting business processes to those supported by the off-the-shelf software.

2. Requirements Refinement and Rapid Prototyping

His opening paragraph is a very strong statement – and one we agree with:

The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is as difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

Brooks goes on to make a statement that seems eerily similar to one I’ve heard Kent Beck say:

I would go a step further and assert that it is really impossible for a client, even working with a software engineer, to specify completely, precisely, and correctly the exact requirements of a modern software product before trying some versions of the product.

From this perspective, Brooks promotes incremental development as the right approach. In 1987!

3. Great Designers

We’ve always said that people trump process. Brooks said it first.

Conclusion

Software is hard. Requirements are harder. Nothing (according to Brooks) will ever change that.

Skip The Requirements, Empower The Developers

Enough of the debates about requirements and what we call them. Why don’t we just hire great developers and empower them to work directly with the customers?

Lost Garden

Danc, at Lost Garden, has an article that asks just this question, and addresses the challenges of directly empowering developers.

A couple of quotes to set the tone of the article:

What would happen if the developers possessed a deep understanding of their customers needs and desires? Suddenly, those thousand little decisions aren’t introducing random noise into the product. Instead, they are pushing the product forward.

And…

One philosophy is that we need better specs and those damned monkeys need to implement the specs exactly as designed. Better command and control system and more rigorous process is obviously the trick to success. This tends to generate poor results.

Getting Closer To The Customer

Danc offers these tips to help the developers get closer to their customers:

  • Use your own product
  • Onsite customers
  • Observe customers using your product
  • Hire psychologists and ethnologists to study your customers
  • Listen to lead users
  • Reduce feedback cycle time
  • Improve objectivity of results
  • Improve clarity of results

The only inconsistent point in his article seems to be the suggestion about using ethnologists. Don’t they present a barrier to developers in the same way that a product manager would by synthesizing and prioritizing “the voice of the customer?”

Freeing developers from a job where they are coding “to spec” will definitely empower the developers. I believe that having someone trained in prioritization and interpretation of needs (a product manager, for example) can help keep the team aligned on the important stuff by providing insight into what is important. This is the same argument for having “experts” interpret customer behaviors.

Check it out.