“What CMMI level should we use?” is not the right question, but it is the question most people ask.
A CMMI (Capability Maturity Model Integration) rating measures the capability of a software development process – specifically, its capability with respect to creating software. Our foundation series post on CMMI provides background information, while this post focuses on the danger of misusing CMMI ratings.
1. The CMMI measurement is (mostly) a facade.
With the exception of a CMMI level five (Optimizing) process, having a CMMI rating doesn’t mean that the process is good. It means that the process is documented and managed (CMMI level two), standardized within the company (CMMI level three), or quantitatively measured (CMMI level four). Even CMMI level five status doesn’t tell us how good a process is, only that the team is actively focused on improving the process.
Having a documented process doesn’t make it a good process. This is the main flaw. If we documented a process that included steps like “developers create use cases” and “to certify a release, the developer installs the final build on his laptop”, we would qualify for CMMI level two. If we standardized on that poor process, we would reach the next CMMI level. And we could measure “lines of code written per hour” and other skewed quantifications of activity to achieve CMMI level four.

The CMMI measurement isn’t entirely worthless – Carnegie Mellon has a track record of doing really great and smart stuff – and CMMI is the best normalized, one-size-fits-all measurement anyone could come up with. The problem is that in order to make the measurement apply to everyone, it has been neutered to the point of not providing very much valuable information.
It is important to know that a company has a process and measures its performance. It is even more valuable to know that a company is actively optimizing that process (CMMI level five).
CMMI alone does not tell us enough about the process.
Which team would we rather have developing software for us – a CMMI level 3 team, or a CMMI level 2 team? We absolutely cannot answer without more information. If our conversation with a potential outsourcer goes like this, then we have a problem:
“What is your CMMI level?”
“We operate our business at CMMI level four.”
If however, our conversation goes more like this, we’re in a good place:
“Our technical guys have reviewed your process, and we like it. How long have your people been using it, and can you give us a couple references of companies for whom you’ve used this process?”
“Thank you. We received CMMI level four certification for this process two years ago. Since this is our standard process, all of our reference accounts have benefited from this process – you can contact any of them.”
The key difference is that we’ve actually reviewed the process to determine its value. The CMMI rating gives us some assurance that the process is followed rigorously. ISO 9000 certification, in the hardware world, suffers from the exact same problem. In a nutshell, ISO 9000 requires companies to say what they do, and do what they say. It provides no insight into the value of what the company chooses to do.
2. CMMI ratings create a false sense of security.
It is very tempting for companies to advertise their CMMI level, especially outsourcing companies, and especially the global providers. These companies can capitalize on the human instinct – out of sight, out of mind. When companies outsource, they want to be able to “not worry about it”, and CMMI ratings can engender a false sense of confidence in the outsourcing provider.
In addition to the implicit presumption that a documented process is a good process, it is also easy to assume that people who follow a process are at least competent at what they do. There is no reason to presume this without reviewing the quality of their work. We should always talk to references to find out their level of satisfaction with an outsourcer.
When we’re managing our own team, it is easy to fall into the “our process is broken” trap. Very few people will tell you that it was their fault. We’ve not yet heard someone say “I was not smart enough to solve the problem.” or “If I had worked harder, we would have made it.” We have repeatedly heard “The process is broken.” and “I need better tools / a bigger team / more time and budget.”
The process may very well be broken. Even the boy who cried wolf was eventually right. But achieving a CMMI level without fixing the process doesn’t fix the process.
3. Standardized processes can shackle innovators.
Many people thrive on having a structured environment and process in which to work. They actually do better work when given concrete tasks, discrete deliverables, and monitored timelines. Very few innovators work best this way. When creating differentiated products, an innovator may be best served by a differentiated process. As a result, people who gravitate towards standardized processes tend to create standardized (me-too) products.
Many innovative companies, like IDEO or Frog, solve a wide range of problems, from software to electronics to toothpaste dispensers. A single unified process would be either stifling or irrelevant if all of those teams had to use it.
4. Focusing on the process means not focusing on the product.
In a well-known play in American football, the quarterback throws a long pass to a wide receiver who is running down the field. The receiver attempts to catch the ball, avoid a tackle from the nearby defender, and keep running. The receiver will occasionally drop the football, because he is too focused on avoiding the tackle and on running. The commentators will point out that he needs to not think about getting tackled until he actually catches the football. The receiver does not have his eye on the ball, metaphorically speaking.
Driving and rewarding our teams for the CMMI level of the process they follow is like rewarding them for avoiding tackles and running. If this becomes a higher priority than writing great software (catching the pass), then they will do a methodical and rigorous job of following the process, and if we’re lucky, write great software along the way. Our goal is the great software – we need to make sure we are managing our teams with the software as the highest priority.
When teams are focused on writing great software, then a great process can help them. And CMMI can provide some affirmation (but not validation) that they are following a good process.
The right question
The right question is “How good is our process?”
A good process can make a good team very good, and can make a great team invincible. A good process helps with an incompetent team by providing us good information about their incompetence. A bad process at best annoys good and great people, but more commonly it dilutes their efforts or even derails their projects. It is important to understand the quality of the process being followed by the team. And investments in improving the process can be worthwhile (subject to the 80/20 rule).
CMMI, unfortunately, cannot tell us if the team is competent. It cannot tell us if the process is good. It can only tell us that a process is being followed (or measured).
We’ve used the phrase “necessary but not sufficient” repeatedly when describing important elements of software product success. CMMI ratings fall into the same category.
In fairness, a team with a CMMI level five process is actively applying its ongoing analysis (CMMI level four) to improving the process. This is the one piece of CMMI data from which we are more likely to infer that the process is a good one. However, as the SEI itself points out in its documentation:
Reaching CMMI level 4 or 5 for a process area is conceptually feasible but may not be economical except, perhaps, in situations where the product domain has become very stable for an extended period of time.
CMMI ratings are not what drive us.