Monthly Archives: March 2006

Software design and specification and making movies


In his book, The Inmates Are Running the Asylum, Alan Cooper presents the analogy that software development is like making movies. [This is a fantastic book for understanding exactly how Cooper’s perspective evolved over the last decade.] Cooper presents the analogy in the context of validating the business case for investing in interaction design.

Cooper points out that people have been making movies for a lot longer than we’ve been making software, and he’s exactly right that there is something to learn from the film industry.

How the movie industry works

The movie industry manages movies in three phases:

  • Pre-production. Determining what the movie will be about, raising funds, storyboarding and designing the movie, getting actors signed, writing the script, etc.
  • Production. Shooting the film. Directors, actors, and crew all working very expensively to get the film shot.
  • Post-production. Tweaking and finalizing the film.

How software development parallels movie making

Software development involves three phases as well: Decide what to do, do it, and deliver it.

The interesting thing to note is that the film industry universally invests time upfront in pre-production, to minimize the costs of production. They recognize that production is more expensive than pre- or post-production. Many software teams take the same approach, although Agile development explicitly does not. We gleaned some insight into Cooper’s perspective from our coverage of a debate between Cooper and Kent Beck.

If we accept Cooper’s premise that production is more expensive than pre-production, then software should follow the same model.

It’s worth noting that an agile process results in more design, not less. Beck might argue that redesigning as we go is less expensive, because we improve our ability to understand what we actually want to create during the process of creating it. Cooper disagrees.

As much as we like Cooper’s insights, the movie industry’s cost structure is not paralleled in software development. When we hire developers, it is analogous to the old movie studios keeping actors on retainer – the cost is “fixed.” And the infrastructure costs of production (set creation, for example) are not affected by the time spent in production – they too are fixed. If we staff a project with contract developers, then we have a variable cost, and we lose money while those developers are “sitting around.” However, today’s projects leverage outsourced overseas contractors more and more – and these actors are a lot cheaper than script writers.

What we know in spite of the analogy’s flaws

We absolutely save time and money by defining requirements before we write the software. We also know that it is important to design before we code.

Neither of these statements conflicts with agile philosophies, if we take the approach of treating “design everything” and “design this one thing” similarly. An agile approach will simply have multiple design/implement cycles, each focused on a subset of the software (and allowing for a redesign phase prior to delivery).

Definition of payback period


Another tool for financial decision making

We’ve talked previously about using ROI to determine which projects to fund. This isn’t the only way to make those decisions, as Ski points out with the concept of flush. Payback period is the measure of how quickly an investment returns the invested amount, or the break-even point in the investment.

Using payback period

When we are managing a business or a project by cash flow (versus income), we care very much about how quickly we get our money back. In reality, some companies (or project sponsors) will choose the less profitable investment if they get their money back faster. The most common reason for this decision is that a private company is strapped for cash and does not want to use debt to finance operations. Public companies can also face this situation if they find that market valuations are driven predominantly by cash flow instead of profitability. A bootstrapped start-up company may also be forced to make short-term investment decisions based upon cash flow.

Companies faced with a high level of uncertainty for their investments will also find payback period analysis attractive. Since payback calculations carry the same inherent risk (and error) as expected value calculations, they can present a false sense of security. They do, however, provide a mechanism for controlling risk by favoring “get my money back sooner” projects over those with longer payback periods.

When using payback period to make investment decisions, companies will usually have a standard time period, such as two years or two quarters. Any project that has a payback period of less than the standard will be acceptable. Those projects that take longer to return the original investment than the standard period will not be acceptable.

Definition of payback period

From Philip Cooley’s Business Financial Management:

A popular procedure in practice that measures the time required to recapture through cash inflows a project’s net investment cash outflow. Payback ignores time value of money and post-payback cash flows. It measures the return of capital, not the return on capital.
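
To make the definition concrete, here is a minimal Python sketch of the calculation (our illustration, not Cooley’s), assuming cash inflows arrive evenly within each period:

def payback_period(net_investment, cash_inflows):
    # Periods (e.g. years) required to recover the initial outflow,
    # interpolating within the period where break-even occurs.
    remaining = net_investment
    for period, inflow in enumerate(cash_inflows, start=1):
        if inflow >= remaining:
            return (period - 1) + remaining / inflow
        remaining -= inflow
    return None  # the investment never pays back

print(payback_period(100000, [40000, 40000, 40000]))  # 2.5

Note that the function never discounts anything: a dollar in year three counts the same as a dollar today, which is exactly the weakness described below.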

Problems with payback period

Payback period analysis is not considered to be an ideal evaluation mechanism for three reasons.

  1. The time value of money (used in NPV calculations) is not considered. $100,000 three years from now is considered to be equivalent to $100,000 today.
  2. Cash flows beyond the payback period are ignored, thereby ignoring the ultimate ROI of the investment.
  3. The internal rate of return (IRR) of the project is ignored. Less profitable projects will be favored if they return the initial investment more quickly than more profitable projects.
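
A small worked example (hypothetical numbers, 10% discount rate) shows the second and third problems in action. Payback period picks the project that net present value says is worth far less:

def npv(rate, net_investment, cash_inflows):
    # Net present value: discount each period's inflow back to today.
    return -net_investment + sum(cf / (1 + rate) ** t
                                 for t, cf in enumerate(cash_inflows, start=1))

fast = [60000, 50000, 10000]         # pays back in 1.8 years
slow = [30000, 40000, 50000, 60000]  # pays back in 2.6 years

print(npv(0.10, 100000, fast))  # about  $3,400
print(npv(0.10, 100000, slow))  # about $38,900

A two-year payback standard would approve fast and reject slow, even though slow creates more than ten times the value.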

Conclusion

We suggest using payback period only when cash-strapped (with an aversion to debt-financing), or as a tie-breaker against apparently equivalent projects (based upon ROI).

Software testing series: Pairwise testing

Before we explain pairwise testing, let’s describe the problem it solves

Very large and complex systems can be difficult and expensive to test. We often inherit legacy systems with multiple man-years of development effort already in place. These systems are in the field and of unknown quality, and there are frequently huge gaps in their requirements documentation. On many projects, we’re called in precisely because there is a quality problem. Pairwise testing provides a way to test these large, existing systems.

We are faced with the challenge of quickly improving, or at least quickly demonstrating momentum and improvement in, the quality of this existing software. We may not have time to re-gather the requirements, document them, and validate them through testing before our sponsor pulls the plug (or gets fired). We’re therefore faced with the need to approach the problem with blackbox (or black box) testing techniques.

For a complex system, the amount of testing required can be overwhelming. Imagine a product with 20 controls in the user interface, each of which has 5 possible values. We would have to test 5^20 different combinations (95,367,431,640,625) to cover every possible set of user inputs.
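
The arithmetic is easy to check. A quick sketch, using this hypothetical example:

from itertools import combinations

controls, values = 20, 5
exhaustive = values ** controls                      # 95,367,431,640,625 tests
pairs = len(list(combinations(range(controls), 2)))  # 190 pairings of controls
pair_values = pairs * values ** 2                    # 4,750 value pairs to cover

Keep those 4,750 value pairs in mind: since any single test exercises all 190 control pairings at once, a surprisingly small test suite can cover every pair.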

The power of pairwise

With pairwise testing, we can achieve on the order of 90% coverage of our code in this example with 54 tests! The exact amount of coverage will vary from application to application, but analysis consistently puts the value in the neighborhood of 90%. The following are some results from pairwise.org.

We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc. […] The pairwise tests gave over 90 percent block coverage.

Our initial trial of this was on a subset [of] Nortel’s internal e-mail system where we [were] able [to] cover 97% of branches with less than 100 valid and invalid testcases, as opposed to 27 trillion exhaustive testcases.

[…] a set of 29 pair-wise AETG tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

Got our attention!

How does pairwise testing work?

Pairwise testing builds upon an understanding of the way bugs manifest in software. Usually a bug is caused not by a single variable, but by the unique combination of two variables. For example, imagine a control that calculates and displays shipping charges in an eCommerce website. The website also calculates taxes for shipped products (when there is a store in the same state as the recipient, sales taxes are charged; otherwise, they are not). Both controls were implemented and tested and work great. However, when shipping to a customer in a state that charges taxes, the shipping calculation is incorrect. It is the interplay of the two variables that causes the bug to manifest.
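
As a sketch of how such an interplay bug might look in code (hypothetical names, rates, and logic, invented purely for illustration):

def order_total(subtotal, ship_to_state, store_states=("TX",)):
    tax = 0.0825 * subtotal if ship_to_state in store_states else 0.0
    # Bug: shipping is computed on the taxed total instead of the
    # subtotal, so it is only wrong when tax and shipping interact.
    shipping = 5.00 + 0.10 * (subtotal + tax)
    return subtotal + tax + shipping

Tested in isolation, the tax logic is correct and the shipping logic appears correct (tax is zero in the no-tax states we tried). Only the combination of a taxable state and a shipping charge exposes the defect.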

If we test every unique combination of every pair of variables in the application, we will uncover all of these bugs. Studies have shown that the overwhelming majority of bugs are caused by the interplay of two variables. We can increase the number of combinations to look at every three, four, or more variables as well – this is called N-wise testing. Pairwise testing is N-wise testing where N=2.

How do we determine the set of tests to run?

There are several commercial and free software packages that will calculate the required pairwise test suite for a given set of variables, and some that will calculate N-wise tests as well. Our favorite is a public domain (free) software package called jenny, written by Bob Jenkins. jenny will calculate N-wise test suites, and its default mode is to calculate pairwise tests. jenny is a command line tool, written in C, and is very easy to use. To calculate the pairwise tests for our example (20 controls, each with 5 possible inputs), we simply type the following:

jenny 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 > output.txt

And jenny generates results that look like the following:

1a 2d 3c 4d 5c 6b 7c 8c 9a 10c 11b 12e 13b 14d 15a 16c 17a 18d 19a 20e
1b 2e 3a 4a 5d 6c 7b 8e 9d 10a 11e 12d 13c 14c 15c 16e 17c 18a 19d 20d
1c 2b 3e 4b 5e 6a 7a 8d 9e 10d 11d 12a 13e 14e 15b 16b 17e 18e 19b 20c
1d 2a 3d 4c 5a 6d 7d 8b 9b 10e 11c 12b 13d 14b 15d 16d 17d 18b 19e 20a
1e 2c 3b 4e 5b 6e 7e 8a 9c 10b 11a 12c 13a 14a 15e 16a 17b 18c 19c 20b
1a 2a 3c 4e 5e 6a 7b 8c 9d 10b 11b 12b 13e 14a 15d 16d 17c 18c 19b 20d […]

Where the numbers represent each of the 20 controls, and the letters represent each of the five possible selections.
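
jenny’s algorithm is more sophisticated than this, but the core idea fits in a few lines. Here is a toy greedy generator in Python (our illustration, not jenny’s implementation): seed each new test with an uncovered pair, then keep the candidate row that covers the most remaining pairs.

from itertools import combinations
import random

def greedy_pairwise(levels, candidates=30):
    # levels: the number of possible values for each control.
    n = len(levels)
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(n), 2)
                 for a in range(levels[i])
                 for b in range(levels[j])}
    tests = []
    while uncovered:
        i, j, a, b = next(iter(uncovered))  # seeding guarantees progress
        best, best_gain = None, -1
        for _ in range(candidates):
            row = [random.randrange(k) for k in levels]
            row[i], row[j] = a, b
            gain = len({(p, q, row[p], row[q])
                        for p, q in combinations(range(n), 2)} & uncovered)
            if gain > best_gain:
                best, best_gain = row, gain
        tests.append(best)
        uncovered -= {(p, q, best[p], best[q])
                      for p, q in combinations(range(n), 2)}
    return tests

print(len(greedy_pairwise([5] * 20)))  # typically a few dozen tests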

What’s the catch?

There are two obvious catches. First, when we use a tool like jenny, we must run all of the tests it identifies; we can’t pick and choose. Second, pairwise testing doesn’t find everything. What if our earlier example bug about taxes and shipping only manifested when the user is also a first-time customer? Pairwise testing would not catch it. We would need to use N-wise testing with N >= 3. Our experience has been that N=3 is effective for almost all bugs.
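
If we recall correctly, jenny also accepts an -n switch for N-wise generation (consult its documentation to confirm the exact syntax), so a 3-wise suite for our earlier example would look something like:

jenny -n3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 > output.txt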

There is also a sneaky catch – test generators like jenny assume that the order of variables is irrelevant. Sometimes we are testing dynamic user interfaces, where the order of value selection in controls is relevant. There is a solution to this, and we will update this post with a link to that solution when it is available.

– – –

Check out the index of software testing series posts for more testing articles.

Organizing a software migration project


In an earlier post on requirements for migration projects, we defined a continuum of migration projects, ranging from completely new business processes to identical processes.

[diagram: the migration continuum]

In this post we will look at why companies approve identical-process migration, or duplication projects, and provide some tips on how to organize these projects.

When the project is defined with a single requirement of “duplicate that,” neither traditional nor agile requirements management methods will be very effective. The key to succeeding on a duplication project is to organize the work effectively. We can selectively apply techniques from these methods once we understand the motivation for and the details of the project.

Understanding why

We will make much better decisions about our project if we understand why it is being commissioned as a duplication project. Many executives and many companies are inherently risk-averse. They will tend to use an existing system as long as they can. Unfortunately, this means that there is often a compelling event that triggers the approval of a migration project. This compelling event will raise the stakes, and often define the project timeline for us.

When faced with this scenario, the least risky thing to do is to duplicate the existing system, and not spend precious time exploring ways to improve the application. This is a short-term, tactical decision. When the decision is made at a sufficiently high level, it is often immutable (but not really, as scope creep will happen anyway).

Upside downside

The upside (for us) to this pragmatic approach is that the scope is clearly defined, and the deliverables can be scheduled and managed with a minimum of surprises along the way. This is the opposite of agile (clumsy?). In our post on Alan Cooper’s definition of interaction design, we see that customers can’t tell us what they want. Kent Beck tells us that they may not know initially, but they will figure it out as the project iterates. With a duplication project, we are expressly being told “don’t do what we need, do what we ask.”

The downside for us is that scope creep is inevitable. The legacy software implementation represents an outdated way of solving the particular problems. The organization will have learned about how to make the particular processes better between the time that the legacy system was scoped and the time when the duplication project was started. In spite of an executive mandate to not change anything, many developers, designers, and managers will insist on incorporating a fix or two, adding a feature or two, and otherwise tweaking the project.

These suggestions are actually good ones – it is much more efficient to incorporate those changes while rewriting the application. When this happens, we need to reclassify the project as minor (or major) changes to the process, and address scope and budgeting issues associated with the new functionality. We also need to manage the expectations and relationship with our executive, who is not expecting changes to the behavior of the application.

Decomposition

Incremental process approaches teach us not to deliver a waterfall project. Agile processes are founded on the precept that our ability to identify the requirements improves as the project progresses (and that the requirements will change as users experience the work in progress). There are also tactical benefits to making multiple deliveries within the scope of a single project.

We can demonstrate momentum by creating multiple mini-releases. These releases also give us feedback to help us adapt if we are slipping our project schedule. With early feedback, we can adjust our staffing or schedule before it’s too late. We can also incorporate and distribute testing throughout the process.

The challenge is in determining how best to decompose the work.

Different approaches

There are two general ways to approach the decomposition of the project. The circumstances of the project will determine which is best for the particular project.

1. Decompose by legacy project module

A straightforward way to communicate the migration project schedule is to decompose the project based upon the existing functional modules of the legacy application. If the different modules of the legacy application are relatively disconnected, this approach can work reasonably well. The benefit of this approach is that it is very easy to communicate status and demonstrate progress. The downside is that it will only enable use cases that are restricted to single (or previously delivered) modules.

2. Decompose by actors and use cases

Another approach to decomposition is one that allows for gradual transition from the legacy system to the new application. First we identify the actors who use the system and the use cases they perform. We then determine the sequence in which actors should migrate to the new system, and scope the use cases by mini-release based upon the actors who perform them.

Other considerations

There are always other considerations that can have a massive impact on how we approach migration projects, and change management in general. Two factors that always require consideration in a software application duplication project are covered here.

Architectural entanglement

The underlying architecture of the new application may make one of the two approaches easier than the other. We want to choose an approach that doesn’t require us to build out the entire back-end for the first deployment. If the defined use cases require implementation of the majority of the architecture, then a modular decomposition may be more effective.

Data migration

Migrating existing data is almost always part of system duplication. The biggest risk is that the migration introduces data errors. The easiest way to prevent these errors is to round-trip the data: convert the data from the legacy schema to the new schema, then convert it back again and verify that nothing was lost. There is extra effort required with this approach, but it provides outstanding risk mitigation. When migrating data from multiple legacy data stores into a single data repository, this is even more important.
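
A sketch of what a round-trip check might look like, with schemas, field names, and converters invented purely for illustration:

# Hypothetical converters between a legacy schema and a new schema.
def to_new(legacy):
    return {"full_name": legacy["first"] + " " + legacy["last"],
            "zip": legacy["postal_code"]}

def to_legacy(new):
    first, last = new["full_name"].split(" ", 1)
    return {"first": first, "last": last, "postal_code": new["zip"]}

def round_trip_ok(legacy_row):
    # legacy -> new -> legacy should reproduce the original exactly.
    return to_legacy(to_new(legacy_row)) == legacy_row

assert round_trip_ok({"first": "Ada", "last": "Lovelace",
                      "postal_code": "78701"})

A first name containing a space would fail this check, which is precisely the kind of lossy mapping that round-tripping is designed to catch.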

Conclusion

Since we aren’t dealing with traditional requirements or value-based prioritization, our focus is on organization of the project. We still use decomposition and deliver multiple mini-releases, because of the tactical benefits in improved project execution.

We must sell the software first


We write a lot about value-driven prioritization of software requirements. It’s easy (when defining requirements) to forget that we have to sell the product before anyone gets any value from it. With internal-use software for large companies (like enterprise software, intranets, ERP systems), “sell it” means “get high user adoption rates.” High adoption rates are key to getting ROI when process improvement is one of the targets of the software.

Closing the sale means creating the perception of value. Creating actual value does not assure that anyone will believe there is value prior to actually using the software. Sustaining a perception of value over time (or across multiple customers) requires that there be actual value underneath the perceived value. A reasonable way to think about perceived value is to think of it as desirability.

Kathy Sierra posted a while ago on her blog, Creating Passionate Users, ten ways to make products more desirable.

Her list, which has more detail in her post:

  1. Pay attention to style.
  2. Pay attention to the emotional appeal.
  3. Show it in action…with real people.
  4. Don’t use pictures of generic shiny happy people that have become cliches.
  5. Make sure it’s clear to prospective users how this helps them kick ass.
  6. Appeal to as many senses as possible.
  7. Make it meaningful.
  8. Make it justifiable, so the user doesn’t have to feel guilty.
  9. Support a community of users.
  10. Never underestimate the power of fun.

Conclusion

When marketing the product, follow Kathy’s advice above.

When prioritizing the requirements for early releases, make sure to invest some time on the “surprise and delight” features.

When defining functional requirements, designing, scoping and implementing the solution, think about the aesthetics and usability of the application.

Managing scope creep is not a zero-sum game


Is your relationship with your dog better than it is with your supplier?

Dogs epitomize loyalty. They are social animals, as are people, and to them – relationships are important. Relationships are built on trust, but they are sustained with loyalty. And relationships are critical to having a successful product, process, or company.

We’ve never had a project where we didn’t have to address scope creep. As a supplier, we prioritize loyalty and relationships above incremental profitability. Project management techniques for addressing scope creep do us a disservice by starting with the presumption that resources have to be managed in a zero-sum game (every new feature must displace an existing feature). In this post we will talk about the opportunity to strengthen the relationship with our customer as part of addressing scope creep. It is not a zero-sum game.

The zero-sum game false premise

Old-school project managers talk about the “magic triangle” as if it were a law of physics: for a given project, you can fix any two of scope, cost, and time, but not all three. Today’s more enlightened project managers add the dimension of quality, or explicitly call out that quality is part of scope in the original triangle. It’s a simple idea, easy to communicate, and tactically effective.

The problem isn’t that the triangle is bad, the problem is that it is central to discussions of approving and incorporating changes in the requirements or scope for a software project. Drawing the triangle on the white board closes relationship opportunities as effectively as an off-color joke in a job interview.

When we cannot modify the budget, delivery schedule, or quality for a given scope of committed requirements, we can use the triangle imagery to help drive decisions within the meeting. However, our approach to handling requests for changes and additions to the requirements also impacts our relationship with our client. The people we interact with daily on a project usually cannot or will not consider changing the area of the triangle, but the right relationships can either convince them to do so, or persuade them to ask that it be done.

Loyalty sustains business relationships

David Maister has a good post responding to a reader’s question about how to handle scope creep. David asks us how we would respond if the scope-creeping request had been made by a family member or close friend. Most of us would answer that we would just do it, at least within reason. Why? Because we are optimizing on our relationships first, and time management second.

As long as we are bidding and meeting our goals for the profitability of a project, we should be willing to consider investing time or money (our dime) on the relationship. We spend money on marketing and pursuits and sales cycles. Think of this as the same thing. Of course there is a limit to how much we should be willing to invest – 1 week over the course of a 6 month project is not unreasonable.

When requests are too large (they blow our “relationship marketing budget”), we have to bring in the triangle. When too many requests come in and we use up our budget, we again have to employ the techniques of presenting the clients with tradeoffs and decisions. But until we reach that point, we should absorb the (controlled) costs as an investment in customer loyalty.

We can present the conversation in a number of different ways – depending on the individuals with whom we are working. But the ideas we want to get across are:

This is important to you. We’re willing to invest the time to make it happen without cutting something else, because our relationship is important to us.

Profitability sustains businesses

We can’t do “too much” of this. In addition to not wanting to be “walked on” or feel professionally abused, we also want to meet the financial goals of the project. That’s why we establish a budget up front. We keep that budget a secret, to make sure the customer doesn’t simply use it up; if we share the number, we run the risk of automatically getting enough scope creep to fill our “investment bucket.” Make sure the ROI for the project will still be met – that is what drives the hard line in the sand.

Reapplying David’s analogy, we would eventually “cut off” our deadbeat brother. With our budgeting in advance, we know exactly where we have to cut off the “free” scope creep.

Conclusion

The most important part of a customer relationship is the relationship. Look to scope creep as an opportunity to improve the relationship, without tearing the financial envelope for the project.

Once we’ve used up our loyalty budget, we can then apply the techniques taught in project management classes. They are good techniques, but they aren’t the most important thing to focus on.

What CMMI level should we use?


“What CMMI level should we use?” is not the right question, but it is the question most people ask.

A CMMI (Capability Maturity Model Integration) rating measures the maturity of a software development process. The goal of the measurement is to provide an assessment of the capability of a process with respect to creating software. Our foundation series post on CMMI provides background information, while this post focuses on the danger of misusing CMMI ratings.

1. The CMMI measurement is (mostly) a facade.

With the exception of a CMMI level five (Optimizing) process, having a CMMI rating doesn’t mean that the process is good. It means that the process is documented and managed (CMMI level two), standardized within the company (CMMI level three), or quantitatively measured (CMMI level four). Even CMMI level five status doesn’t tell us how good a process is, only that the team is actively focused on improving the process.

Having a documented process doesn’t make it a good process. This is the main flaw. If we documented a process that included steps like “developers create use cases” and “to certify a release, the developer installs the final build on his laptop”, we would qualify for CMMI level two. If we standardized on that poor process, we would reach the next CMMI level. And we could measure “lines of code written per hour” and other skewed quantifications of activity to achieve CMMI level four.

The CMMI measurement isn’t entirely worthless. Carnegie Mellon has a track record of doing really great and smart stuff, and CMMI is the best normalized, one-size-fits-all measurement anyone could come up with. The problem is that in order to make the measurement apply to everyone, it has been neutered to the point of not providing very much valuable information.

It is important to know that a company has a process and measures its performance. It provides very valuable insight to know when a company is also optimizing that process (CMMI level five).


CMMI alone does not tell us enough about the process.

Which team would we rather have developing software for us – a CMMI level 3 team, or a CMMI level 2 team? We absolutely cannot answer without more information. If our conversation with a potential outsourcer goes like this, then we have a problem:

“What is your CMMI level?”

“We operate our business at CMMI level four.”

“You’re hired!”

If however, our conversation goes more like this, we’re in a good place:

“Our technical guys have reviewed your process, and we like it. How long have your people been using it, and can you give us a couple references of companies for whom you’ve used this process?”

“Thank you. We received CMMI level four certification for this process two years ago. Since this is our standard process, all of our reference accounts have benefited from this process – you can contact any of them.”

“You’re hired!”

The key difference is that we’ve actually reviewed the process to determine its value. The CMMI rating gives us some assurance that the process is followed rigorously. ISO9000 certification, in the hardware world, suffers from the exact same problem. In a nutshell, ISO9000 requires companies to say what they do, and do what they say. It provides no insight into the value of what the company chooses to do.

2. CMMI ratings create a false sense of security.

It is very tempting for companies to advertise their CMMI level, especially outsourcing companies, and especially the global providers. These companies can capitalize on a human instinct: out of sight is out of mind. When companies outsource, they want to be able to “not worry about it”, and CMMI ratings can engender a false sense of confidence in the outsourcing provider.

In addition to the implicit presumption that a documented process is a good process, it is also easy to assume that people who follow a process are at least competent at what they do. There is no reason to presume this without reviewing the quality of their work. We should always talk to referrals to find out their level of satisfaction with an outsourcer.

When we’re managing our own team, it is easy to fall into the “our process is broken” trap. Very few people will tell you that it was their fault. We’ve not yet heard someone say “I was not smart enough to solve the problem.” or “If I had worked harder, we would have made it.” We have repeatedly heard “The process is broken.” and “I need better tools / a bigger team / more time and budget.”

The process may very well be broken. Eventually, Chicken Little was right. But achieving a CMMI level without fixing the process doesn’t fix the process.


3. Standardized processes can shackle innovators.

Many people thrive on having a structured environment and process in which to work. They actually do better work when given concrete tasks, discrete deliverables, and monitored timelines. Very few innovators work best this way. When creating differentiated products, an innovator may best be served with a differentiated process. As a result, people who tend to gravitate towards standardized processes tend to create standardized (me-too) products.

Many innovative companies, like IDEO or Frog, solve a wide range of problems, from software to electronics to toothpaste dispensers. A single unified process would be either stifling or irrelevant if all of those teams had to use it.


4. Focusing on the process means not focusing on the product

In a well-known play in American football, the quarterback throws a long pass to a wide receiver who is running down the field. The receiver attempts to catch the ball, avoid a tackle from the nearby defender, and keep running. The receiver will occasionally drop the football, because he is too focused on avoiding the tackle and on running. The commentators will point out that he needs to not think about getting tackled until he actually catches the football. The receiver does not have his eye on the ball, metaphorically speaking.

Driving and rewarding our teams for the CMMI level of the process they follow is like rewarding them for avoiding tackles and running. If this becomes a higher priority than writing great software (catching the pass), then they will do a methodical and rigorous job of following the process, and if we’re lucky, write great software along the way. Our goal is the great software – we need to make sure we are managing our teams with the software as the highest priority.

When teams are focused on writing great software, then a great process can help them. And CMMI can provide some affirmation (but not validation) that they are following a good process.

The right question

The right question is “How good is our process?”

A good process can make a good team very good, and can make a great team invincible. A good process helps an incompetent team by providing us good information about their incompetence. A bad process at best annoys good and great people, but more commonly it dilutes their efforts or even derails their projects. It is important to understand the quality of the process being followed by the team. And investments in improving the process can be worthwhile (subject to the 80/20 rule).

CMMI, unfortunately, cannot tell us if the team is competent. It cannot tell us if the process is good. It can only tell us that a process is being followed (or measured).

Conclusion

We’ve used the phrase necessary but not sufficient repeatedly when we describe important elements of software product success. CMMI ratings fall into the same category.

In fairness, a team with a CMMI level five process is actively applying their ongoing analysis (CMMI level 4) to improving their process. This is the one piece of CMMI data from which we are more likely to infer that the process is a good one. However, as the SEI itself points out in its documentation:

Reaching CMMI level 4 or 5 for a process area is conceptually feasible but may not be economical except, perhaps, in situations where the product domain has become very stable for an extended period of time.

With our focus on great software, we have to prioritize innovation, and more specifically differentiated innovation. By definition, this precludes our being in a “stable” product domain.

CMMI ratings are not what drive us.

Foundation Series: CMMI Levels Explained


CMMI is the initialism for Capability Maturity Model Integration.

CMMI is a numeric scale used to “rate” the maturity of a software development process or team. Maturity can be thought of like enlightenment. An immature process is not much different from the old “infinite monkeys” yarn – maybe we get it right, but probably not. A fully matured or enlightened process not only does it right, but improves itself over time.

The Software Engineering Institute (SEI) at Carnegie Mellon (Go Tartans! BSME90) created the CMM model for software engineering in the late 80’s and early 90’s. In an effort to consolidate multiple CMM models for different process areas, the SEI team created the CMMI in 2002. In this post, we will understand what each level represents.

Technically, the name of the model is the Capability Maturity Model Integration for Software Engineering (CMMI-SW), but in practice people just say CMMI. The 645 page document can be found on the CMU SEI site.


Software requirements for migration projects


Joy Beatty, director of services at Seilevel, has published an excellent two-part post about gathering requirements for migration projects (here and here). A common enterprise software project is the replacement of a legacy system – duplicating functionality in a new system that exists in an old system. This type of project usually has very different constraints than a “new application” project. We will look at the characteristics of migration projects to understand how we should approach them to assure success.

The techniques that have been designed for development of new applications do not always work when migrating an existing software system to a new platform. In a migration project, change management plays a key role in determining how we manage the requirements, prioritization, and scheduling of delivery. The impact that change-management has is determined by where the migration project lies on the migration continuum.

The migration continuum

We can think of every software development project as being a migration. There is always a before, and there is always an after. Every project is a migration from before to after. In the following diagram, we show the continuum of migration projects. On the extreme left we are running projects where there is no precedent for the new software. On the extreme right, we are rewriting an existing application, designed to do exactly the same things, in exactly the same way.
[diagram: the migration continuum]

It’s rare that the before state is “no process.” New software development is really a migration from “some process” to “some other process.” The projects that Joy describes are on the right side of the spectrum: migrations from “some process” to “the same thing,” or to processes with very minor changes.

We can break this continuum up into four separate areas. An individual project may not fit squarely into one of the areas; it may straddle the boundary between two. When faced with one of these projects, we should use our judgment to apply ideas from each area.

Completely new process.

This is uncommon. We have a completely new process when we are implementing software to support a business process that is completely new to our customers.

Imagine a company that sells tires directly to consumers over the internet. If that company decides to establish a network of distributors and resellers, then new processes will be introduced for the company. Sales channel management, product distribution, promotions, joint marketing, etc. A company that introduces an online ordering website to augment their catalog sales will also introduce a series of new processes.

If we are building software in conjunction with a change to our customer’s business, then it is a completely-new-process project.

Major process changes

This is the most common “new software” project. Enterprise customers rarely engage software companies to help them “do something new” – they often engage us to “do something better.” IT departments historically are not innovators for their companies, and they are generally treated as cost centers, not profit centers. IT projects are therefore often targeted at reducing the costs of existing processes. When a company talks about software to support a new process, they almost always mean a radically different process.

A company that has traditionally published a quarterly price book for their products has an initiative to create customer-specific pricing for their products. This is a process change, and a major one. Previously, someone determined prices for all customers (perhaps with different prices for different groups of customers), and those prices were communicated via the mass-publishing of a price book.

In the new version of the pricing process, prices are determined for each customer, perhaps to reward higher volume or more loyal customers with special pricing. Pricing may change in the new system on a daily basis (perhaps raising prices as inventory levels drop for a product). The communication mechanism may be very different – customers may need to log in to a web site to get current quotes, or they may have to call a sales rep who has access to an internal pricing system.

We can validate for ourselves that the process is pre-existing by stepping back and taking a high level view. In both processes, prices are determined with a goal of maximizing profits (or market share). Prices are communicated to customers. The differences are “in the details” even though they can mean significant changes, and have significant value to customers.

Minor process changes

Alan Cooper suggests that most customers are not innovators (at least with respect to software). Their ideas are evolutionary, not revolutionary improvements to existing processes.

A company that sells products internationally has sales teams in each of their target-market countries. The company’s product managers determine which products are available in each country. When a salesperson wants to sell a product, they have to confirm that the product is available for sale in that country, then contact a pricing specialist to determine the proper price to charge in the local currency.

In the modified process, an intranet page is created that accesses a database of country-availability information for each product. The salesperson can now access the website, find the product (and its country-specific part number) and the proper price to charge in the local currency.

This is a minor process change. Note that the absence of software in the previous process does not mean that the process did not exist. It has known constraints, objectives and goals. It is relatively straightforward to define requirements for this type of project, as Joy points out.

Identical process

We are migrating to an identical process when there is already a system in place, and the goal is to replace it with the same system on a new platform. This is common when migrating to new software and hardware platforms (moving from a standalone application onto a department web server, consolidation of SAP instances, etc.). These are the IT-driven consolidation or cost-reduction projects, and they have the highest propensity to be waterfall projects, as stakeholders will have a driving objective of system replacement (or sunsetting). “Must do everything the existing system does” is the common theme.

Often, scope creep affects these projects, as there are always opportunities to improve existing systems. The customer may have been living with inconveniences that were too expensive to fix in the old system. Someone will try to attach these minor improvements to the migration project like congressmen attaching riders to a budget bill.

This isn’t a bad idea, as the improvements are usually valuable, and may have good ROI characteristics if incorporated into the scope of the migration project. If these requests are ignored, they may become lost opportunities. What we have to make sure we remember is that these improvements are “minor process changes” and we may need to manage them distinctly or differently than the other areas of the project.

How do we use this continuum framework?

One size does not fit all when it comes to software development processes.

Stakeholder priorities and objectives will vary. The level of effort in different areas of the process will vary. For example, interaction design efforts will not be well received in an identical process project, but they are invaluable for major process changes or the introduction of new processes.

Different techniques can be used in different places on the migration continuum. As Joy points out, in an identical process project, someone can even read the existing source code to reverse engineer a particular required behavior.

We talk more about how best to approach projects in different places along the continuum in Organizing a software migration project.