In an earlier post on requirements for migration projects, we defined a continuum of migration projects, ranging from completely new business processes to identical processes.
In this post we will look at why companies approve identical-process migration, or duplication projects, and provide some tips on how to organize these projects.
When the project is defined with a single requirement of “duplicate that,” neither traditional nor agile requirements-management methods will be very effective. The key to succeeding on a duplication project is to organize the work effectively. Once we understand the motivation for and the details of the project, we can selectively apply techniques from these methods.
We will make much better decisions about our project if we understand why it is being commissioned as a duplication project. Many executives and many companies are inherently risk-averse. They will tend to use an existing system as long as they can. Unfortunately, this means that there is often a compelling event that triggered the approval of a migration project. This compelling event will raise the stakes, and often define the project timeline for us.
When faced with this scenario, the least risky thing to do is to duplicate the existing system, and not spend precious time exploring ways to improve the application. This is a short-term, tactical decision. When the decision is made at a sufficiently high level, it is often treated as immutable (though in practice, scope creep will happen anyway).
The upside (for us) to this pragmatic approach is that the scope is clearly defined, and the deliverables can be scheduled and managed with a minimum of surprises along the way. This is the opposite of agile (clumsy?). In our post on Alan Cooper’s definition of interaction design, we see that customers can’t tell us what they want. Kent Beck tells us that they may not know initially, but they will figure it out as the project iterates. With a duplication project, we are expressly being told “don’t do what we need, do what we ask.”
The downside for us is that scope creep is inevitable. The legacy software implementation represents an outdated way of solving the particular problems. The organization will have learned about how to make the particular processes better between the time that the legacy system was scoped and the time when the duplication project was started. In spite of an executive mandate to not change anything, many developers, designers, and managers will insist on incorporating a fix or two, adding a feature or two, and otherwise tweaking the project.
These suggestions are actually good ones – it is much more efficient to incorporate those changes while rewriting the application. When this happens, we need to reclassify the project as minor (or major) changes to the process, and address scope and budgeting issues associated with the new functionality. We also need to manage the expectations and relationship with our executive, who is not expecting changes to the behavior of the application.
Incremental process approaches teach us not to deliver a waterfall project. Agile processes are founded on the precept that our ability to identify the requirements improves as the project progresses (and that the requirements will change as users experience the work in progress). There are also tactical benefits to making multiple deliveries within the scope of a single project.
We can demonstrate momentum by creating multiple mini-releases. These releases also give us feedback to help us adapt if we are slipping our project schedule. With early feedback, we can adjust our staffing or schedule before it’s too late. We can also incorporate and distribute testing throughout the process.
The challenge is in determining how best to decompose the work.
There are two general ways to approach the decomposition of the project. The circumstances of the project will determine which is best for the particular project.
1. Decompose by legacy project module
A straightforward way to communicate the migration project schedule is to decompose the project based upon the existing functional modules of the legacy application. If the different modules of the legacy application are relatively disconnected, this approach can work reasonably well. The benefit of this approach is that it is very easy to communicate status and demonstrate progress. The downside is that it only enables use cases that are restricted to single (or previously delivered) modules.
2. Decompose by actors and use cases
Another approach to decomposition is one that allows for gradual transition from the legacy system to the new application. First we identify the actors who use the system and the use cases they perform. We then determine the sequence in which actors should migrate to the new system, and scope the use cases by mini-release based upon the actors who perform them.
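The actor-based scoping described above can be sketched programmatically. This is a minimal illustration, not a prescribed method; the actor names, use cases, and the `plan_mini_releases` helper are all hypothetical, and the rule used here (a use case ships in the release where its last-migrating actor arrives) is one reasonable assumption among several.

```python
# Hypothetical use cases mapped to the actors who perform them.
USE_CASES = {
    "submit_order": {"customer"},
    "check_order_status": {"customer", "support_rep"},
    "issue_refund": {"support_rep"},
    "run_sales_report": {"analyst"},
}

# Assumed sequence in which actors will migrate to the new system.
MIGRATION_ORDER = ["customer", "support_rep", "analyst"]

def plan_mini_releases(use_cases, migration_order):
    """Assign each use case to the mini-release in which its
    last-migrating actor arrives, so that every actor who performs
    the use case is already on the new system when it ships."""
    releases = {actor: [] for actor in migration_order}
    for name, actors in use_cases.items():
        last_actor = max(actors, key=migration_order.index)
        releases[last_actor].append(name)
    return releases
```

For the sample data above, `submit_order` lands in the first (customer) release, while `check_order_status` waits for the support-rep release because both actors must be migrated before it can move.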
There are always other considerations that can have a massive impact on how we approach migration projects, and change management in general. Two factors that always require consideration in a software application duplication project are covered here.
The underlying architecture of the new application may make one of the two approaches easier than the other. We want to choose an approach that doesn’t require us to build out the entire back-end for the first deployment. If the defined use cases require implementation of the majority of the architecture, then a modular decomposition may be more effective.
Migrating existing data is almost always part of system duplication. The biggest risk is that the migration introduces data errors. The most reliable way to catch these errors is to round-trip the data: convert the data from the legacy schema to the new schema, then convert it back again, and verify that the result matches the original. There is definitely extra effort required with this approach, but it provides outstanding risk mitigation. When migrating data from multiple legacy data stores into a single data repository, this is even more important.
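A round-trip check can be sketched as follows. Everything here is illustrative: the schemas, the `to_new_schema` / `to_legacy_schema` converters, and the comparison are placeholder assumptions standing in for whatever real conversion logic the migration uses. The point is the shape of the check, not the particular fields.

```python
def to_new_schema(legacy_record):
    """Hypothetical forward conversion: legacy schema -> new schema."""
    return {
        "customer_id": legacy_record["CUST_ID"],
        "full_name": f'{legacy_record["FNAME"]} {legacy_record["LNAME"]}',
    }

def to_legacy_schema(new_record):
    """Hypothetical reverse conversion: new schema -> legacy schema."""
    first, _, last = new_record["full_name"].partition(" ")
    return {
        "CUST_ID": new_record["customer_id"],
        "FNAME": first,
        "LNAME": last,
    }

def round_trip_check(legacy_records):
    """Convert each record forward and back; collect any record
    that does not survive the round trip unchanged."""
    failures = []
    for record in legacy_records:
        round_tripped = to_legacy_schema(to_new_schema(record))
        if round_tripped != record:
            failures.append((record, round_tripped))
    return failures
```

Even this toy example surfaces a real migration hazard: a first name containing a space (“Mary Jane”) round-trips incorrectly because the reverse conversion splits on the first space, which is exactly the kind of silent data corruption the check is meant to expose before cutover.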
Since we aren’t dealing with traditional requirements or value-based prioritization, our focus is on organization of the project. We still use decomposition and deliver multiple mini-releases, because of the tactical benefits in improved project execution.