How do you prevent analysis paralysis? That’s the question Barbara opens up for discussion on the Business Analyst Blog. The answer is fairly simple: you stop as soon as you believe you have something that reasonably covers the goals (or use cases) you are trying to address. When you have requirement completeness, you move on. This answer is both naive and enlightened, especially when you consider the benefits of an agile development process.
The Naive Answer
The vicious cycle that Barbara describes is “Think, research, document. Think, research, document.”
The only way to stop is to simply stop. But when? People who are good at analysis are also very good at getting down into the details. So how do you stop going into the details? And when do you know that you’ve uncovered everything?
The approach you should use is one that builds on the structured requirements approach for documenting requirements.
The “think, research, document, repeat” process is the process of requirements elicitation. You gain an understanding of what is required, you document and confirm it, and you move on.
The same “stopping” approach applies to developing use cases and developing functional requirements. We’ll address it in terms of use cases, both because the problem seems to happen more with use cases, and to simplify the language in our writing.
There are really two questions that have to be answered:
- Do we understand the needs well enough (not missing anything, no mistakes, etc)?
- Have we documented our (current) understanding of the requirement to a sufficient level of detail?
Determining the right level of detail is not really an analysis paralysis problem. The right level of detail is determined primarily by the level of domain expertise of the consumers of the documents. Less expertise leads to more required detail. Further, the available means of communication affect the level of detail. When part of the team is several thousand miles and several hours away from the people documenting requirements, there is more dependence on the artifacts as communication vehicles. Again – a need for more detail. But that isn’t the point of this article, so forget about it for now.
We review use cases for completeness. For a given customer goal, use cases are identified that enable the goal. The set of use cases must completely enable the goal. And use cases that do not support the goal should not be defined. As soon as the set of use cases is identified that completely address the goal, stop. Don’t identify any more use cases.
Identify the normal flow, and then the alternate flows and exception flows that cover the most common cases. Don’t try to identify the alternate flows for rare situations, as you won’t implement them anyway. Use the 80/20 rule. If the additional flow does not happen frequently enough to have a material impact on the goal, don’t document it. If the use case scenario is rare, don’t worry about it. Don’t identify any more flows.
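As a rough illustration of the 80/20 cutoff, here is a minimal sketch in Python that keeps only the most frequent flows until they cover most of the real-world occurrences. The flow names, frequencies, and the 80% target are all invented for the example, not from the article.

```python
# Hypothetical sketch: applying the 80/20 rule to decide which use case
# flows are worth documenting. Flow names and counts are invented.
def flows_to_document(flows, coverage_target=0.80):
    """Keep the most frequent flows until they cover `coverage_target`
    of all observed occurrences; the rare remainder goes undocumented."""
    total = sum(freq for _, freq in flows)
    selected, covered = [], 0.0
    for name, freq in sorted(flows, key=lambda f: f[1], reverse=True):
        selected.append(name)
        covered += freq / total
        if covered >= coverage_target:
            break
    return selected

flows = [
    ("normal checkout", 700),
    ("saved-card checkout", 200),
    ("expired card", 60),
    ("gift-card split payment", 30),
    ("currency mismatch", 10),
]
print(flows_to_document(flows))  # the two common flows cover 90%
```

The point of the sketch is the stopping condition: once the common flows account for enough of the goal, you stop enumerating, exactly as the article advises.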
I haven’t seen people get trapped in the analysis of “what happens regularly?” I see people get trapped in the analysis of “what could conceivably happen?” So you just don’t do it.
The Enlightened Answer
Agile development processes were designed with the presumption that requirements must change, primarily because they cannot be defined with any confidence until someone already has a solution in hand. This is oversimplifying, but the gist of it is that these techniques were designed to protect implementation teams from ill-defined and ever-changing requirements. And they work.
One benefit of incremental development is that you deliver the most valuable stuff first. You prioritize with an eye towards maximizing delivered value. And that means you will implement the use cases that are the most valuable. Those are the ones that deliver the highest proportion of the value of the goals that they support. And delivering that value means that they happen regularly. You use the 80/20 rule to avoid implementing the low-value, rare use case flows.
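To make the prioritization concrete, here is a minimal sketch of greedy release planning: fill an iteration with the highest-value use cases that still fit its capacity, and let everything else wait. The use case names, value scores, point estimates, and capacity number are all assumptions for illustration, not from the article.

```python
# Hypothetical sketch: greedy release planning. Pick use cases in
# descending value order until the iteration's capacity (in story
# points) is exhausted; the low-value remainder waits. Numbers invented.
def plan_release(use_cases, capacity):
    planned, remaining = [], capacity
    for uc in sorted(use_cases, key=lambda u: u["value"], reverse=True):
        if uc["points"] <= remaining:
            planned.append(uc["name"])
            remaining -= uc["points"]
    return planned

use_cases = [
    {"name": "place order",     "value": 90, "points": 5},
    {"name": "track shipment",  "value": 60, "points": 3},
    {"name": "change password", "value": 40, "points": 2},
    {"name": "gift wrap",       "value": 10, "points": 3},
]
print(plan_release(use_cases, capacity=8))
```

The design choice mirrors the article’s argument: because the most valuable (and most common) use cases are scheduled first, the obscure flows naturally fall out of early releases without anyone having to analyze them.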
So even if someone did successfully define all of the conceptually possible flows through a use case, you wouldn’t implement them all. And even if you did implement some of the obscure flows, you would do them later – only after delivering all of the common, regular, valuable use cases and flows.
Conclusion
The enlightened answer is the same as the naive one. Capture and review the requirements for the common cases. Feel comfortable that they are at least generally correct. Implement them. Get feedback from the users, after prototyping or implementing the solution. Fix your mistakes in the next release, if fixing those mistakes is more valuable than living with them for a short while and implementing something else.
BA pairing prevents this from happening. Also, it is at the top of the agenda for an IM to make sure that the flow of stories/requirements is maintained.
For me, it’s all about time; it’s the one resource you can’t replace. So, we have the creation of the “time-box” (have to look up who first used the term and concept…). Get a scope agreed to, and then go to it for an also-agreed-to period of time; 4 weeks works for me. Presuming reasonable access to business SMEs and no debilitating project constraints, anyone can deliver a whole lot of requirements in 4 weeks.
Hey guys, thanks as always for reading and commenting.
Rajeev – how has BA pairing worked for you on this?
David – definitely agree that timeboxing the requirements cuts this off at the knees too. I just wish there were a ‘one size fits all’ number we could use with a smaller time increment. For example – “spend 8 hours eliciting and writing a ‘complex’ use case.” I think we can all apply limits like that, but there isn’t a magic number we can say applies to everyone.