Juggling The Elements of An Iteration


You expect analysis to happen before design, and both to happen before implementation and testing. But how much should these activities be staggered? When a project is run with monthly releases, it might seem logical to have each group working on a different release: the test team working on the current release (3), the developers on the next release (4), and the architects and analysts on releases 5 and 6, respectively.

If your team is this staggered, you have a problem: from the time the analyst documents a requirement, it has to wait its turn through architecture, development, and test, one monthly release at a time, so four months pass before it is released.

The Pipelining Anti-Pattern

Mike Griffiths has an awesome article on the LeadingAnswers blog where he describes the Pipelining Anti-Pattern.

A pattern, in the software world, is a generalized description of a design, a process, or some other solution to a common problem. Patterns are intended to be re-used. Anti-patterns are the same thing – except that they describe common bad practices. An anti-pattern can be a very effective tool for describing a bad way to do something. It is especially effective when the anti-pattern is prevalent – most readers start out with “oh no – that’s what we do.” It also helps when the anti-pattern is followed with a prescription for a good way to address the common problem, after pointing out the weaknesses and flaws in the anti-pattern. Mike does a great job with this one.

Pipelining, or as one of Mike’s commenters calls it, full-pipelining, results in a project being managed as a series of tiny waterfalls – handing off “approved requirements” to yield “completed designs” followed by implementation, and ultimately by tests. If you want a more detailed understanding, and have been lucky enough to never live through this, now would be a good time to check out Mike’s article.

This can result in months of delay between when a need is identified and when it is satisfied. During those delays, requirements change. Teams focus on different releases. Switching gears to try and collaborate is both hard and expensive.

There has to be a better way.

Completely Insane Juggling

At the opposite extreme, everyone is working on the same thing at the same time. An analyst meets with users in the morning, gathers some requirements, and that afternoon the developers begin implementation and the test team starts defining the validation tests.

Also during the afternoon, the analyst discovers that the requirement was half-baked – he missed several pieces. He completes his analysis, writes it up and shares the changes with the rest of the team the next morning. The developers have to completely throw away the design and the work they’ve already done – so they start over.

By the next day, the analyst has discovered that there are bigger fish to fry – after reviewing his preliminary findings with the business sponsor, he concludes that there are other far more valuable goals to be addressing. By the time he gets those defined, the developers have already finished the lower-value implementation, and don’t have time to get the most important features into this release.

We’ve lost an important premise – that the most important things get built first. And we’ve got some serious inefficiencies, as team members are forced to discard work as their priorities and requirements change underneath them.

There has to be a better way.

Imagine driving on a cliff-side highway. On one side you have steep rock walls, and on the other, a long, quiet fall. You really need to avoid either extreme.


Finding a Middle Ground

Some staggering is required. There’s a reason the pipelining anti-pattern came into existence in the first place: there are benefits to waiting until you know what to do before you start doing it. But if you wait too long, the cost of delay ends up exceeding the benefits of efficiency. At the same time, all of the roles on the team require collaboration – keeping people distinctly separated introduces the inefficiency of context switching, and taken to extremes, it introduces unwarranted delays.

The main body of Mike’s article reads almost like an encouragement to go with the “fully synchronized” team. However, in the comments, there is some clarification and acknowledgment that having analysts run 1/4 to 1/2 iterations ahead of the rest of the team is a good balance in practice. We’ve seen that work effectively. We take an approach that is a little bit different, but nets out to about the same thing.

Synchronizing Activities

First, we look at synchronizing development and testing. Following a continuous integration methodology, we keep development and test in sync. Part of the development process must be the developers testing their own code. While we may rely on a QA role to assure that the code addresses the needs, the developers must assure that the code does what they intend. That way, any issues that arise out of QA are issues of misinterpreting the spec – not bad code. This helps avoid the dreaded code freeze.
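As an illustration of that split in responsibility, here is a minimal sketch – the apply_discount function and its discount rule are invented for the example – of the kind of developer-owned test that runs with every integration, proving the code does what the developer intended before QA checks it against the spec:

    # Developer-owned tests: they assert that the code does what the developer
    # intended, so anything QA finds later is a question of interpreting the
    # spec rather than a plain coding bug. apply_discount() and its
    # "10% off orders over 100" rule are invented for this illustration.
    import unittest

    def apply_discount(order_total):
        """Apply a 10% discount to orders over 100 (illustrative rule only)."""
        if order_total > 100:
            return round(order_total * 0.9, 2)
        return order_total

    class ApplyDiscountTest(unittest.TestCase):
        def test_large_order_gets_discount(self):
            self.assertEqual(apply_discount(200.00), 180.00)

        def test_small_order_is_unchanged(self):
            self.assertEqual(apply_discount(50.00), 50.00)

        def test_boundary_order_is_unchanged(self):
            # Exactly 100 is not "over 100"; that was the developer's intent.
            self.assertEqual(apply_discount(100.00), 100.00)

    if __name__ == "__main__":
        unittest.main()   # run locally before check-in, and again by the CI build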

Second, QA can begin defining the requirements-validation tests as soon as there is a specification to validate. This happens in parallel with development. Test cases should be defined from use case scenarios. And use cases are implementation agnostic – so you don’t have to wait for the code to be completed. You need to know how the UI is designed to create test scripts, but not to create the test plan. So there may be a little lag in execution (of the test team, relative to the development team), but there is very significant overlap.
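To make “implementation agnostic” concrete, here is a sketch of a test-plan entry expressed purely in terms of a use case scenario. The use case, the steps, and the class name are hypothetical; the UI-level test script would be bound to an entry like this only once the screens exist.

    # A test-plan entry derived from a use case scenario. Nothing here refers
    # to screens, buttons, or URLs, so QA can write it as soon as the spec
    # exists; the UI-level test script is attached to it later.
    from dataclasses import dataclass, field

    @dataclass
    class ScenarioTestCase:
        use_case: str                  # which use case this validates
        scenario: str                  # main flow or a named alternate flow
        preconditions: list = field(default_factory=list)
        steps: list = field(default_factory=list)   # actor intent, not clicks
        expected_outcome: str = ""

    # Hypothetical example entry in the test plan
    return_item = ScenarioTestCase(
        use_case="Return a purchased item",
        scenario="Main flow: item returned within 30 days",
        preconditions=["Customer has a completed order less than 30 days old"],
        steps=[
            "Customer requests a return for one line item",
            "System verifies the order is within the return window",
            "System issues a refund to the original payment method",
        ],
        expected_outcome="Refund recorded and customer notified",
    )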

Third, we deal with the tough one – defining the spec. Mike sums this up as two activities – business modeling and analysis/requirements. His approach to exploring the tasks (versus the roles) really helps sidestep any turf wars – or at least sets the stage for rational discussion of the things that need to happen. We’ll keep this loose definition and interpret it as follows: business modeling is determining what needs to be accomplished (definition of goals, prioritization, etc.); analysis/requirements is defining user stories (or use cases) and the supporting requirements (or specifications).

Before development begins, you have to explore the breadth of the project at some level of depth, in order to both scope and prioritize. Many teams call this “iteration 0.” With that prioritization in place, modeling and analysis can begin in detail for the first release. This happens before the development team starts developing functionality. If the whole team starts at the same time, this is a great time to get your test harness, source code control, and other operational infrastructure in place. Developers can also use this time to get conversant in the domain, if they are not already.

This ultimately results in “just in time requirements”, with the development team as close behind as possible.
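One way to picture the resulting stagger is sketched below; the four-week iteration and the week-by-week split are assumptions, chosen to match the quarter-to-half-iteration lead discussed above.

    # A hypothetical four-week iteration with analysis running roughly half a
    # cycle ahead of development, and development and test kept in sync.
    # The week numbers are assumptions used only to illustrate the stagger.
    schedule = {
        "Iteration 0": ["scope and prioritize the breadth of the project",
                        "set up test harness, source control, and the build"],
        "Weeks 1-2":   ["analysis: detail the highest-priority stories"],
        "Weeks 3-4":   ["analysis: detail the next stories",
                        "development (with developer tests) of the week 1-2 stories",
                        "QA: define validation tests from those use case scenarios"],
    }

    for phase, activities in schedule.items():
        print(phase)
        for activity in activities:
            print("  -", activity)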

There is extensive collaboration with developers and testers during spec creation – helping to assure that what is being requested is both feasible and clearly understood. This collaboration also enables initial scoping, so that implementation tasks can be scheduled.

As soon as something is believed to be understood well enough, the development team can start. Our experience has been that this is usually a couple of weeks after analysis begins – consistent with the “half a cycle” comments on Mike’s article. We manage changes to requirements based upon the complexity of the proposed change. If a change is too large, or the release too close, the change gets incorporated into the next release. There are a couple of benefits to this approach. First, expectations are set with developers, so that they aren’t pressured in a way that might jeopardize quality for a given release. Second, expectations are set with customers, so that what will be delivered in a particular release is communicated and managed effectively.
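The triage rule itself is simple enough to sketch in a few lines; the complexity cutoff and freeze window below are illustrative assumptions, not numbers from our process.

    # A sketch of the change-triage rule described above: changes that are too
    # large, or that arrive too close to the release, roll to the next release.
    # The thresholds are assumptions for illustration only.
    from datetime import date, timedelta

    COMPLEXITY_CUTOFF = 5               # e.g. story points, above which a change is "too large"
    FREEZE_WINDOW = timedelta(days=7)   # "too close" to the release date

    def target_release(change_complexity, release_date, today, current_release):
        """Return the release number the proposed change should go into."""
        too_large = change_complexity > COMPLEXITY_CUTOFF
        too_close = (release_date - today) < FREEZE_WINDOW
        if too_large or too_close:
            return current_release + 1   # defer: protects quality and expectations
        return current_release

    # Hypothetical usage
    print(target_release(change_complexity=8,
                         release_date=date(2008, 7, 1),
                         today=date(2008, 6, 20),
                         current_release=12))   # -> 13 (deferred: too large)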

My personal experience is that this is more effective, results in faster response to change, and when presented professionally, results in higher levels of customer satisfaction than “drop everything change it now” approaches.

What about UX?

We’ve been talking in terms of requirements, but the same general approach applies to user experience efforts. Initial ethnographic studies and branding work (layout, look and feel, information architecture, internal standards) can be started in iteration 0, with mockups, prototype development, and usability studies (at varying degrees of fidelity) happening in parallel with specification development. And like spec development, this involves collaboration with both users and developers.


One thought on “Juggling The Elements of An Iteration”

  1. I found web annotation to be indispensable for synchronizing activities across all project team members. My tool of choice for web annotation is Protonotes – http://www.protonotes.com/ – which my teams use to add notes to our constantly evolving html prototypes.
