The Agile Dragon

When Alan Cooper and Kent Beck debated the benefits of eXtreme Programming versus Interaction Design, they disagreed on a lot of things. One thing they agreed on is that Agile processes are designed to minimize the impact of changing requirements. Cooper believes that it makes more sense to minimize future change by understanding the requirements better up front. Beck believes that the requirements cannot be understood by the team until something is delivered. Beck’s point is that the customer doesn’t understand the requirements until he has something in his hands. We’ve shown how this is both a strength and a weakness for Agile in the real world. In The Hobbit, the dragon Smaug was missing a scale on his belly, which made him vulnerable. Agile processes have a similar weak spot.

Irrelevance of Change

The interesting thing isn’t that Agile teams need changes in the requirements to succeed; the process is relatively immune to change. By waiting until the last responsible moment to elicit requirements, we don’t care how much those requirements change before we get them.

Relevance of Change

There’s a chink in Agile’s armor. Imagine we are halfway through a project, using an Agile process.

  • We know that requirements change.
  • We are about to gather requirements for the next iteration.
  • We waited to gather these requirements because they didn’t exist until the customer saw the last release.

If change happens, and we’ve finished half of the application, it stands to reason that half of the changes will be to stuff we’ve already built.

When 90% of the application is complete, won’t 90% of the changes be to stuff we’ve already done (and now have to do again)?

No Worse Than a Waterfall

Sure, with a waterfall model, 100% of the changes will be to stuff we’ve already built, because we (think we) finish it before we deliver it. With an Agile process, roughly half of the changes should be to functionality we’ve already delivered, assuming a constant rate of output by the development team.
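The arithmetic above can be checked with a toy simulation. The model (an assumption for illustration, not from the original post beyond its own premises) is that changes arrive uniformly over the project timeline, and a change hits already-delivered work with probability equal to the fraction of the application complete when it arrives:

```python
import random

def expected_rework_fraction(num_changes=100_000, seed=42):
    """Toy model: changes arrive uniformly over the project timeline.

    With a constant rate of output (as the post assumes), the fraction
    complete at any moment equals the elapsed fraction of the timeline,
    so a change arriving at time t hits delivered work with probability t.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_changes):
        arrival = rng.random()        # when the change arrives (0..1)
        completed = arrival           # fraction of the app built so far
        if rng.random() < completed:  # does the change land on built work?
            hits += 1
    return hits / num_changes

print(expected_rework_fraction())  # analytically, E[t] = 0.5 under this model
# Under waterfall, everything is "built" before delivery, so the
# corresponding figure is 100% of changes hitting finished work.
```

The simulated value converges on 50%, matching the post’s back-of-the-envelope claim; the 90%-complete case follows the same logic, since changes arriving at that point hit delivered work about 90% of the time.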

Good Requirements Matter

I don’t accept the premise that the customer doesn’t know what they want. Customers know what they want – higher profits, greater market share, etc. What customers don’t know is what software they want. With the right requirements, and the resulting implementation, they don’t care. Or at least, they shouldn’t care. Documenting these requirements is very hard. Most people can’t do it well. That doesn’t mean the job can’t be done. Most people can’t run a four-minute mile, or transpose music up a step when sight-reading. That doesn’t mean it can’t be done.

Cooper believes that the interaction design process is the ideal one for writing software requirements. He may be right; the jury is still out. There are only two statements that we can make about the “best” software development process.

  1. The best process includes elements of each of the processes that currently claim to be best.
  2. The best process for project A (or team A, or company A) is not the best process for project B.

Conclusion

Without proclaiming what the best process is, we know that some things are very powerful.

The “best” process is going to combine those strengths, whatever it’s called.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

10 thoughts on “The Agile Dragon”

  1. There is one important advantage to Agile methods that you have missed: Agile methods deliver working software faster. The end of the first iteration (usually somewhere between one and four weeks) results in a little bit of software that actually works. This has the staggering advantage that the customer/sponsor can take that little bit of usable software… and use it! Even if the client waits for a few iterations before the software has enough functionality, and only then uses it, the advantage is still huge. In any process where all the requirements are gathered up front, regardless of how or what is done with them afterwards, there is a large delay in getting the first bit of usable working software in the client’s hands.

    This delay _costs_. It costs both money and opportunity. Regardless of how stable or unstable the requirements are, agile methods deliver a return on investment much, much sooner than any other approach to building software. As you pointed out:

    “Customers know what they want – higher profits, greater market share, etc.”

    For both of these desires, the best way to get them is by the reduced cycle time that using an agile method provides.

  2. Hi, Scott.

    Your theory sounds reasonable, but it doesn’t match my experience at all. Looking through the data from a couple of previous projects, most items put in weekly iterations are new features. That matches my gut feel and my understanding of product managers: they want progress, not rework.

    Sure, continually adding new features to a working app involves touching old code, but modern tools (like automated refactoring) and practices (like automated unit and acceptance tests) make this easy. When Cooper made his argument, he compared writing code to pouring concrete; to whatever extent that was true then, modern approaches make that unnecessary now. And that environment gives developers the confidence not to spend a lot of time and effort writing future-proofing code. If they can accommodate change equally well at any point, why not wait?

    Another way to look at it is in terms of information. At the beginning of the project, you know the least you will ever know about the problem. It’s not even theoretically possible to gather all the information you need up front. Even if you perfectly understand the problem, the technology and your users; even if you perfectly imagine the consequences of every design choice; even if your execution is perfect: even then, the world is busy changing around you. New tech is released; competitors launch new projects; Congress passes new laws; people crash planes into buildings.

    Given that change is inevitable and, barring utter perfection (and massive expense), the team will learn things as the project progresses, what then? Personally, I think the most responsible thing is to make decisions at the last responsible moment. That doesn’t mean not doing interaction design, user research, market research, user testing, and all the meaningful activities that Cooper, et al, advocate. It just means doing them in parallel with the development, timed to deliver them in a just-in-time fashion. Done well, how could that not be better?

  3. William, thanks for reading and commenting!

    You’ve made some great points! And refactoring is definitely key to incremental delivery – especially XP, where the idea is that at any point in time, the design is the best possible one for the requirements at that time.

    Although it may not be like pouring concrete, refactoring does come with a cost. Just like code-reuse comes with a cost (finding the code, understanding it, integrating to it). Nothing is free, although it may not require taking a jack-hammer to the previous implementation.

    You opened with an interesting point – that product managers like to go forward, not backwards. I really appreciate that you looked back at past projects to get data! Do you think, on those past projects, that the earlier requirements could have been (or should have been) refactored? As Scott Ambler says, ‘people trump process,’ and there’s a corollary that someone else said: ‘politics trumps people.’ I wonder if the product manager was being as agile as the developers?

  4. Lidor, thanks again for joining in the discussion!

    The Agile Modeling site lists a set of core principles (which must be followed to be ‘Agile’), and a set of supplementary principles, that can be followed or modified as appropriate.

    http://www.agilemodeling.com/principles.htm

    Alistair Cockburn tells us that what is important is how people interact, not which process or tool they use to do it. Your post is spot-on too.

    Thanks again,
    Scott

  5. Good post and good comments by all. One additional observation, if I may: You say that refactoring comes at a cost. Refactoring is a way of paying off design debt or technical debt. If you wait and let the debt accumulate and then pay it off in a lump sum, then certainly refactoring is going to be expensive (except when compared with the cost of not doing it, of course). However, if the team follows TDD practice in a disciplined way, they will pay off their design debt in installments as they incur it. IMO this significantly reduces the cost of keeping the code base clean. Some further thoughts on that are here.

  6. Dave, welcome to Tyner Blain and thanks for commenting!

    I definitely agree that code-debt builds over time and gets more and more expensive. Plus there’s the risk of the whole ‘broken windows’ problem that Malcolm Gladwell outlines in The Tipping Point. I visualize it like the graph for a cascading diode – you can build up more and more code-debt (or design-debt, as you describe in the link), with its associated performance penalty, until you reach the ‘critical voltage’ and flood the diode. At this point, it becomes impractical to refactor – rewriting becomes more cost effective.

    Regardless, refactoring is better than not refactoring on any given project. The question is, how much up-front design/requirements should we do to minimize the need for revamping? Some changes to the code have immediate payback, others take much longer, and cost much more.

    Thanks again!
