Agile Estimation, Prediction, and Commitment

Your boss wants a commitment. You want to offer a prediction. Agile, you say, only allows you to estimate and predict – not to commit. “Horse-hockey!” your boss exclaims, “I want one throat to choke, and it will be yours if you don’t make a commitment and meet it.” There’s a way to keep yourself off the corporate gallows – estimate, predict, and commit – using agile principles.

This is an article about agile product management and release planning.

Change and Uncertainty

In the dark ages before your team became agile, you would make estimates and commitments. You never exactly met your commitments, and no one really noticed. That was how the game was played. You made a commitment, everyone knew it would be wrong, but they expected it anyway. Maybe your boss handicapped your commitment, removing scope, lowering expectations, padding the schedule. Heck, that’s been the recipe for success since they planned the pyramids.

It makes sense.

  1. Your early estimates are wrong. When you add them up, the total will be wrong. If you do PERT estimation, the law of large numbers will help you in aggregate. But you’ll still be wrong.
  2. The outside demands on, and availability of, your people will change. Unplanned sick time, attrition, shifting levels of commitment over time – lots of “people stuff” is really unknown.
  3. The needs of your customers will change. Markets evolve over time. You get smarter, your competitors get better, your customers’ expectations change.

Agile processes are designed to help you deliver what your customer actually needs, not what was originally asked for. Contrast the two worlds.

In the old world, you would commit to delivering a couple pyramids. After spending double your budget, with double the project duration, you would have delivered one pyramid. When you deliver it, you find out that sphinxes are all the rage. Oops.

Your team changed to agile, so that you could deliver the sphinx. But your Pharaoh still wants a commitment to deliver a couple pyramids (the smart ones will be expecting to get just one). You can stay true to agile, and still mollify your boss’s need to have a commitment, if you take advantage of the first principles that make agile estimation work.

Estimation

A commitment is a statement of fact about the future. “This will take two weeks.” Nobody is prescient.

An honest prediction, by contrast, has to be nuanced. “I expect* this will take no more than two weeks.”

*in reality, this is shorthand for a mathematical prediction, such as “I expect, with 95% confidence, that this will take no more than two weeks.”

Few non-scientists, non-engineers, or non-mathematicians understand that 95% confidence has a precise meaning. People usually interpret it to mean “a 5% chance that it will take more than two weeks.” What it really means is that if this exact same task were performed twenty thousand times (in a hypothetical world, of course), then nineteen thousand of those times, it would be completed in under two weeks – do you feel lucky?
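
If the frequentist reading feels abstract, here is a minimal simulation sketch of it (the triangular duration distribution and its parameters are invented for illustration, not data from any real task):

```python
import random

# Illustrative assumption: one task whose duration (in days) follows a
# triangular distribution -- best case 4, most likely 7, worst case 14.
BEST, LIKELY, WORST = 4.0, 7.0, 14.0

# Perform the "exact same task" twenty thousand times, hypothetically.
trials = sorted(random.triangular(BEST, WORST, LIKELY) for _ in range(20_000))

# The figure you could quote "with 95% confidence" is the 95th percentile:
# roughly 19,000 of the 20,000 hypothetical runs finish at or under it.
p95 = trials[int(0.95 * len(trials)) - 1]
finished_by_p95 = sum(t <= p95 for t in trials) / len(trials)

print(f"'I expect, with 95% confidence, no more than {p95:.1f} days'")
print(f"Hypothetical runs that finish by then: {finished_by_p95:.0%}")
```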

To make a statement like this, you actually have to create a PERT estimate – identifying the best-case, worst-case, and most-likely case for how long a task will take.
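
For reference, here is the standard PERT arithmetic that turns those three numbers into an expected duration and a standard deviation (a quick sketch – the example numbers are made up):

```python
def pert_estimate(best: float, likely: float, worst: float) -> tuple[float, float]:
    """Classic three-point (PERT) approximation for a single task.

    Returns (expected duration, standard deviation).
    """
    expected = (best + 4 * likely + worst) / 6
    std_dev = (worst - best) / 6
    return expected, std_dev

# Example task: best case 4 days, most likely 7, worst case 14.
mean, sigma = pert_estimate(4, 7, 14)
print(f"expected ~ {mean:.1f} days, std dev ~ {sigma:.1f} days")
```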

Unfortunately, we’re rarely asked to make a commitment about a single task – but rather a large collection of tasks – well-defined, ill-defined, and undefined.

You can combine PERT estimates for the individual tasks, resulting in an overall estimate of the collection of tasks.
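
Here is a sketch of one way to roll the per-task numbers up, assuming the tasks are independent: expected durations add, variances add, and the combined spread grows more slowly than the total – which is why the aggregate estimate is relatively tighter than any single-task estimate.

```python
import math

# (best, likely, worst) in days for each task -- illustrative numbers only.
tasks = [(2, 3, 8), (4, 7, 14), (1, 2, 5), (5, 8, 20), (3, 5, 9)]

expected_total = sum((b + 4 * m + w) / 6 for b, m, w in tasks)
sigma_total = math.sqrt(sum(((w - b) / 6) ** 2 for b, m, w in tasks))

# Treating the total as roughly normal, the 5% and 95% confidence lines sit
# at about 1.645 standard deviations below and above the expectation.
low = expected_total - 1.645 * sigma_total
high = expected_total + 1.645 * sigma_total
print(f"total ~ {expected_total:.1f} days, 5%-95% band ~ {low:.1f} to {high:.1f} days")
```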

The beauty of this approach is that the central limit theorem, and the law of large numbers, work to help you estimate a collection of tasks – you can actually provide better estimates of a group of tasks than a single task. This obviously helps with the well-defined tasks that you know about at the start of the project. This even helps with the ill-defined tasks. Rationalists will argue that the key, then, is to do more up-front research to discover the undefined tasks – and then we’re set. As Frederick Brooks (Mythical Man-Month) points out in The Design of Design, this debate has been going on since Descartes and Locke. It is not a new idea.

Big Up-Front Design and Requirements (BUFD & BUFR) hasn’t worked particularly well, so far.

Don’t throw out the baby with the bathwater, however. The math of estimation is still important and useful – even if rationalism is not the silver bullet.

Prediction

Estimation is a form of prediction. Even agile teams do it. In Scrum, you estimate a collection of user stories – in story points that represent complexity, and you predict how many points the team can complete in this sprint. Note the time factor. If you’re working a two-week sprint, there is very little risk of changes in staffing during a two-week period. There’s also very little risk that your market will change significantly in two weeks – and if it does, what are the odds that you will notice and materially change your requirements in two weeks?

Visually, let’s take that PERT estimate and turn it sideways – so we can introduce the dimension of time. Imagine you estimated all of the tasks (well-defined, ill-defined, and a guess about the undefined), as if they were all to happen in the first sprint. Ignore inter-task dependencies, and pretend you had unlimited resources and the ability to perform all tasks in parallel.

The graph above shows the aggregate estimate – the circle is your best prediction, with error bars representing your confidence interval in the estimate. If you were using PERT estimates, these could represent the 5% and 95% confidence lines. Otherwise, subjectively pick something based on your team’s experience in the domain and your confidence in your guesses (about the undefined tasks).

We need a segue into the “best of waterfall” approach to estimating projects, to steal and invert a good idea.

The Cone of Uncertainty

The folks at Construx have published a nice explanation of the cone of uncertainty – an adaptation of an idea from Steve McConnell’s Software Estimation: Demystifying the Black Art (2006). That article uses his imagery with permission – so please go look at it there. The idea is that as the project becomes better defined (i.e. as the project progresses), the amount of uncertainty is reduced.

The findings show that initial estimates can be off by a factor of four in either direction – four times too low or four times too high! Even after “nailing down” requirements, estimates are still off by 30% to 50%!

As bad as that sounds, it is actually worse. This is a prediction for the original project (delivering pyramids). Not only are your estimates wrong – but they are bad estimates for delivering the wrong product.

But – the core idea is sound – the further into the future you have to execute, the greater the mistakes in your estimate.

Taking that concept, and applying it to our diagram, we get the following:

The further into the future you are trying to predict, the less accuracy you have in your prediction. This reduction in accuracy is reflected as a widening of the confidence bands for your estimate.

  • A couple sprints’ worth of work is not much different than one sprint – so your estimation range is not much changed.
  • An entire release of sprints (say 6 to 10 sprints) has much more opportunity for the unknown to rear its head.

Now, your prediction is (probably) unusably vague and imprecise. “This set of tasks will take X plus or minus a factor of two.”

That’s the reality.
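
To make the shape of that widening concrete, here is a rough sketch – the per-sprint “surprise” factor is an invented assumption, tuned only so that a full release lands near that “plus or minus a factor of two”:

```python
# Illustrative model only: assume each additional sprint of horizon compounds
# the relative uncertainty by a "surprise" factor. The 9%-per-sprint figure is
# an invented assumption, chosen so that ~8 sprints out the band is roughly
# "plus or minus a factor of two".
SURPRISE_PER_SPRINT = 1.09

def confidence_band(point_estimate: float, sprints_out: int) -> tuple[float, float]:
    """Return an illustrative (low, high) band around a point estimate."""
    spread = SURPRISE_PER_SPRINT ** sprints_out
    return point_estimate / spread, point_estimate * spread

for horizon in (1, 2, 4, 8):
    low, high = confidence_band(100.0, horizon)
    print(f"{horizon} sprint(s) out: roughly {low:.0f} to {high:.0f} (point estimate 100)")
```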

Note: This has always been the reality. People have historically reduced this “risk to timing” by hiding the “risk of change” aspects – and waterfall processes encourage you to deliver the wrong thing, as close to on-time as possible.

That’s not what we want to do, however.

We still want to deliver the (not-yet-defined) right product, as efficiently as possible. That’s the goal of agile. (For folks who haven’t been here at Tyner Blain for long – “right” includes both value and quality).

Refinement

Because we’re agile, and we’re willing to “get smarter” about our product over time, we have an opportunity to improve. Because estimation errors compound over the work that remains – the cone of uncertainty – and that remaining work shrinks with every sprint, our uncertainty gets smaller over time.

Let’s remove our artificial simplification that we could do everything “right now” and look at what we think we know right now, about the end of the release.

Our ability to predict the amount of effort (for today’s definition of the product) at the end of the release is not very good.

Our ability to predict (today’s definition of the product) one sprint into the future is much better.

After completing the first sprint, we are a little bit smarter – the ill-defined tasks are better defined. Maybe some of the undefined tasks are now ill-defined. The same cone of uncertainty is now a little bit smaller – we are a little bit smarter, and the time horizon of the release date is a little bit closer.

The trend continues – each sprint gets us closer to the release date, and with each sprint (assuming we get feedback from our customers, and continue to study our markets) we get a little bit smarter. We also get better at predicting the team’s velocity (how much “product” they can deliver during each sprint).
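
One simple way to capture that “getting smarter” is to re-forecast the release from observed velocity after every sprint. A sketch (the completed-point counts are made up):

```python
# Story points actually completed in each finished sprint -- made-up numbers.
completed = [31, 38, 42, 40]
SPRINTS_IN_RELEASE = 8

# Re-forecast the release after each completed sprint: the more sprints we
# have observed, the better the velocity estimate behind the forecast.
done = 0
for sprint, points in enumerate(completed, start=1):
    done += points
    velocity = done / sprint
    forecast = done + velocity * (SPRINTS_IN_RELEASE - sprint)
    print(f"after sprint {sprint}: velocity ~ {velocity:.1f} points/sprint, "
          f"release forecast ~ {forecast:.0f} points")
```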

Commitment

Your boss still wants a commitment, however. And that’s where we get to change the way we look at this (again).

The above diagrams all display how we converge on an estimate for a stable body of work. However, we know that the body of work is constantly changing.

Backlog! [you say]

Yes! The backlog. The backlog is an ordered, prioritized list of user stories and bugs. I was talking with Luke Hohmann of Innovation Games last month, and one of the most popular online Innovation Games is now the one they created based on prioritizing by bang for the buck. Play it today online (for free!). How cool is that?

The backlog represents the work the team is going to do – in the order in which the team is going to do it. Over time, as we get smarter, we will add and remove items from the backlog – because we discover new capabilities that are important, and because we learn that some things aren’t worth doing. We will even re-order the backlog as we recognize shifting priorities in the markets (or in our changing strategy).

As this happens, it turns out that the items at the top of the list are least likely to get displaced, and therefore most likely to still be part of the product by the time we get to the release.

Instead of thinking about uncertainty in terms of how long it takes, think about uncertainty in terms of how much we complete in a fixed amount of time. In agile, generally, we apply a timebox approach to determining what gets built.

Now, uncertainty, instead of manifesting as “when do we finish?” becomes “what will we finish?”

Your boss is rational. She appreciates the constraints; she just wants to know what you can commit to. Every boss I’ve worked with has been willing (sometimes only after much discussion) to treat this uncertainty in terms of what instead of when. They acknowledge that they need to translate it (usually for their boss) into a “fixed” commitment.

The solution: commit to a subset of what you predict you can complete.

At the start of the release, you may have 500 points worth of stories. Based on your team’s expected velocity, and the number of sprints in the release, you predict that you can complete 320 points worth of stories (5 people on the team, a team velocity of 40 points per sprint, and 8 sprints in the release). Starting at the top of the backlog and working down, draw a cut-line at the last story you can complete (when you reach 320 points). This is your prediction.

Now the commitment part. You’ll have to figure out what you’re comfortable with. Maybe for 8 sprints (say, 16 weeks into the future), you may only be comfortable committing to half that amount – 160 points. Go back to the top of the backlog, and count down until you reach 160 points. Everything above the line is what you commit to delivering.
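
Here is a minimal sketch of drawing both cut-lines over an ordered backlog (the backlog contents are invented; the 320-point prediction and 160-point commitment come from the example above):

```python
# Invented backlog: 60 stories in priority order, with made-up point values
# that total a little over 500 points.
backlog = [(f"story-{i:02d}", points)
           for i, points in enumerate([5, 8, 13, 5, 8, 13, 8, 5, 13, 8] * 6, start=1)]

def cut_line(stories, capacity):
    """Walk the backlog top-down; stop at the last story that still fits."""
    kept, total = [], 0
    for name, points in stories:
        if total + points > capacity:
            break
        kept.append(name)
        total += points
    return kept, total

velocity, sprints = 40, 8                                                 # from the example above
predicted, predicted_points = cut_line(backlog, velocity * sprints)       # 320-point prediction
committed, committed_points = cut_line(backlog, velocity * sprints // 2)  # 160-point commitment

print(f"prediction cut-line: {len(predicted)} stories ({predicted_points} points)")
print(f"commitment cut-line: {len(committed)} stories ({committed_points} points)")
```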

Maybe you are comfortable committing to 240 points, maybe only 80. This is like playing spades. The more you can commit to, without missing, the better off you are. Your tolerance for risk is different than mine.

You can also negotiate with your boss. Commit to 160 points now, and provide an update after every other sprint. More likely than not, you will be increasing the scope of your commitment with every update.

Mid-project updates of “we can do more” are always better than “we can do less.” And both are better than end-of-project surprises. This also allows you to have updates that look like this:

We didn’t know this at the start of the release, but X is really important to our customers – and we will be able to deliver X in addition to what we already committed. Without slipping the release date.

Conclusion

Making commitments with an agile process is not impossible. It just needs to be approached differently (if you want to stay true to agile). The end result: better predictions, more realistic commitments, and the likelihood that each update will be good news instead of bad.


  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

106 thoughts on “Agile Estimation, Prediction, and Commitment”

  1. A really detailed post.
    It does raise a lot of questions though. Like why do people always want a prediction when they know it’s exactly that. Plus one thing I always find strange is that no one ever wants a realistic time frame. Everyone knows the timeline you are giving won’t be possible, but everyone is always happier hearing it :-)

    1. Hey Sam, welcome to Tyner Blain and thanks for the comment!

      I think it is perfectly reasonable for people to ask for commitments from the team. What the team delivers (or will deliver) is part of a bigger picture of corporate execution. Other elements of operation do depend on the team – sales training, coordination with other product releases, dependencies from other areas of the company, etc.

      There’s an impedance mismatch between incremental delivery and waterfall delivery, however – and it has to do primarily with expectation setting. When most of your organization is operating in big monolithic chunks (waterfall) and trying to coordinate those deliveries with a continuous process (agile delivers in smaller chunks that manifest effectively as continuous improvement), it gets tough.

      The worldviews of commitment (tell me where you will be, so I can plan to coordinate with everyone else) and prediction (here’s a more accurate, if less precise, forecast) are subtly different. A commitment-model has consequences for being wrong (I suspect the old stat – that 1/3 of software projects miss their commitments – is still true, but I haven’t checked the data in a few years). A prediction-model does not carry the same baggage of consequences – the expectations are different going in.

      Getting back to your point about being realistic – some people are just unrealistic, about everything. Some people want you to “push” in order to achieve more. That’s great. The challenge is when they conflate aspirations that encourage more delivery with predictive forecasts that reflect likely outcomes. Everyone needs to be on the same page.

      Tracking (and comparing with previous) velocity is a good tool for encouraging “get faster” – it is a measurement of results with a feedback loop. People outside of the craft of product creation don’t “see” velocity metrics, so they look for something they can see, and end up latching on to commitments erroneously.

  2. One of the practical issues: let’s say you commit to 240 points and the velocity is actually 320. Due to Parkinson’s law, you would end up doing only 240 points.

    1. Hey Mahesh, great point and welcome to Tyner Blain!

      If you ignore your prediction (320) and only work towards the commitment (240), then I agree. The main idea I’m going for is that you operate towards your prediction (internally), and manage expectations through the commitment (externally).

  3. I too have advocated exactly the approach you suggest. My experience is that most business people “get it”. They know that software projects are notorious for being late and over budget. They rarely believe what the software developers tell them, as you point out.

    The approach you suggest changes the game. We have a new definition of “commitment” — a new way of measuring “success”. Most business folks I’ve known are willing to give it a try. Be conservative with your agile commitments at first and you’ll establish a new environment based on trust. How cool would that be!

  4. Great post. We’re going through a whole estimation and productivity drive at my workplace right now. My manager doesn’t ask for a throat to choke though – just body parts.

    In my experience, what throws estimates out is gaps in knowledge – technical and business knowledge. Gaps in technical knowledge can quite often be filled by answers found on the web, but business knowledge quite often cannot.

    1. Great points, David! And welcome to Tyner Blain!

      There’s always been a debate about whether you can learn enough up front to eliminate the gaps in knowledge (and create the perfect design or the perfect requirements). Personally, I think you can’t. A more talented product manager can get closer, faster, with less information – but no one can get it exactly right. “Good enough” is the mantra here (to avoid analysis paralysis).

      However, that debate is only relevant when your market is static – your customer’s expectations are not changing, your competitors are not innovating, and your product strategy does not include solving “new” problems (or the same problems for “new” customers). When any of those factors come into play, you’re better off with a process that is designed to adapt.

      A process designed to adapt is required to succeed when your market is changing. It also happens to help a lot when your initial insights are not perfect.

  5. Hi Scott,

    This is a great article with an astonishing level of detail, and I like the way you explain everything. A few questions that I would like to ask, however –

    – considering that Agile promotes open communication, available information radiators on the boards or in any tool, and free access to anyone to the room, won’t people ‘know’ that there is this huge gap between ‘prediction’ and ‘commitment’?

    – Will this gap between predict and commit make some of the team feel that it’s OK to do less than ‘commit’? Hey, if you can slide from one scale to the other, maybe you secretly expect them to do even less than the ‘commit’ amount?

    – Will the push mentality be happy with any committed number you give? Will there be some attempt to get you to ‘up’ the number, probably making you draw from that buffer you built… so where does it end?

    These are open questions and I will appreciate any input you can provide.

    thanks,
    pragati

    1. Hey Pragati, thanks for the great questions and welcome to Tyner Blain.

      The main problem I’m addressing with this approach is the clash between two philosophies – waterfall commitments that in practice are never met, and agile predictions that are likely to be close (but are not “commitments”). It is a mechanism for teams to set expectations with the rest of their organizations. It also works for a vendor to define obligations, versus expectations.

      You raise a really good point, and I would say it is describing the second order problem. The first order problem is that providing commitments with a non-trivial time horizon is just a bad idea. Those commitments are consistently wrong. An agile team could provide a commitment that matched their prediction, but it would be no more accurate than a commitment in the waterfall model. As such, it results in other parts of the organization saying “agile is no better than waterfall.” Those other parts of the organization tend to oversimplify and not appreciate that the pile of things that was delivered is a better pile of things – they only notice that it is smaller than was promised.

      There will absolutely be pressure to close the gap between committed and predicted. And that’s actually a good thing. The gap should be inversely proportional to your confidence in your predictions. A team transitioning to agile today does not know what its velocity is or will be. Moving from a big-bang release to a series of incremental deliveries will implicitly increase the team’s velocity, but by how much will vary.

      What I’ve seen pretty consistently (and heard anecdotally from others) is that by the third or fourth sprint, teams’ story point estimates converge and their measured velocity becomes a useful, repeatable basis for prediction. By the third sprint, the cone becomes narrower. The problem arises when you “have to” provide a commitment now, before you have confidence in your velocity.

      With each sprint, you can update your commitment. This creates a positive dynamic for two reasons. First, you’re giving an update to your stakeholders every two weeks (or whatever) instead of surprising them at the end of the release. Second, the updates will be predominantly (trending towards exclusively) increases in what you are committing. Not only are your stakeholders getting regular updates, but they are consistently positive updates.

      Your other question, a variation on “people will hit the target, even if they could do more” has validity. The key element is that the team operates towards the prediction, NOT the commitment. If the team is operating towards the commitment, then there’s no point in having the prediction.

      As to open communication – absolutely! Instead of communicating one expectation (the old waterfall commitment), you communicate both numbers. (A) Here’s the commitment you can use for dependency-planning with little risk, and (B) here’s our prediction of what is most likely to occur, which you can use for whatever you want. For stakeholders to use the prediction number, they have to trust the team. And that trust has to be earned. Once the team has earned the trust, and trained the rest of the organization, they may stop asking for the commitment. It just takes a while.

