Uselessly Wide Estimation Ranges

Estimating with ranges requires a level of transparency which may be uncomfortable because you are acknowledging what you don’t know. Doing this, however, cascades into multiple positive consequences. This is also a necessary component of outcome orientation.

Using Problem Statements to Make Choices

I talk about using problem statements to “shape” a product strategy. It occurred to me that if you aren’t already doing this, then my assertion may not make any sense. I want to take a step back and explain. Using problem statements is an operational approach to shaping and expressing a product strategy which makes it actionable. When developing a product strategy you have to do two things: develop a strategy which supports the company’s strategy, and express it in a way which makes it actionable. I’ve found problem statements to be an artifact which can both support the thinking process and meet the direction-setting needs.

I landed on this approach as a consequence of seeing so many organizations express a product strategy which was completely decoupled from the work they do on their products. I had a client who would refer to the “red thread” connecting her intent with the ideation, decomposition, and execution of work by the teams in her organization. As a leader, she could express intent and rationale, and her teams could trace their efforts back to her purpose. The red thread. Most teams lack this.

Teresa Torres shared an article in 2016 about the opportunity solution tree, a technique she uses to weave the red thread from desired outcomes to solution ideas and the experiments you run to evaluate them. Continuous Discovery Habits was released in 2021 and lives on my short-list of books I recommend to product folks.

It’s working so well that I feel compelled to write a book about it. But that’s going to take time and I want you to have it today.

Teresa Torres, “Why This Opportunity Solution Tree is Changing the Way Product Teams Work” (2016)

In her article, she identifies four gaps in systematic thinking we have to address to effectively connect that red thread from strategy to execution. The first gap Teresa identified is that we don’t examine our ideas before investing in them. The problem statement is an artifact you can use to wrangle, examine, and improve the ideas before deciding to invest in them. The cascade of actions associated with “what’s next?” can come from the problem statement.

Before we climb up the opportunity-solution tree, we have to decide whether we are going to pursue this opportunity in the first place. This is where the shaping begins. The problem statement evaluates the opportunity (I’ll switch from “opportunity to pursue” language to “problem to solve” language for the rest of the article, to help with readability) in the context of a number of possible pursuits.

In the previous article I touched on the importance of using a range to express your estimate instead of asserting a single discrete value. I poked at comparison as one of the decision-making patterns, and showed how the use of a range can inform the choice between an aggressive posture and a conservative footing. By acknowledging the range of possible values, you can make a better, more elegant decision.

Fishing to Win

When I was a kid, my dad liked to go bass fishing and was a member of a fishing club. Sometimes on the weekends, we would go to a lake and “compete” with the other members of the club to see who could catch the most fish on a particular day. The prize for the competition was really just bragging rights within the club, and the purpose was to enjoy a day of fishing. Nonetheless, at 5 am or whatever unreasonably early hour the competition started, everyone was in their boat at the same starting point, and they all raced off to different parts of the lake to get to each person’s favorite secret fishing spot.

When I asked my dad why we spent 20 minutes racing to some arbitrary spot instead of leisurely driving the boat to the location, he explained that there was urgency – the fish were more likely to bite (and then get caught by him) early in the morning. It was a race against the clock. Kids can be annoying early in the morning, and I was no exception. I understood the urgency – but now I wanted to know, if time was so critical, why we didn’t just start fishing immediately at the starting point. Everyone else was wasting time getting to their favorite secret cove; we could get a 20-minute head start.

Dad explained the fish were unlikely to be at the starting point – the conditions were wrong. Too much direct sunlight, all of the commotion from all of the boats, the wrong depth of water, not enough underwater debris and hidey-holes for fish. I wasn’t thinking about any of those conditions – the lake was one big lake, the fish could be anywhere (TAM). Dad had a spot in mind where all of the conditions he considered made it more likely for fish to be present (SAM). If we could get there quickly enough, they would still be active (SOM). Our best chance of winning the competition was to be where we had the best chance of catching fish.

You fish where the fish are.

You use the impact section of the problem statement to describe your understanding of where the fish are.

Reluctance When You Have No Idea

I had no idea how many fish might be in the lake – thousands? Millions? I still have no idea. ChatGPT estimates that between 100,000 and 300,000 fish which met the rules of the competition could be in the lake. No one likes to be wrong. Even a large-language-model AI is uncomfortable giving estimates – ChatGPT generated a 200-word-long caveat about the uncertainty in its estimate. Fine. We can work with this.

Every time – literally hundreds of times – when I help someone draft a problem statement to describe the work they are doing, they create a qualitative description of the problem.

The Problem of… We lose customers because our slow approval of claims causes delayed reimbursements
Affects Whom… Customers filing out-of-network service claims
The Impact of Which is… Our customers find other insurers because they struggle financially due to our delayed reimbursements
The Benefits of a Solution are… Higher retention rates of our insured because of improved service experience

Once you get to a well-formed, but only qualitative, description of the problem like this example, you transition into trying to quantify it. And it is consistently a struggle. There is often some quantification available to help shape the conversation, but it is rarely the right information. In this example, someone may know that reimbursements take 120 days on average. Which is obviously not good. If a patient is floating the cost of weekly rehab sessions, the size of outstanding payments due could become really significant. This is a great example of the curse-of-knowledge problem. You know objectively that 120 days is a problem, and you know there is value in solving it. But you don’t know enough yet to commit to solving it, or even to design a solution.

You can start to fill in some quantification with a couple of easy pieces of research. What is the churn rate? 80% of customers renew during open enrollment and 20% leave. You know (in this example) that 10% leave because of circumstances beyond your control (people relocate outside of your network, take new jobs which have different insurance plans, etc.). So the hypothesis becomes that no more than 10% of all customers leave because of delayed reimbursements. You also know that only 10% of customers file out-of-network service claims within a given year.
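To make the arithmetic explicit, here is a minimal sketch using the example’s numbers (the variable names are illustrative, not from any real system):

```python
# Illustrative numbers from the example above.
renewal_rate = 0.80                        # 80% of customers renew during open enrollment
churn_rate = 1 - renewal_rate              # 20% leave each year
uncontrollable_churn = 0.10                # 10% leave for reasons we cannot influence
controllable_churn = churn_rate - uncontrollable_churn  # at most 10% could be leaving over reimbursements

out_of_network_filers = 0.10               # 10% of customers file out-of-network claims in a year

# Upper bound: churn attributable to delayed reimbursements cannot exceed
# the controllable churn or the population that experiences the delays.
upper_bound = min(controllable_churn, out_of_network_filers)
print(f"Delayed-reimbursement churn is between 0% and {upper_bound:.0%} of all customers")
```

The point is not the trivial math – it is that writing the numbers down exposes which of them you actually know and which you are assuming.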

Now you can rewrite the problem statement with quantification – something people are reluctant to do.

The Problem of… We lose customers because our slow approval of claims causes delayed reimbursements
Affects Whom… The 10% of customers filing out-of-network service claims
The Impact of Which is… 10% of our customers find other insurers because they struggle financially due to our delayed reimbursements
The Benefits of a Solution are… Higher retention rates of our insured because of improved service experience

Now you’re making an assertion – 10% of all customers who leave do so because of the delayed reimbursements. This was actually the upper bound of possibility: 10% of (all) customers leave every year, and 10% of (all) customers experience delayed reimbursements. What I’ve seen consistently is that people are uncomfortable documenting the (obviously incorrect) discrete value, so they simply don’t do any quantification at all.

With this reluctance, product folks often aren’t encouraged (or required) to ask “what is the range, because I know 10% is wrong?” In the companies where they work, they don’t have to. This is one of those places where the system – the process in place, the people around the product person, the policies and leaders – undermines good product work. No one requests or expects quantification. A bad process biases teams towards bad practices, which drive bad decisions, which result in bad products. When you invest to fix the process to support good practices, you also have to learn and perform those practices.

As an individual product person you can start doing things better right now, and your corner of the company will start improving. When you don’t know, what should you do?

What to Do When You Have No Idea

Once you apply probabilistic thinking and describe a range of possible values, you have a problem statement which looks like the following:

The Problem of… We lose customers because our slow approval of claims causes delayed reimbursements
Affects Whom… The 10% of customers filing out-of-network service claims
The Impact of Which is… Between 0% and 10% of our customers find other insurers because they struggle financially due to our delayed reimbursements
The Benefits of a Solution are… We will reduce abandonment rates of our insured by between 0% and 100% because of improved service experience.

Note: If all of the people who leave are leaving because of this problem, then completely solving it would result in no one ever leaving.

This is an exciting and productive way to say “I have no idea how many people left because of delayed reimbursements.” This is exciting, first in comparison with the alternative, and second because it helps you see what to do next.

The alternative is to just start building something which reduces the size of the problem, without actually knowing the size of the problem. Should you shorten the period of delay by speeding up the current process? If so, how much shorter does it need to be? Should you pull more service providers into your network so that fewer people file claims? Should you establish some maximum-reimbursement-due value, where reaching the threshold triggers a check disbursement?

This is what organizations normally do. They start with a poorly-formed surface-level description of the situation and run full speed towards doing something. Being busy feels better than not. How much should you be willing to spend to do whatever you picked? Just use up the available budget. How do you declare success? Launching what you built. That’s it. Without a definition of what it means to solve the problem, you can pat yourself on the back for having done something. It doesn’t matter if it wasn’t “good enough” because “good enough” was not defined.

Contrast that approach with a better, higher quality, decision-making process – where you are faced with the questions:

  • Should we solve this problem?
  • Is this problem large enough to deserve our attention?
  • Is this problem larger than the other problems we might choose to solve instead?
  • Is the potential benefit of solving this problem high enough to justify the expense of solving it?

The Uselessly Wide Range

The current problem statement describes a range of possible benefits from no benefit to effectively infinite (because retention becomes 100%). What I find helps is to treat the problem statement as describing your current beliefs. Because you (so far) lack the information to refine your beliefs, you acknowledge that you have no idea. I describe this as a “uselessly wide range.” And that’s good. It is healthy for the system because you are being transparent and explicit about what you believe. You are acknowledging that you lack the information to make this decision well.

You don’t know how to answer any of the “should we do it?” questions, because you don’t have enough information. If the benefit is 0% – a possibility identified in your problem statement – then definitely no, don’t do it. If the benefit is 100% – then definitely yes, you should do it, do it now, and move heaven and earth to do it.

The first question you need to answer is “should we do it?” and, as a binary decision, you will have a go/no-go line in the sand. There’s some value, somewhere between 0% and 10%, above which the problem is big enough that you should commit to solving it. You’re placing a bet, based on your belief that the value is actually above your line. But right now, the actual value is just as likely to be below your line as above it.

You’re making a decision where the likelihood of being wrong is equal to the likelihood of being right.

Imagine, given other constraints, that it would be a good decision to solve this problem if and only if you could reduce abandonment rates by at least 50%. What this means in this example is that of the 10% of all customers who choose to leave every year, half of them – 5% of all customers – would have to choose to stay instead. Your estimate expresses what you believe. OK, you’ve got some clarity of purpose – if you can cut abandonment rates in half by addressing this delayed-reimbursement problem, it is worth doing.
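Here is a minimal sketch of that comparison, assuming (as the problem statement does) that any value in the range is equally likely; the 50% threshold is the stipulated constraint from this example, not a general rule:

```python
# Belief about the impact: share of all customers lost to this problem.
impact_low, impact_high = 0.00, 0.10   # the "uselessly wide" range
churn_by_choice = 0.10                 # 10% of all customers choose to leave each year

# Stipulated go/no-go line: worth doing only if we can cut abandonment in half,
# i.e. the problem accounts for at least 5% of all customers.
go_no_go_line = 0.5 * churn_by_choice

# With a uniform belief over the range, how likely is the true value to clear the line?
prob_above_line = (impact_high - go_no_go_line) / (impact_high - impact_low)
print(f"Chance the problem clears the go/no-go line: {prob_above_line:.0%}")  # 50%
```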

Your current belief is that it could go either way. You need to do something to update your beliefs, to improve your ability to make the decision.

Invest to Remove Uncertainty

My second reason for excitement is that by identifying the reason why you cannot make the decision well, you identify the steps to take to improve your ability to make the decision.

If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have a high value.

Doug Hubbard, How to Measure Anything: Finding the Value of Intangibles

A major improvement in my personal thinking about this stuff came when reviewing some of the work Hubbard delivered for a shared client – once you know the value of making the right decision versus the wrong decision, you know the value of doing something to get smarter. If you can update your beliefs so that you believe the truth is more likely to be on one side of your go/no-go line than the other, you’ve created value. This is the cost-benefit of measurement and experimentation during the decision-making process.

You can think of this like stacking the odds in your favor. By learning more, you update your beliefs – and therefore you improve your ability to make the decision. You aren’t changing the situation, but you are changing your understanding of the situation – you are reducing uncertainty.
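To make “the value of getting smarter” concrete, here is a hedged sketch – not Hubbard’s actual method – which assumes a uniform belief over the impact range and invents a customer value and a cost to solve, then estimates the expected regret of deciding today. That regret is roughly the most a perfect measurement would be worth.

```python
import random

random.seed(0)

# Assumptions for illustration only -- none of these numbers come from the article.
customers = 100_000
value_per_retained_customer = 1_000     # hypothetical lifetime value, in dollars
cost_to_solve = 4_000_000               # hypothetical cost of fixing reimbursements
go_decision = True                      # what we would commit to with today's 50/50 belief

def expected_regret(go: bool, trials: int = 100_000) -> float:
    """Average loss from committing to `go` before learning the true impact."""
    total = 0.0
    for _ in range(trials):
        impact = random.uniform(0.0, 0.10)          # today's belief: uniform 0-10%
        value_of_go = impact * customers * value_per_retained_customer - cost_to_solve
        best_in_hindsight = max(value_of_go, 0.0)   # 0.0 is the value of "no-go"
        chosen = value_of_go if go else 0.0
        total += best_in_hindsight - chosen
    return total / trials

# Roughly the most a perfect measurement of the impact would be worth.
print(f"Expected value of more information: ~${expected_regret(go_decision):,.0f}")
```

With these made-up numbers the answer comes out in the hundreds of thousands of dollars – which is why a survey costing a few thousand dollars is an easy call.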

You have data about how many people leave, and you have independent data about how many people experienced reimbursement delays. 10% in each case. And you know that you are willing to invest to solve this problem if you believe at least half of the people who left did it because of the delayed reimbursements. You could do something as simple as surveying a sample of people who left.

The question you ask yourself is “what should I expect to find if at least half of the people who chose to leave (versus those who left because of uncontrollable factors) did so because of delayed reimbursements?” This is the input to your survey design. If your belief is correct, at least 25% of respondents would cite this as the reason.

The result you would expect is ‘anything greater than 25% of survey respondents, among all those who left’ – half of a half. Your survey may have a +/-5% margin of error, so you might approach things by forming the hypothesis below (a quick sketch for sizing such a survey follows it):

  • We believe at least 25% of people leave during open enrollment because of delayed reimbursements for service claims. We will know we are right if we see at least 30% indicating this when asked. We will know we are wrong if no more than 20% respond accordingly.
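A quick way to size that survey is the standard margin-of-error formula; this sketch assumes a 95% confidence level, which the example does not specify:

```python
import math

z = 1.96        # z-score for a 95% confidence level (an assumption)
margin = 0.05   # the +/-5% margin used in the hypothesis above
p = 0.25        # the proportion we expect to see if our belief is right

sample_size = math.ceil(z ** 2 * p * (1 - p) / margin ** 2)
print(f"Survey roughly {sample_size} former customers")   # ~289; worst case (p=0.5) is ~385
```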

What this hypothesis tells you is just enough to know if it is a good idea to consider solving this problem. Imagine the result of your survey is that 5% of respondents indicate the reimbursement-delay problem as being the reason they left. You now update your beliefs based on the new-to-you information. Your problem statement would look like the following:

The Problem of… We lose customers because our slow approval of claims causes delayed reimbursements
Affects Whom… The 10% of customers filing out-of-network service claims
The Impact of Which is… Between 0% and 1% of our customers find other insurers because they struggle financially due to our delayed reimbursements
The Benefits of a Solution are… We will reduce abandonment rates of our insured by between 0% and 10% because of improved service experience.

You now know (you believe) that at most 1% of your customers are leaving because of the reimbursement delays. In this example, we’ve stipulated that this is not a big enough problem to invest in solving. What you might be thinking is that we make those investment decisions based on cost-benefit (among other things), and this article hasn’t talked about benefit at all. And that’s correct.
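For completeness, here is a minimal sketch of how the imagined survey result translates into the updated numbers above:

```python
# Imagined survey result from the example: 5% of respondents (all of whom left)
# cite delayed reimbursements as the reason.
total_churn = 0.20            # 20% of all customers leave each year (the survey population)
cited_reimbursements = 0.05   # survey point estimate

# Translate "share of leavers" into "share of all customers".
impact_on_all_customers = cited_reimbursements * total_churn           # 1% of all customers

# Upper bound on the benefit: abandonment (the 10% who choose to leave)
# could fall by at most this much if the problem were solved completely.
churn_by_choice = 0.10
max_abandonment_reduction = impact_on_all_customers / churn_by_choice  # 10%

print(f"At most {impact_on_all_customers:.0%} of all customers leave over reimbursements")
print(f"Solving it completely cuts abandonment by at most {max_abandonment_reduction:.0%}")
```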

The impact you identify is a description of the magnitude of the problem – and it represents an upper bound on potential benefit. No possible solution can exceed in value the scale of the problem it is trying to address. There’s a lot more nuance in diving into the solution piece – you have to introduce two key concepts, the first around the solvability of the problem, and the second around incremental efforts to progressively reduce the size of the problem.

Almost always, the right investment decision is to reduce the impact of a problem, not eliminate it. Every investment to make things better comes at the opportunity cost of not making some other investment to make things better in some other way. What is particularly useful about the impact section of the problem statement in practice is that you can identify the upper bound of realizable value from solving the problem, and make better decisions about which problems to pursue.

Conclusion

Don’t let high degrees of uncertainty about the nature of a problem dissuade you from describing what you do understand and believe. Use that description – and the power of quantification – to identify when there is value in learning before making a decision. This is the kind of shift which markedly improves your product operations by improving your decisions about what work to put into the system. It requires your organization to be willing to be transparent about knowledge and open to learning when learning has value. This is part of what it means to be outcome-oriented, as well – if you’re not doing these things, your process is probably still output-oriented and you are operating a feature factory.

Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.
