Cause & Effect and Product Risk

When deciding how to invest in your product, you need to take into account the risk that your investments will not return the outcomes you desire. One class of risk is business risk, and in product management we can influence the business risk of invalid intentionality – what I would call “building the wrong thing.”

Cause and Effect is Foundational

There are two significant problems I see product teams facing, and they are deeply connected. A team needs to solve both problems in order to improve, and there is a natural sequencing – a team has to solve the first problem before it is ready to solve the second.

The first problem is that teams are building backlogs without the underpinnings of cause and effect. These teams are operating as order takers – “build this because someone said so.” This is the primary anti-pattern I see when first meeting with teams.

One approach to addressing this is to manage the system more effectively as a “pull” system than as a “push” system. We manage the generation of demand on the team to match their capacity. This is required to prevent future problems, sure. But this is the “brute force” solution – there is also a “finesse” aspect to solving this problem, through the application of product management craft. We need the thinking to shift to one based on cause and effect, so that the behavior can shift to one of collaboration.

Most people talk about this today in terms of being outcome-oriented (versus output-oriented). If you are playing buzzword bingo: outcome-based investment is a required underpinning of being a product-driven organization.

A simple before and after puts a fine point on this first problem:

Before: “Build X.” (Because the CEO said so.)

After: “We believe building X will lead to outcome Y.”

We are more likely to succeed when we build things we believe will lead to outcomes. But isn’t our imaginary CEO just implicitly assuming an outcome? Sure – but what outcome? That’s the root of the first problem, and it gives us a clue about how to address it. We need everyone to know how this investment will result in that outcome.

This is why we use impact maps – to make explicit the connections between what we choose to build and why we choose to build it. There are other techniques for solving this problem – and you do need to solve it – but I find the impact map to be the most elegant.

Working backwards (right to left in an impact map) we can reverse engineer some intent by exploring the “order” we’ve been asked to take:

[Larger impact map]
  1. We build X to solve problem A for persona B
  2. Once problem A is solved, persona B can change their behavior C
  3. Once behavior C is changed, it will lead to measurable outcome D
  4. Measurable outcome D is the actionable aspect of benefit Y

When we collaborate to develop a shared understanding that explicitly connects “building X” with “realizing benefit Y,” we address the first problem. Impact maps are great for this because, in the process of creating the connections, we intentionally challenge those connections and explore others – creating a better plan and shifting to a bigger goal, all while providing deep context across the teams, increasing the likelihood of successful delivery and effective stakeholder management.

Before we can acknowledge the second problem, we have to layer on another dimension of developing plans – acknowledging and addressing risks. This plan, even when well articulated, has implicit risks which need to be made explicit. Implicit risks live as assumptions, which we can make explicit by formalizing them as hypotheses.

  1. We assume building X will solve problem A for persona B
  2. We assume solving problem A will trigger behavior change C
  3. We assume behavior change C will contribute to outcome D

We use this expression to acknowledge risk, which allows us to make decisions. Some risks we avoid, some we embrace, and some we address. When the potential value is too high to ignore, but the risk is too high to endure, we run an experiment to reduce the uncertainty.

That experiment is easiest to execute, in my experience, when formulated as a testable hypothesis. The environment in which we run experiments to de-risk our plan is one where all of the pressure from others is to execute the plan. The plans themselves are complex, and the implications for our organization are often broad. Having a clear structure for isolating variables (get it?) and focusing on hypothesis testing helps manage the organization, and helps retain clarity when addressing the risk.
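To make “an experiment which tests a hypothesis” concrete: a common shape is comparing a behavior (say, conversion rate) between a control group and a treatment group. A minimal sketch using only the standard library – the function name and the sample numbers are my own, for illustration, not a prescription for how to run your experiments:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates.

    conv_a/n_a: conversions and sample size in the control group
    conv_b/n_b: conversions and sample size in the treatment group
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 100/1000 convert in control, 130/1000 in treatment.
z, p = two_proportion_z(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point is not the statistics – it is that the hypothesis names one isolated variable (the change we made) and one measurable effect, so the result is interpretable.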

Hypotheses describe assumed cause and effect

We believe [something which happens – a “cause”]

Will result in [something else happening – an “effect”]

When exposing assumptions within an impact map, I focus on two classes of assumption – one in the problem space and one in the solution space.  I form solution hypotheses and outcome hypotheses.  I’m regularly working with teams who are new to experimental design, so having some clear definitions helps connect these practices to how they already view their responsibilities.

A solution hypothesis on top of an impact map is a statement which says “if we build this thing, it will solve this problem and cause or enable the desired behavior change.” The cause is whatever we choose to build, and the effect is the resultant behavior change.  In the world of measurement, this behavior change is a leading indicator (of expected value). We have something we can do, and something we can measure to see if it works.

An outcome hypothesis on top of an impact map is a statement which says “if we cause this behavior change (better, more, new activities, etc.) we will realize this desired outcome.” This is a lagging indicator in measurement world. Now we have a full-cycle way to measure the impact of what we chose to build.

Creating this sort of qualitative alignment of cause and effect is the first step in identifying the implicit value in your plan – the second step is to work to quantify it. 

When we assert a solution hypothesis, it can look like this: 

We believe that adding a ‘one click purchase’ button to every product page in our mobile store will result in an increase in orders from existing customers

Does that tell us this is a good idea to pursue?  No. We have no frame of reference to know if it is cost-justified. 

[Ultimately, cost-justification is a function of opportunity cost and not execution cost, but that’s another topic for another day. Just listen to what Don Reinertsen has to say about opportunity cost and the cost of delay.]

My point here is we have not yet quantified the value. Let’s try it:

We believe that adding a ‘one click purchase’ button to every product page in our mobile store will result in an increase in orders of 1% from 30% of existing customers.

The dynamic of the conversation suddenly changes from “well sure, that sounds like a fine idea” to “and how exactly did you come up with those numbers?” This is exactly what we want to happen!
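The quantified hypothesis also lets us do the back-of-envelope math on value. All of the numbers below other than the 1% and 30% from the hypothesis are hypothetical, invented purely to show the arithmetic:

```python
# Back-of-envelope value of the quantified hypothesis.
existing_customers = 100_000   # assumed customer base (hypothetical)
orders_per_customer = 4.0      # assumed average orders per year (hypothetical)
affected_share = 0.30          # "30% of existing customers" (from the hypothesis)
order_lift = 0.01              # "an increase in orders of 1%" (from the hypothesis)
avg_order_value = 50.00        # assumed average order value in dollars (hypothetical)

incremental_orders = (existing_customers * affected_share
                      * orders_per_customer * order_lift)
incremental_revenue = incremental_orders * avg_order_value
print(f"{incremental_orders:.0f} extra orders/year, roughly ${incremental_revenue:,.0f}")
```

With these assumptions the hypothesis is worth about 1,200 incremental orders, or roughly $60,000 per year – a frame of reference we can finally compare against the cost (and opportunity cost) of building the button.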

Quantifying our hypotheses gives us a way to hold ourselves accountable. This is a powerful tool – and the first reaction I see in most people when asked to quantify their assertions is that they suddenly become uncomfortable.

Sometimes they are uncomfortable because they fear reprisal – that their management will hold them accountable to any estimates they publish. [There are two separate problems here: one organizational, and one of conflating estimates with commitments.]

Fairly universally, there is a valid source of discomfort as well. Shifting from being output-oriented to outcome-oriented requires you to shift your locus of accountability from your zone of control (what you build) to your zone of influence (whether it works or not). Hypotheses help us manage the uncertainty implicit in this shift.

Motivated Reasoning – stay tuned

And this leads us to the second problem – we run the risk of interpreting results to support our hypotheses, instead of using them to objectively assess our hypotheses. We’ll talk about this, and how to deal with it, next time.

Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products.
