Interrelation Digraphs as a Prioritization Tool


Prioritization can be hard, especially when we’re dealing with a lot of variables. Peter Abilla, at shmula.com, takes a fairly esoteric tool (interrelation digraphs) and applies it to prioritization. Ophthalmologists have learned that they can’t show us a bunch of blurry images, have us tell them which one looks best, and then prescribe a corrective lens. They have to ask us “Is it better like this? Or better like this?” Peter’s approach does the same thing, but with a quantitative edge.

Interrelation Digraphs

An interrelation digraph, also known as a relations diagram, can be used to identify the complex web of interdependencies around hard-to-understand concepts. In this example, Skymark shows the interdependencies of several urban-blight factors and dynamics. The arrows connect influencers to influencees. For example, “there is a high crime rate” is drawn as an influencer of “property values fall.”

This approach to describing a situation lets you quickly see where changes can have the most impact. The boxes with the most outgoing arrows have the greatest direct impact, and the diagram also lets us visually trace any indirect impacts. The goal is to focus our effort (for fixing social issues) on the factors that have the greatest influence.
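
To make the influence counting concrete, here is a minimal sketch in Python that represents a digraph like this one as a mapping from influencer to influencees and ranks the factors by outgoing arrows. Only the crime-rate/property-values arrow comes from the example above; the other factors and connections are invented for illustration.

```python
# Minimal sketch: an interrelation digraph as a dict of influencer -> influencees.
# Factors with the most outgoing arrows are the strongest direct influencers.
from collections import defaultdict

# Hypothetical urban-blight factors; only "high crime rate" -> "property values fall"
# comes from the example above, the rest are invented for illustration.
influences = {
    "high crime rate": ["property values fall", "businesses leave"],
    "property values fall": ["tax revenue drops"],
    "businesses leave": ["unemployment rises", "tax revenue drops"],
    "unemployment rises": ["high crime rate"],
    "tax revenue drops": ["city services decline"],
    "city services decline": [],
}

out_degree = {factor: len(targets) for factor, targets in influences.items()}
in_degree = defaultdict(int)
for targets in influences.values():
    for target in targets:
        in_degree[target] += 1

# Rank factors by direct influence (outgoing arrows), highest first.
for factor in sorted(out_degree, key=out_degree.get, reverse=True):
    print(f"{factor}: {out_degree[factor]} outgoing, {in_degree[factor]} incoming")
```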

Applied to Prioritization

Peter’s article shows a way of creating similar graphs to prioritize a set of features.
The eye doctor knows that people can easily make relative comparisons (“is A better than B?”) and uses a successive series of questions to find the proper prescription. The ophthalmologist is using a search heuristic to home in on the right corrective lens. People may not be able to say why A is better than B, but it is generally an easy answer to reach.

Imagine a list of features A through E. If we draw a relationship diagram with each arrow meaning “target is better than source,” we end up with a diagram that looks like the following:

[Diagram: digraph of features A through E, with each arrow pointing from a feature to one that is higher priority]

For a set of features, A through E, where there are obvious relative prioritizations, this graph is straightforward. For n items being prioritized, the number of incoming connectors for item i in the list would be n-i. This is basically a graph of what is known in HCI circles as “card sorting.” In card sorting, features are written on individual cards. First, two features are compared and placed in order. A third feature is compared to the top card; if it is higher priority, it is placed on top of the stack. If it is lower priority, it is compared to the next card in the stack. If the new feature is lower in priority than the last card, it is placed on the bottom of the stack.
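
To make the card-sorting procedure concrete, here is a small sketch in Python. The is_better comparison is a stand-in for asking a stakeholder “is this feature higher priority than that one?”; the priority values behind it are invented so the example runs on its own.

```python
# Sketch of the card-sorting procedure described above.
# `is_better(a, b)` stands in for the stakeholder's pairwise judgment;
# the scores behind it are invented so the example is self-contained.
priorities = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def is_better(a: str, b: str) -> bool:
    """Is feature a higher priority than feature b?"""
    return priorities[a] > priorities[b]

def card_sort(features: list[str]) -> list[str]:
    """Build the stack (highest priority first) by pairwise insertion."""
    stack: list[str] = []
    for feature in features:
        for position, card in enumerate(stack):
            if is_better(feature, card):   # higher priority than this card
                stack.insert(position, feature)
                break
        else:                              # lost every comparison
            stack.append(feature)          # place it on the bottom of the stack
    return stack

ordered = card_sort(["C", "A", "E", "B", "D"])
print(ordered)  # ['A', 'B', 'C', 'D', 'E']
# Item i in the result would have n - i incoming "is better than" arrows in the digraph.
```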


The Blender Redux

The problem with this approach is that it homogenizes the inputs, and the middle-of-the-road features end up at the top. Imagine one person ranked the features A through E, and a second person ranked them E through A. If you combined the two prioritizations by adding the scores, every feature would end up tied: A and E, the features each person cares most about, are dragged down by the other person’s bottom rankings, and nothing separates them from lukewarm C. The least common denominator wins.
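
A quick sketch of that cancellation, assuming each person’s ranking is converted into a score equal to the number of features ranked below it (the same count as the incoming arrows in the digraph):

```python
# Sketch: combining two opposite rankings by adding rank-based scores.
# A feature's score is the number of features ranked below it
# (4 for the top pick, 0 for the last).
features = ["A", "B", "C", "D", "E"]

def scores(ranking: list[str]) -> dict[str, int]:
    """Top-ranked feature gets the highest score."""
    n = len(ranking)
    return {feature: n - 1 - position for position, feature in enumerate(ranking)}

person_one = scores(["A", "B", "C", "D", "E"])  # ranked the features A-E
person_two = scores(["E", "D", "C", "B", "A"])  # ranked the features E-A

combined = {f: person_one[f] + person_two[f] for f in features}
print(combined)  # {'A': 4, 'B': 4, 'C': 4, 'D': 4, 'E': 4} -- the passion cancels out
```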

We end up devaluing the passionate opinions of any minority groups who provide us with inputs. We talked about this when considering improved application of market research data. To drive the point home, imagine that we were prioritizing the following features for a new car:

  • A) Extremely Fast
  • B) Comfortable Interior
  • C) Low Cost
  • D) Quiet Ride
  • E) Radically Good Gas Mileage

One person values a sports car, the other is environmentally conscious. If we lumped their results together, we would end up with a car that was cheap, comfortable, and quiet, but not very fast, and with low fuel efficiency. How many of these cars would we sell?

Imagine how many we would sell if we had a very efficient sports car.

A Better Way

We can avoid the blender effect first by segmenting our markets, and second by weighting the scores in a prioritization exercise. Here are two approaches to voting on ideas in a brainstorming session. Both afford us a way to quantify and capture passion, and both would help keep speed and efficiency near the top of the list in our example.

The key difference is that preferences aren’t flattened into binary comparisons; passion manifests as a multiplier, making the “dramatically better” ideas influence the score “dramatically more.”
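
Here is a minimal sketch of the weighted-voting idea, assuming each stakeholder spreads a fixed budget of ten points across the features so that strong feelings carry proportionally more weight; the point allocations are invented for illustration.

```python
# Sketch: weighted voting instead of binary comparisons.
# Each stakeholder spreads a budget of 10 points, so a feature they are
# passionate about can earn several times the weight of a lukewarm one.
votes = {
    "sports car fan": {"A": 6, "B": 2, "C": 1, "D": 1, "E": 0},
    "eco driver":     {"A": 0, "B": 1, "C": 1, "D": 2, "E": 6},
}

totals: dict[str, int] = {}
for allocation in votes.values():
    for feature, points in allocation.items():
        totals[feature] = totals.get(feature, 0) + points

# The passionate picks (A: extremely fast, E: radically good gas mileage)
# stay at the top instead of being averaged away.
for feature, points in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(feature, points)
```

With the budgets combined, speed (A) and gas mileage (E) finish at the top of the list, pointing toward the very efficient sports car.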

Conclusion

This technique is effective for determining relative prioritization for a single individual, but it falls short when combining data from multiple people.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.
