Rating Your Competition – Comparing Products Part 7

At this point in the product comparison series, you know who your customers are, which problems matter to them, and which products compete to solve those problems. It’s time to score the competing products and see how the solutions your product provides (or will provide) stack up. This is the latest article in a series on comparing products; if you came here first, jump back to the start of the series – but hurry back :).

Overall Product Comparison Process

This is a relatively long series. Each article will start with a recap of the overall process.

Getting useful information from comparing products requires you to:

  1. Introduction & Overview (so that the step-numbers align with the article numbers)
  2. Identify your customers.
  3. Articulate the problems they care about solving.
  4. Determine how important solving each problem is, relative to the other problems, for your customers.
  5. Characterize how important it is for you to solve the problems of each group of customers.
  6. Discover which (competitive) products your customers consider to be your competition.
  7. Assess how effectively each competitive product solves each important problem. (This article)
  8. Assess how effectively each competitive product solves each important problem, for each important group of customers.

With this information, you can create a point of view about how your product compares to the others.

Summarizing Effectiveness

Earlier in the series, we identified (and refined) the list of important, relevant problems that our target customers have.

[larger image]

This is the “ruler” by which each competitive product is going to be measured. We also identified several competitors.

[larger image]

The next step is to assess how effectively each competitive product (including your own) solves each important problem, assigning a “score” to each product for each problem. To do that, for each problem, you have to articulate an opinion about what it means to solve the problem poorly, completely, or anywhere in between.

Read Anywhere – Previously clarified as “Be able to read content in multiple physical environments / on multiple devices, and not lose my place in the book.”

Environments can be indoors/outdoors, extremely cold to moderate to extremely hot temperatures, with variable lighting from a dark room to direct sunlight. It might also capture environmental context – sitting, walking, riding on a bus, driving, etc.; quiet to noisy; physically serene, or getting bumped a lot (like in a crowded coffee shop).

Start by defining the endpoints. I’ve been using a 9-point scale in this type of analysis, to provide enough granularity to make relative comparisons. For read anywhere, a score of 1 would mean “can be used to read in a single, idealized environment / location.” A score of 9 would mean “can be used to read in any realistic situation.” Mapping out the scores in-between 1 and 9 requires you to think about the nature of the problem being solved – and here’s where Kano analysis is useful (again).

This capability is a good example of an extreme more-is-better capability. Increasing the range of environments where the product can be used (to read) provides a perceivable benefit to customers, but with diminishing returns. Also, there is some minimum bar, or table stakes, of environments where the user needs to be able to read, or the product is not considered a viable solution. On the high end, being able to read literally anywhere, would truly distinguish one product – making that capability a delighter and a strong differentiator.

[larger image]

How do you decide what earns a score of 3? Ideally, you base these relative scores on user research; when you don’t have research, you use your informed, subjective opinion. For folks who haven’t been reading Tyner Blain articles for the last few years: a product manager is market driven, which means you should use market data to do this. However, using your own opinion as a product designer is still better than having no input at all.

For this example, my [manufactured, invented, made up for this series of articles,] data defines scores for the Read Anywhere capability as follows:

  1. Below The Bar – User must be seated, and have power and internet connectivity (at the time of reading) in order to use the device.
  2. n/a
  3. Usable but Very Annoying – User can read while standing, without being connected to power or the internet.
  4. Not Quite Happy, but Whatever – User must be indoors, with reasonable lighting and temperature.
  5. Meh – User can read while riding on the bus or in a car.
  6. OK, but Nothing Special – User can read in outdoor lighting.
  7. Good – User can read pretty much anywhere except really noisy, jarring, and/or low-light environments.
  8. n/a
  9. Wow! – User can read anywhere that the user would want to read.

Note that there are no entries for 2 or 8, reflecting step-function jumps in perceived value at those points on the curve. Plotted on the Kano-analysis extreme more-is-better curve, this rating scale looks like the following:

[larger image]
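The rubric above can be encoded directly, which also makes the intentional gaps explicit. This is a minimal sketch; the dictionary name and helper function are my own invention, and the descriptions are abbreviated from the list above.

```python
# Hypothetical encoding of the Read Anywhere scoring rubric described above.
# Scores 2 and 8 are deliberately absent, modeling the step-function jumps
# in perceived value at those points on the curve.
READ_ANYWHERE_RUBRIC = {
    1: "Below The Bar - seated, power and internet required",
    3: "Usable but Very Annoying - standing, no power/internet needed",
    4: "Not Quite Happy, but Whatever - indoors, reasonable lighting/temperature",
    5: "Meh - readable on a bus or in a car",
    6: "OK, but Nothing Special - readable in outdoor lighting",
    7: "Good - almost anywhere except noisy, jarring, or low-light environments",
    9: "Wow! - readable anywhere the user would want to read",
}

def describe(score: int) -> str:
    """Return the rubric text for a score, flagging the intentional gaps."""
    return READ_ANYWHERE_RUBRIC.get(score, "n/a (step-function gap in the scale)")
```

A lookup for 2 or 8 returns the “gap” marker rather than failing, which mirrors how the rating scale treats those scores.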

One of the things you can see when looking at the scoring system visually is that, depending on where you are on the curve, you may have to make significant investments to achieve incremental improvements. For example, moving from (4) to (6) looks hard – you are measurably increasing the objective capability – while increasing the subjectively perceived value by a comparable amount.

A small shift in scoring – for example, from (7) to (9) – can have a dramatic impact on perceived value (specifically, satisfaction received from improving the capability). Conversely, improving from (6) to (7) may be really hard (expensive) to do, but realistically only improves the way your market perceives your product by a marginal amount.
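The asymmetry between effort and perceived value can be made concrete with numbers. The perceived-value figures below are assumptions of mine, invented to mimic the shape of the curve (big jumps across the gaps at 2 and 8, small gains through the middle); they are not the article’s data.

```python
# Assumed mapping from rubric score to perceived value (arbitrary units),
# shaped like the extreme more-is-better curve: large step-function jumps
# across the missing scores (2 and 8), modest gains through the middle.
PERCEIVED_VALUE = {1: 5, 3: 30, 4: 38, 5: 45, 6: 50, 7: 54, 9: 90}

def value_gain(from_score: int, to_score: int) -> int:
    """Change in perceived value when a capability improves between scores."""
    return PERCEIVED_VALUE[to_score] - PERCEIVED_VALUE[from_score]

# Moving from (7) to (9) crosses the step: a dramatic satisfaction jump.
# Moving from (6) to (7) may cost comparable effort for a marginal gain.
```

Under these assumed numbers, the (7) to (9) move yields many times the perceived-value gain of the (6) to (7) move, which is the pattern the examples below illustrate.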

This view can help you answer questions like

  • Why does the iPod shuffle outperform the Sansa Clip so dramatically? Because the iTunes-centric ecosystem turns the dial from (7) to (9).
  • Why doesn’t our detergent outsell theirs, when ours gets clothes cleaner in soft water? Because moving from (6) to (7) doesn’t make much of a difference.

Zooming in on the low-end of the curve:

[larger image]

And at the high end of this capability, we see:

[larger image]

Applying this rating scale to the competitive products [more made-up data here] we see

[larger image]

It may be that different personas would use markedly different criteria for scoring relative capability. So far, when I’ve done this, I’ve found that different personas care different amounts about the scores for a particular capability, but generally use the same approach to scoring. If I do come across personas that require markedly different scoring criteria for the same capability, I will create and use the appropriate criteria for each persona, adding complexity to the process. To date, I haven’t had to do that.

Scoring All of the Capabilities for All of the Products

Applying the same process (determine the nature of each capability, determine the criteria for each capability, assign a score to each product for each capability) will result in something that looks like the following [manufactured to illustrate the concepts] data:

[larger image]
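A table like that can be sketched as a nested mapping of products to capability scores. The capability names and scores here are invented for illustration (they are not the article’s data); only the product line-up follows the comparison discussed in this article.

```python
# Invented product-by-capability scores, illustrating the shape of the
# comparison table. Capabilities and values are hypothetical.
SCORES = {
    "iPad2":     {"read anywhere": 7, "carry library": 9, "annotate": 8},
    "Kindle":    {"read anywhere": 9, "carry library": 7, "annotate": 5},
    "Kindle DX": {"read anywhere": 8, "carry library": 7, "annotate": 5},
    "Nook":      {"read anywhere": 5, "carry library": 5, "annotate": 3},
    "PC":        {"read anywhere": 1, "carry library": 6, "annotate": 6},
}

def totals(scores: dict) -> dict:
    """Unweighted total score per product - the naive 'stop here' view."""
    return {product: sum(caps.values()) for product, caps in scores.items()}
```

Note that these totals are unweighted: every capability counts equally for every customer, which is exactly the limitation addressed in the next section.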

Alternate Scoring Method

You could also determine the score for a single capability like this by awarding points for each supported characteristic, rather than defining a continuum as in the example above. For example, you could give:

  • 2 points for being able to use the device without power.
  • 1 point for being able to read without an internet connection.
  • 1 point for being able to read in bright sunlight.
  • 1 point for being able to read in a dark room.
  • and so on.

Then tally up the score for each product, to describe how well they meet the need that users have to read anywhere.
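This additive method is straightforward to sketch. The point values follow the bullets above; the characteristic strings and the example product’s supported set are my own hypothetical choices.

```python
# Sketch of the alternate, additive scoring method: award points per
# supported characteristic, then tally. Point values follow the article's
# bullets; the example product data is hypothetical.
POINTS = {
    "works without power": 2,
    "works offline": 1,
    "readable in bright sunlight": 1,
    "readable in a dark room": 1,
}

def tally(supported: list) -> int:
    """Sum the points for the characteristics a product supports."""
    return sum(POINTS[c] for c in supported)

# Hypothetical e-reader that supports three of the four characteristics:
example_score = tally(["works without power", "works offline",
                       "readable in bright sunlight"])
```

One design note: the additive method is easier to apply consistently across products, but unlike the continuum approach it cannot express the step-function jumps in perceived value that the Kano view captures.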

Interpreting the Comparison

If you were to stop here, you would just conclude that the iPad2 is the best, and that the two Kindle products are “close,” and that the Nook and using a PC are in last place by a wide margin. However, you aren’t going to stop here.

Stopping here would ignore completely that different customers care different amounts about solving different problems.

In the next article in this series, we’ll see how to incorporate that info, and answer the two questions you need to be able to answer:

  1. Which product is best for a specific persona?
  2. Which product is best overall?

Summary

To create a competitive product, you need to know how your product stacks up against the competition – and that means you need to know how effective your solutions are (or are perceived to be) at solving the problems your customers will pay to solve. You can compare today’s products to assess your current position, and use that comparison to inform decisions about what your product needs to become.

Recapping the overall flow of this series of articles on product comparison:

Getting useful information from comparing products requires you to:

  1. Introduction and Overview (so that the step-numbers align with the article numbers)
  2. Identify your customers.
  3. Articulate the problems your customers care about solving.
  4. Determine how important solving each problem is, relative to the other problems, for your customers.
  5. Characterize how important it is for you to solve the problems of each group of customers.
  6. Discover which (competitive) products your customers consider to be your competition.
  7. Assess how effectively each competitive product solves each important problem. (This article)
  8. Assess how effectively each competitive product solves each important problem, for each important group of customers.

With this information, you can create a point of view about how your product compares to other products.

Taking it to the next level, as a product manager, your decisions about tomorrow’s product should be in the context of where you expect tomorrow’s competition (and tomorrow’s customers) to be. There is a danger – especially after investing in the “state of the industry” analysis above – that you will continue to compete in yesterday’s race, when you should probably be innovating to redefine the game. The comparison you just did is not a waste (because it looks at today’s problems – they become the table stakes for tomorrow).

Remember: this exercise informs future product decisions; it does not define them.

Attributions

Thanks to Rick Cox for the height comparison photo.

  • Scott Sehlhorst

    Scott Sehlhorst is a product management and strategy consultant with over 30 years of experience in engineering, software development, and business. Scott founded Tyner Blain in 2005 to focus on helping companies, teams, and product managers build better products. Follow him on LinkedIn, and connect to see how Scott can help your organization.

