Recently, the gadget-reviewer crowd has caught on to something we’ve known for a long time: comparing products is not about comparing specs, it is about comparing how well the products solve problems that customers will pay to solve. That raises the question – how should you compare products? Read on for the product comparison technique I recommend.
Inspiration
Writing about how to compare products has been on my backlog for about two years, as a key component of how to perform competitive analysis. This topic is easily a chapter-length discussion; perhaps that’s what delayed writing an article-length one. A flurry of recent articles – The Death of the Spec, Do Device Specs Really Matter Anymore, and Device Specs, from TechCrunch, iDownloadBlog, and Daring Fireball, respectively – all discussed how the Amazon Kindle Fire will likely succeed in spite of not having particularly good specs (specifications). The TechCrunch article also opines that Consumer Reports, because of its fixation on specs, has made itself irrelevant as a source of advice for consumers.
John Gruber hits closest to the mark in the Daring Fireball article where he says
Specs are something the device makers worry about insofar as how they affect the experience of using the device. Just like how focal length and lens aperture are something the cinematographer worries about insofar as how they affect what the viewer will see on screen.
John Gruber
Combine this with a conversation I had last night – after recording the latest Start With the Customer podcast – about writing more frequent articles on Tyner Blain, and you get this article.
Product Management and Specs
The discussions are mostly centered on the utility of specifications, speeds and feeds, in informing a customer’s buying decision. As Pragmatic Marketing espouses, we should be market-driven, with a focus on solving the problems that customers will pay to solve. A screen resolution does not solve a problem, but an easy-to-read screen does – it prevents eye-strain and makes a long-session reading experience (like you would have when reading a book, versus reading this article) better. Avoiding eye-strain is a problem people are willing to pay to solve – we’ve seen that with the success of products that use e-Ink technology.
Let’s look at how this example impacts what you do as a product manager.
You want your team to create a product that avoids eye strain. But you also know that you need to write requirements that are unambiguous and measurable. We know from research that higher resolution displays (to a point) delay eye fatigue. We also know, from Apple’s successful marketing, that promoting a Retina display (326 dpi, for a phone) is effective, at least with some buyer personas, at addressing the perceived problem as well.
A “normal” consumer is not going to be able to make an informed comparison of the likely difference in eye strain over time – the problem the consumer actually cares about – when looking at the specs for a 225 dpi device and a 250 dpi device. The authors of the articles are exactly right about that – but we, as product managers, already know this.
The problem comes when writing the requirements for your product team – do you just say “create a low eye-strain screen” and trust the team to pick a resolution that is effective? In an ideal world, yes, you would. Someone on your team (that someone may have to be you) would do the research and come back and tell you that a 225 dpi interface will cause moderate eye strain in 80% of people after 12 hours of continuous reading (but in only 20% of people after 4 hours), and that the number drops to 20% of people experiencing eye strain after 12 hours when using a 250 dpi screen**. Now you understand the value of a 250 dpi resolution over a 225 dpi resolution – an additional 8 hours of reading time for the majority of your customers. Your customers will not know this (unless you tell them). But you know it, and that’s enough.
Now you have to understand the incremental cost of creating a product with a 250 dpi resolution versus one with a 225 dpi resolution. In this example, the incremental device cost is $25 per device for the next 12 months (given projected manufacturing levels). At your target margins, this would translate into a $40 increase in device pricing for your customers.
This higher-capability, higher-priced product will simultaneously appeal to more customers (people who read for more than 4 hours at a time) and fewer customers (people who are price sensitive). Your hypothesis (backed by market research) indicates that you will generate 50% more profit by offering the lower-capability resolution (225 dpi), because most of the people in your target market do not regularly read for 12 hours at a time – and those who do will not “blame” your product for their eye strain – they will “blame” their own behavior.
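The trade-off above can be sketched as a quick expected-profit comparison. The per-unit figures come from the article’s fictional example; the unit volumes and margins below are my own invented assumptions, chosen only to make the 50%-more-profit hypothesis concrete:

```python
# Hypothetical sketch: comparing expected profit for two screen options.
# All numbers are illustrative, extending the article's fictional example.

def expected_profit(margin_per_unit, projected_units):
    """Expected profit for one product option over the planning horizon."""
    return margin_per_unit * projected_units

# 225 dpi option: lower capability, lower price, larger addressable market
profit_225 = expected_profit(margin_per_unit=50, projected_units=300_000)

# 250 dpi option: +$25 device cost, +$40 price (so +$15 margin per unit),
# but fewer buyers because of price sensitivity (the volume is an assumption)
profit_250 = expected_profit(margin_per_unit=65, projected_units=154_000)

print(profit_225, profit_250)        # 15000000 10010000
print(profit_225 / profit_250)       # ~1.5 → ~50% more profit at 225 dpi
```

The point of the sketch is not the specific numbers – it is that the spec decision falls out of a profitability model, not out of the spec itself.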
Now you’re ready to specify that your product will be built with a 225 dpi resolution interface. Not because a 225 dpi interface is in any way intrinsically valuable, but because it is the measurable and unambiguous requirement needed to satisfy your goal (achieving profitability), based on a solution to a “more is better” (see more articles on Kano analysis) market problem (eye strain that occurs in long reading sessions).
Specifications are useful because they help you characterize capabilities. The diagram above depicts a “more is better” characteristic, as described in Kano analysis. The resolution measurements (dpi) give you a measurable criterion for what you are building, which translates into a measurable criterion (hours of continuous use without eye strain) that your customers will realize. That criterion reflects the horizontal axis of the diagram. The longer your customer can read without straining her eyes, the more capable your product is.
The vertical axis reflects how much your customer cares about solving the eye-strain problem. Note that there are diminishing returns. Enabling 4 hours of use (without eye strain) versus 2 hours of use is more highly valued than enabling 6 hours of use versus 4 hours (or 8 versus 6).
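One way to see the diminishing returns is to model perceived value as a concave function of capability. The logarithmic form below is purely my assumption (Kano analysis does not prescribe a formula); it simply exhibits the property the article describes – each additional two hours of strain-free reading is worth less than the previous two:

```python
import math

# Hypothetical sketch of a "more is better" Kano curve with diminishing
# returns. The log form is an illustrative assumption, not from the article.

def perceived_value(hours_without_eyestrain):
    """Customer-perceived value of a given strain-free reading duration."""
    return math.log(hours_without_eyestrain)

gain_2_to_4 = perceived_value(4) - perceived_value(2)
gain_4_to_6 = perceived_value(6) - perceived_value(4)
gain_6_to_8 = perceived_value(8) - perceived_value(6)

# Each incremental improvement is valued less than the one before it
print(gain_2_to_4 > gain_4_to_6 > gain_6_to_8)  # True
```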
The above analysis compared the selection of discrete components, but it also applies when looking at incremental investment. The speed at which the screen refreshes when turning pages is a good example. You can have an e-Ink screen that refreshes in 1 second, or one that refreshes in 0.5 seconds, each time your user turns the page. There is no difference in incremental cost, and your team is operating with a fixed budget of time and resources – so there’s no impact on the allocated fixed costs (or margins). You are faced with a different set of compromises – what are you willing to give up (by reducing investment in other aspects of your product) to achieve incremental improvement in page-turning responsiveness? It may be that page-turning snappiness is the defining idea of your product (it was a differentiator for the Barnes & Noble Nook, but the e-Ink Kindle has since achieved parity). Or you may be dealing with one of the “host of others” capabilities, in which case you simply want to satisfice.
Bigger Picture
The above example shows how specs matter (indirectly) in the everyday lives of product managers. Specs are the measurements by which we evaluate the effectiveness of our products at solving the problems our customers care about.
If you are designing a product that has no competition – for customers who have no alternatives – this would be enough. It would be more than enough, actually, because you could create “any” product and it would sell. But your customers always have alternatives – so you always have competition. Sometimes your competition is “build your own” or even “tolerate the problem.” For any interesting market problem, though, you have at least one competitor trying to solve it too (or you will, very soon).
Comparing Products Matters
As a product manager, you need to know what your product needs to be (or do) to be competitive. That’s where comparing products matters.
The above analysis looks at a single problem (eye strain) for a single product (yours), to determine what specs to give to your team. In a competitive environment (and that means you), you need to put the effectiveness of your product in context, from your customer’s point of view. That involves identifying:
- Who are your customers and what problems do they care about solving?
- How important, relative to each other, are the solutions to those problems, to your customers?
- How important, relative to each other, is each group of customers?
- What solutions (products) do your customers consider as alternatives (competition) to your product?
- How effective is each product at solving each problem?
From that information, you can synthesize a point of view on how competitive your product is (or will be).
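That synthesis can be sketched as an importance-weighted scorecard. Every name, weight, and effectiveness score below is invented for illustration – the technique (weight each problem by how much customers care, score each product’s effectiveness, sum) is the point, not the numbers:

```python
# Hypothetical sketch: synthesizing a competitive point of view from the
# questions above. Problems, weights, and scores are all invented.

# How much customers care about each problem (weights sum to 1.0)
problem_importance = {"eye strain": 0.5, "page-turn speed": 0.3, "battery life": 0.2}

# How effectively each product solves each problem (0.0 to 1.0)
effectiveness = {
    "our reader":   {"eye strain": 0.8, "page-turn speed": 0.9, "battery life": 0.6},
    "competitor A": {"eye strain": 0.9, "page-turn speed": 0.5, "battery life": 0.8},
}

def competitive_score(product):
    """Importance-weighted effectiveness across all problems."""
    return sum(problem_importance[p] * effectiveness[product][p]
               for p in problem_importance)

for product in effectiveness:
    print(product, round(competitive_score(product), 2))
# our reader 0.79, competitor A 0.76
```

A fuller model would repeat this per customer segment and then weight the segments by their importance, per the list above.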
That point of view helps you identify
- Which problems you need to invest in solving, or solving more, or solving more effectively.
Summary & Series
A specification is not a good tool for helping non-expert customers compare products. It is, however, a good tool – an unambiguous measurement tool – that product managers can use when specifying how a product should be built, or when comparing one product to another.
Products must be compared based on their effectiveness (and perceived effectiveness) at solving problems that customers will pay to solve. Those comparisons should also take into account the relative importance of those problems, as well as the relative importance of different groups of customers, to the success of the product.
Even at 1600 words, this article barely introduces the topic of comparing products.
Here is the overall flow of this series of articles on product comparison (I’ll update this article with links to future articles as they are published):
Getting useful information from comparing products requires you to:
- Introduction and Overview (this article – listed so that the step numbers align with the article numbers).
- Identify your customers.
- Articulate the problems your customers care about solving.
- Determine how important solving each problem is, relative to the other problems, for your customers.
- Characterize how important it is for you to solve the problems of each group of customers.
- Discover which (competitive) products your customers consider to be your competition.
- Assess how effectively each competitive product solves each important problem.
- Assess how effectively each competitive product solves each important problem, for each important group of customers.
With this information, you can create a point of view about how your product compares to the others.
Attributions & Notes
* Thanks Roger Wollstadt for the original Volkswagen Assembly photo
** The dpi-related eye-strain statistics are fictional, and written only to demonstrate the importance of measurement
Scott,
Great post. Will you be exploring how to take this into the product marketing realm?
Tj
Thanks, Tim!
I’ll probably only inadvertently cover aspects of product marketing, where they happen to overlap with product management. I’ve got colleagues who are product marketers, but I have never held the role.
Product management and product marketing both have a need to understand their markets – one to design the right products, and the other to enable that product to succeed in the market. I’ll be describing chunks of the “understand the market” activities, that can definitely be used by people with both responsibilities.
Maybe as I build out the series, you and other folks can chime in with the “missing pieces” that would be needed to apply this stuff as a product marketer.
Scott,
I was just having this discussion around some competitive testing we are about to do with one of the labs. One of the aspects we will be testing is latency (which will be measured in milliseconds). Users will begin to notice latency somewhere between 150 and 350 milliseconds. However, what is really important is the user experience. If every vendor can load the page in about the same time, competing on latency makes no sense to anyone. If we can load the page in 2 seconds where everyone else takes 7 or more, then we will make some noise around it. If we are all about the same, then the discussion moves to the next point of importance for the customer.
I will be happy to throw in my PMM 2c as I am sure Josh Duncan will as you go along. This is a great topic for both communities and one that is not well understood.
Tj
Thanks Tim!
The research I’ve done (in the last couple of months) shows that http://www.useit.com/papers/responsetime.html is still relevant for perceived response times. I was focusing on touch interfaces and combined HW/SW response times, but between that article and some other critiques and reviews, I’m comfortable that it is still the right framework for converting “machine response times” into “perceived response times.”
I think your point about differentiation is the most important aspect. Will capture that articulation later in this series.
Scott, I like how you define the two roles PM – design the right product; PMktg – enable market success. Both work together and understand the market.
Looking forward to your continuing dialog.
Cheers! Karol
Thanks, Karol, and welcome to Tyner Blain!
Well done, Scott.
We as product managers often uncover market problems that prospects face but of which they aren’t aware. But even when prospects are aware of the problems they face, they don’t typically articulate them as a specification.
A large part of a product manager’s job is to explicate market problems, which means articulating them in a much more precise and measurable way than the prospective customer has even considered.
Thanks, Roger!
Great point about “not being aware of problems” as a customer. I used to struggle with “problems” versus “opportunities.” When I was still coding and leading teams that were coding, I was always putting things in terms of opportunities (to improve the product, to solve a difficult problem, to write elegant code, etc) – just applying a positive perspective to the work.
When I first started doing requirements, I tried couching things in the language of “market opportunities” instead of “problems.” I couldn’t quite make that stick, in terms of language.
Your point reminds me that “opportunity” is really what we’re talking about here for customers. You have an opportunity to eliminate a problem – and someone smart (maybe Barb Nelson?) made the point to me that people are willing to pay more for “make the problem go away” than “give me something nice.”
It makes sense to me that customers are willing to pay more (or are more willing to pay) to solve a problem that they already acknowledge. However, uncovering latent problems has value too. Your product may not take off right away – your customers have to have an epiphany and realize the value of your solution.
I couldn’t agree more that the discovery, synthesis, and articulation of market problems is absolutely key. I’ll also point out that documenting your rationale is important too, and often overlooked. “Of course you will remember” is fine for a couple months, but a year down the road when you (or someone else) is exploring new vistas for a product, it helps to be able to achieve clarity in understanding why the product is where (or what) it is today.
I just saw that @alltop_agile tweeted this article – thanks!
BTW, I do this stuff on agile projects, not just waterfall. The key is incremental investment in the analysis (just as an agile developer incrementally invests in the code).