You want your software to be used, not to sit on the shelf. You can’t achieve the ROI of your software if people don’t use it – and you can’t achieve it by forcing people to use it, either. Some will fail to realize the benefits; others will delay using it or refuse to use it entirely. You have to make people want to use the software, and you have to design it for the users who must use it. Otherwise, you won’t achieve the ROI.
The ROI of User Adoption
The image above is great – it shows one person choosing to use the software, and one person choosing to do the work by hand instead. It is a classic user adoption problem. The good thing about it is that you can talk in concrete terms about the goals of your software, and how user adoption plays a role.
Let’s start with a definition of return on investment (read the full article for more details and examples):
ROI is the acronym for return on investment. Another way to think of it is “How much profit will we make if we invest in this project?” Profit is revenue minus costs. Technically, the question should be “How much profit will we make, relative to our investment, if we invest in this project?”
OK, with a definition of ROI, you have to develop a model for the return and one for the investment. The investment model is built by looking at the cost of developing the software. Assume it costs $1,000,000 to develop and deploy the solution.
For our user adoption example, assume that the software saves the company $1 every time it is used, relative to whatever the company does in the absence of the software. (There are several ways to calculate the ROI of design; we’re picking one.) Assume we have 1000 users, each of whom would use (or not use) the software 10 times per day. That is a potential savings of $10,000 per day. Call it $2,000,000 per year once all the users are using it all the time.
Unfortunately, many people stop there. They see a 6-month payback period, they fund the project, and they stop thinking about it – the money is already in the bank. But you’re not going to do that.
Imagine that this is your project. You write perfect requirements, you have an excellent development team, and the software works perfectly – saving a dollar every time it is used. The problem is, only 100 people are willing to use the software. Now, instead of a 6-month payback period, it takes five years to recover the cost of developing the software.
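The payback arithmetic above can be sketched in a few lines. The figures are the ones assumed in this article ($1,000,000 investment, $1 saved per use, 10 uses per user per day, and roughly 200 working days per year to match the "$2,000,000 per year" estimate):

```python
# Payback period as a function of how many users actually adopt the software.
# All figures are the article's assumptions, not real data.

INVESTMENT = 1_000_000        # cost to develop and deploy
SAVINGS_PER_USE = 1           # dollars saved each time the software is used
USES_PER_USER_PER_DAY = 10
WORKING_DAYS_PER_YEAR = 200   # rough figure implied by the $2M/year estimate

def payback_years(adopting_users: int) -> float:
    """Years to recover the investment, given how many users actually use it."""
    annual_savings = (adopting_users * USES_PER_USER_PER_DAY
                      * SAVINGS_PER_USE * WORKING_DAYS_PER_YEAR)
    return INVESTMENT / annual_savings

print(payback_years(1000))  # full adoption: 0.5 years (the 6-month payback)
print(payback_years(100))   # only 100 adopters: 5.0 years
```

The point of the function is that payback scales inversely with adoption: lose 90% of your users and your 6-month payback becomes 5 years.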
The False Promise of the Mandate
“Not a problem for us,” you say. “Everyone must use the software. Or they’re fired!”
Many corporate IT departments work this way. The business tells them what they need (“Save us a dollar!”), and the IT guys gather requirements, develop a perfect application and confirm that it saves a dollar every time someone uses it. And then they mandate that everyone use it. Simple. 6-month payback.
There are a couple of problems with this approach.
- Not everyone uses it right away, even if everyone is required to use it.
- Not everyone uses it effectively, even when required to use it.
The explanation is a little less obvious, but the brutal reality of it is just as true.
Delayed User Adoption
You issue a mandate – all 1000 users will use the software! The users are conveniently spread out into 10 business units, each with 100 users. One of those business units will go first. And your software is a disaster (we’ll get to #2 in a minute). The plan was to pilot the software with the first 100 users, then roll it out a month later to the other 900. No problem – 7 month payback, and less risk to the business.
The problem is that the people running the business units – and thereby managing the users – are smart too. They see the train wreck that the pilot department became, and they refuse to use the software. They have their own financial targets, and they don’t want to jeopardize them. These business unit directors, if they lack the power to refuse outright, will escalate until their demands are heard.
What’s the first place in the hierarchy where there’s a common manager? Is it the CEO? Maybe you’re lucky, and it is only a couple of VPs arguing about it. What conclusion do you think they will reach when presented with the following arguments:
- If we use the software, we will miss our numbers – it slows people down. And that will cost you $X.
- But we want them to use the software – they have to. If they were smarter, it would save money.
Even if the argument isn’t accurate – who will win? Best case, your roll-out is delayed. Worst case, it is killed before it ever gets a chance to work.
Ineffective User Adoption
It turns out, after watching the pilot group, that expert users can in fact save a dollar every time. Competent users break even – there’s no savings for them. And beginners were better off with the old solution. It was less efficient, but the new way is so hard that they can’t do it well.
You’ve mandated that people use the software. Since they didn’t otherwise want to use it, you had to spend a bunch of energy (and money, in opportunity cost) to make it happen. The rollout was delayed, and at the end of the day, you break even. Most users have no savings, and the few that do are offset by those that caused an increase in cost.
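The break-even outcome above can be illustrated with a hypothetical skill mix. The proportions below are invented for illustration; the article only states that experts save $1 per use, competent users break even, and beginners cost more than the old way:

```python
# Hypothetical skill mix showing how mandated adoption can net to zero savings.
# Per-use savings come from the article; the user mix percentages are assumed.

savings_per_use = {"expert": 1.00, "competent": 0.00, "beginner": -0.50}
user_mix = {"expert": 0.10, "competent": 0.70, "beginner": 0.20}  # fractions of users

# Expected savings per use across the whole (mandated) user base.
expected = sum(user_mix[level] * savings_per_use[level] for level in user_mix)
print(f"Expected savings per use: ${expected:+.2f}")  # nets to $0.00
```

With this mix, the few experts who save money are exactly offset by the beginners who cost money – 100% adoption, 0% return.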
Mandating user adoption creates a false sense of confidence, and does not assure ROI – it only assures adoption. We’ll say that again, to let it sink in.
Mandating user adoption does not assure ROI – it only assures adoption.
So what can you do?
Measuring User Adoption
As product managers who care about user adoption, our first thought is “measure it!” The biggest challenge is in measuring the right thing.
We can’t just count clicks: fewer clicks might correlate with ease of use, but ease of use does not necessarily cause increased user adoption.
To stay aligned with the ROI model, we have to measure directly that user adoption is meeting the forecast used to build the model. The only measurement assured to reflect user adoption accurately is a direct measurement of user adoption.
Prevent Mistakes, Then Monitor Them
Measuring poor user adoption only after it has proven inadequate doesn’t help very much. It is generally accepted that good design leads to higher user adoption rates. In the case of the “false mandate,” good design yields faster roll-outs and allows more users to be more effective with the software.
There are also other reasons to invest in good design.
The right solution is to invest in good design – targeting competent users (not novice users) to achieve the desired ROI. A good design is easier to learn too. You can track the rate of improvement among users.
Here’s a chart from an article that goes into learning curves in more depth:
The graph shows that with weekly use, the task time drops from 300 seconds to 100 seconds after 16 weeks. With daily use, the task time is even lower – about 60 seconds. The graph simply converts the academic learning curve into one that incorporates calendar time and frequency of occurrence.
Note the odd vertical drops in task time during week 1. This is just an artifact of the weekly time scale: for a task that happens once per hour, the user racks up 40 repetitions during the first week. From a decision-making standpoint, you don’t have time to react to that rapid rate of learning, so it is fine that it is collapsed on this graph.
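The shape of the chart can be approximated with the power law of practice, where task time after n repetitions is T(n) = T1 × n^(−a). Here T1 = 300 seconds and the exponent are fitted to the article's figures (weekly use: 300s dropping to 100s after 16 weeks); the exact curve in the original chart may differ:

```python
import math

# Power-law-of-practice sketch: T(n) = T1 * n**(-a).
# T1 and the exponent are fitted to the article's numbers, not taken from it.

T1 = 300.0                        # seconds for the first repetition
a = math.log(3) / math.log(16)    # fitted so that T(16) = 100 seconds

def task_time(repetitions: int) -> float:
    """Predicted task time (seconds) after a given number of repetitions."""
    return T1 * repetitions ** -a

# Calendar time matters only through repetition count: after 16 weeks,
# weekly use means 16 repetitions; daily use (5 working days) means 80.
print(round(task_time(16)))   # weekly frequency: 100 seconds
print(round(task_time(80)))   # daily frequency: lower, near the ~60s in the text
```

This is why frequency of use dominates calendar time in the chart: daily users simply accumulate repetitions five times faster, so they slide down the same curve much sooner.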
Software Usability and Learning Curves
Thanks, now go hire some designers and achieve your ROI!