Perpetually intermediate (competent) users: users who pass only briefly through the novice stage and never become experts. Most of your users are competent, and you should design for them. Competent users have different needs and different expectations than novice or expert users. How do you determine your users’ competency levels, so you can design for them?
User competency is a concept I first read about in Alan Cooper’s The Inmates Are Running The Asylum. Cooper’s contention is that the level of expertise of your users follows a bell curve, or normal distribution.
When we’re designing software we need to keep in mind that most of our users will be competent – neither experts nor beginners. Alan Cooper’s studies tell us that user skill levels follow a bell curve. He talks about competent users as perpetual intermediates. Some users drop out of the bell curve when they stop using our software. The rare user becomes an expert. Most users only learn enough to get their real job done.
Competent Users and Software Requirements
Cooper contends that this distribution of user competency arises from the combination of learning curves and users’ natural tendencies to stop learning or to abandon a software application.
Experience curves represent the diminishing costs over time of manufacturing something repeatedly. The process of manufacturing something gets more efficient as you get smarter about the process. The process of using software is at least analogous to, if not a specific example of, a manufacturing process. If you’re using an email application, you are manufacturing email messages. In a CRM system, you are manufacturing contacts, or contact reports, etc.
By treating any “using the software to do something” interactions as a process, you can measure the cost (how long it takes) of the user’s interactions. Applying the math behind experience curves, you can predict the reduction in cost (to your users) over time, for any set of interactions. Experience curves take into account that some processes are inherently more learnable than others. This property of learnability is reflected as an efficiency coefficient – how efficiently someone can learn ways to reduce the cost (time) needed to perform the interactions.
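The experience-curve math can be sketched with the standard power-law form: the time for the nth repetition falls as a power of n, and the exponent plays the role of the efficiency coefficient. A minimal sketch (function name is mine):

```python
def time_on_task(initial_time, n, elasticity):
    """Predict the time (seconds) for the nth repetition of a task.

    `elasticity` is the power-law exponent: higher values mean the task
    is easier to improve at. With elasticity 0.5, the 4th repetition
    takes half the initial time, because 4 ** -0.5 == 0.5.
    """
    return initial_time * n ** -elasticity

# A 60-second task with elasticity 0.5:
print(time_on_task(60, 1, 0.5))    # 60.0 (first attempt)
print(time_on_task(60, 4, 0.5))    # 30.0 (half the initial time)
print(time_on_task(60, 100, 0.5))  # ~6.0 (a tenth of the initial time)
```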
This gives us an approach to modeling user competency quantitatively. A definition allows us to measure competence, and measuring competence allows us to manage product design in the context of user competency – designing for competent users.
The first step to measuring competency is to define the model. I am proposing a definition in this article that I suspect will yield insights (to help us manage our products). When researching this as part of a client engagement, I was unable to find any quantified definitions of competence. If you have one, or know of a model, please share it in the discussion below this article.
- A competent user is someone who learns to perform a task in half the time it initially took them.
- An expert user is someone who can complete a task in 10% of the initial time.
This definition is guided by an expectation that Alan Cooper’s premise about perpetually intermediate users is true. Being a novice user is a very transient state, and becoming an expert is very infrequent. The goal of the definition is to be able to segment your users and make well-informed design decisions.
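Under this proposed definition, classifying a user on a given task reduces to comparing their current time-on-task with their initial time. A minimal sketch (the thresholds are the proposal’s; the function name is mine):

```python
def competency(initial_time, current_time):
    """Classify a user on one task, per the proposed thresholds:
    competent at half the initial time, expert at a tenth of it."""
    ratio = current_time / initial_time
    if ratio <= 0.1:
        return "expert"
    if ratio <= 0.5:
        return "competent"
    return "novice"

print(competency(60, 45))  # novice: only a 25% improvement so far
print(competency(60, 25))  # competent: under half the initial time
print(competency(60, 5))   # expert: under a tenth of the initial time
```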
A Proposal, Not a Doctrine
This is a proposal that doubling performance reflects competence, and achieving a ten-times improvement represents expertise. It may be that some different measure of performance improvement more accurately reflects competence and expertise. We have to test it to know.
The experience curve is defined mathematically by Henderson’s Law. It states that the time to complete a task is a function of the number of times you have previously done that task, adjusted by the “elasticity” of the cost of that task. In other words, some tasks are easier to improve than others. If you populate a table with the results of applying Henderson’s Law, you get the following:
- Each row in the table represents the Nth repetition of a task.
- Each column represents how easy that task is to learn – progressing from “hard to improve” on the left, to “easy to improve” on the right.
- Each cell in the table represents how long it would take to perform the Nth repetition of the task, as a function of how easy it is to improve your performance at the task.
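The table itself can be reproduced from Henderson’s Law, T(n) = T(1) × n^(−b), where b is the elasticity. A sketch for a 60-second task (the elasticity values chosen for the columns are illustrative):

```python
def cell(initial, n, elasticity):
    """Time for the nth repetition under Henderson's Law."""
    return initial * n ** -elasticity

elasticities = [0.1, 0.3, 0.5, 0.7]   # hard to improve ... easy to improve
repetitions = [1, 2, 4, 10, 100]

# Print a small version of the table: rows are repetitions,
# columns are elasticities, cells are seconds on task.
print("    n" + "".join(f"{b:>8.0%}" for b in elasticities))
for n in repetitions:
    print(f"{n:>5}" + "".join(f"{cell(60, n, b):>8.1f}" for b in elasticities))
```

In the 50% column, the 4th repetition takes 30 seconds and the 100th takes 6 seconds, matching the competent and expert thresholds for a 60-second task.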
The table above shows what experience curves predict mathematically when the task initially takes 60 seconds. The following definitions reflect the proposed user competency model:
- A user is a novice user until she has learned enough to cut the time-on-task in half. These cells have a white background (and are in the upper left area of the table).
- A user is a competent user when the time needed to complete the task is between one half and one tenth of the initial time. These cells are shown with a yellow background (and are in the central area of the table).
- A user is an expert user when she can complete the task in less than one tenth of the time required to initially complete the task. These cells are shown with a red background (and are in the lower right area of the table).
In the absence of empirical data, I used my intuition to suggest that a representative experience curve for a typical task performed in software would be one with an “elasticity” of 50%. For a task with those learning characteristics:
- A novice user would cut the needed time in half on the 4th repetition of the task, and would be considered to be a competent user.
- A competent user would further reduce the time needed to one tenth of the initial time on the 100th repetition of the task, and would be considered an expert user.
- The 50% elasticity column is surrounded by a black border, and the number of repetitions required to advance to competent or expert status is also highlighted with a black border.
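These two thresholds can be checked by inverting Henderson’s Law: solving T(n)/T(1) = r for n gives n = r^(−1/b). A minimal sketch:

```python
def reps_to_reach(ratio, elasticity):
    """Repetitions needed for time-on-task to fall to `ratio` of the
    initial time, inverting T(n) = T(1) * n ** -elasticity."""
    return ratio ** (-1 / elasticity)

print(reps_to_reach(0.5, 0.5))  # 4.0 -> competent on the 4th repetition
print(reps_to_reach(0.1, 0.5))  # ~100 -> expert around the 100th repetition
```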
Since different processes have different learning characteristics, you have to figure out how easy it is for your users to improve at your processes. To do that, you have to study (or at least measure) your users’ interactions with your software. In the 50% curve highlighted above, a user is capable of cutting their time-on-task in half by the fourth time they perform the task.
If data from your initial testing (or measurement) reveals this to be true, then you have selected the correct curve (the correct column in the table). If it takes more or fewer repetitions to reach this level of improvement, shift to the left or right to find the appropriate curve. If software interactions are reasonable analogs of manufacturing processes, then the experience curve projects an expected rate of improvement on task.
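Finding the right column amounts to fitting the elasticity to your measurements. Taking logs of T(n) = T(1) × n^(−b) gives a straight line, log T = log T(1) − b·log n, so a least-squares fit over observed (repetition, time) pairs recovers b. A sketch using only the standard library (the sample observations are invented):

```python
import math

# Invented sample: (repetition number, observed seconds on task)
observations = [(1, 62), (2, 44), (3, 35), (4, 29), (8, 21)]

xs = [math.log(n) for n, _ in observations]
ys = [math.log(t) for _, t in observations]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

# Least-squares slope of log(time) vs log(repetition); elasticity = -slope
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
elasticity = -slope
print(f"fitted elasticity: {elasticity:.2f}")  # close to 0.5 for this sample
```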
The following graph isolates the time-on-task data for a user who is learning to improve when repeating a task (process) that matches the 50% elasticity curve.
Note that the number of repetitions of the task (the X axis) is represented as a logarithmic scale. The data points along the curve correspond to the cell values in the table above (for the 50% column).
Shades of Gray
One nice thing about this quantitative approach to inferring competency by measuring usage is that your measurements are per-process. Users are not “purely novice” or “purely expert” – they can be experts at some processes while remaining neophytes at others. The model also tells you, for any particular process, how much competence a user has. This allows you to refine your assumptions about the steepness of the learning curve, and about the thresholds (doubled performance and ten-times performance improvement).
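Because the measurement is per-process, a user’s competency is naturally a map from process to level rather than a single label. A minimal sketch of such a profile (class and method names are mine):

```python
class CompetencyProfile:
    """Tracks initial and latest time-on-task per process for one user."""

    def __init__(self):
        self.initial = {}  # process name -> first observed time (seconds)
        self.latest = {}   # process name -> most recent time (seconds)

    def record(self, process, seconds):
        self.initial.setdefault(process, seconds)
        self.latest[process] = seconds

    def level(self, process):
        """Apply the proposed thresholds: half = competent, tenth = expert."""
        ratio = self.latest[process] / self.initial[process]
        if ratio <= 0.1:
            return "expert"
        if ratio <= 0.5:
            return "competent"
        return "novice"

profile = CompetencyProfile()
profile.record("send email", 60)
profile.record("send email", 25)
profile.record("add contact", 90)
print(profile.level("send email"))   # competent
print(profile.level("add contact"))  # novice
```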
Improvement Over Time
Any particular learning curve can be considered relative to calendar time, to see how quickly a user will progress along the curve (as a function of frequency of use). This can be useful for determining the ROI of improvement in a particular process.
The following graph shows how an 80% learning curve overlays a calendar for tasks that happen daily, weekly, and hourly.
The graph shows that with weekly use, after 16 weeks, the task time has dropped from 300 seconds to 100 seconds. With daily use, the task time is lower still – about 60 seconds. The graph does nothing more than convert the academic learning-curve graph into one that incorporates calendar time and frequency of use.
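The calendar overlay is just a mapping from frequency of use to repetition count. A sketch under the common interpretation that an “80% learning curve” cuts time-on-task to 80% with each doubling of repetitions, starting from 300 seconds (exact values depend on that interpretation, so these numbers approximate the graph rather than reproduce it):

```python
import math

initial = 300.0        # seconds for the first attempt
b = -math.log2(0.8)    # exponent for an 80%-per-doubling curve (~0.32)

def time_after(repetitions):
    """Predicted time-on-task after the given number of repetitions."""
    return initial * repetitions ** -b

# Weekly use: one repetition per week -> 16 repetitions in 16 weeks.
print(f"after 16 weekly repetitions: {time_after(16):.0f} s")    # ~123 s
# Daily use: 7 repetitions per week -> 112 repetitions in 16 weeks.
print(f"after 112 daily repetitions: {time_after(112):.0f} s")   # ~66 s
```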
One approach to inferring user competency would be to measure how long a user has been using your software. The variation in how frequently different users perform the same task will introduce an error into that inference. You can avoid introducing that error into your modeling by counting the number of times a user has performed a task.
Applying the User Competency Model
The advice in previous articles, from Cooper’s book, and from this great article on the Coding Horror site encourages us to focus on competent users.
I’m working with a client who needs to prioritize a set of capabilities and establish design principles for a product. We will incorporate this user competency model as part of our analysis.
Hopefully we’ll have an opportunity to collect data to validate and/or refine the model. I’m proposing that we first gain some insight into which users (novice, competent, expert) drive the most revenue and profit from use of the product – to establish the importance of each category of user.
For this product, I suspect that we will find many more novice users than a normal distribution would predict. If that is true, the next question will be to understand if we are dealing with a normal behavioral dynamic, or if characteristics of the current product “force” novice users to abandon it before they achieve competence.
Either way, we will have a framework for prioritizing the goals of the novice, competent, and expert users.
How would you apply a model like this to improving your product?