Modeling User Competency

Perpetually intermediate (competent) users are users who exist only briefly as novices and never become experts.  Most of your users are competent, and you should design for them.  Competent users have different needs and different expectations than novice or expert users.  How do you know your users’ competency levels, so you can design for them?

User Competency

User competency is a concept I first read about in Alan Cooper’s The Inmates Are Running The Asylum.  Cooper’s contention is that the level of expertise of your users follows a bell curve, or normal distribution.

When we’re designing software we need to keep in mind that most of our users will be competent – neither experts nor beginners. Alan Cooper’s studies tell us that user skill levels follow a bell curve. He talks about competent users as perpetual intermediates. Some users drop out of the bell curve when they stop using our software. The rare user becomes an expert. Most users only learn enough to get their real job done.

Competent Users and Software Requirements

Cooper contends that the combination of learning curves and natural user tendencies to stop learning or abandon a software application are the sources of this distribution of user competency.

Experience Curves

Experience curves represent the diminishing costs over time of manufacturing something repeatedly.  The process of manufacturing something gets more efficient as you get smarter about the process.  The process of using software is at least analogous to, if not a specific example of, a manufacturing process.  If you’re using an email application, you are manufacturing email messages.  In a CRM system, you are manufacturing contacts, contact reports, etc.

By treating any “using the software to do something” interactions as a process, you can measure the cost (how long it takes) of the user’s interactions.  Applying the math behind experience curves, you can predict the reduction in cost (to your users) over time, for any set of interactions.  Experience curves take into account that some processes are inherently more learnable than others.  This property of learnability is reflected as an efficiency coefficient – how efficiently someone can learn ways to reduce the cost (time) needed to perform the interactions.
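
As a sketch of that math – using Henderson’s power-law form, in which the time for the Nth repetition is the initial time multiplied by N raised to a negative “elasticity” exponent (the function name is mine):

```python
def time_on_task(n, initial_time, elasticity):
    """Predict the time for the nth repetition of a task.

    Henderson's power-law form: time(n) = initial_time * n**(-elasticity).
    A higher elasticity means the task is easier to improve at.
    """
    return initial_time * n ** (-elasticity)

# With a 50% elasticity and a 60-second first attempt:
# the 4th repetition takes 30 seconds (half the initial time),
# and the 100th takes 6 seconds (one tenth of the initial time).
```

The efficiency coefficient mentioned above is the exponent: it controls how quickly the predicted cost falls as repetitions accumulate.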

This gives us an approach to modeling user competency quantitatively.  Defining competence allows us to model it, and measuring it allows us to manage product design in the context of user competency – designing for competent users.

Defining Competence

The first step to measuring competency is to define the model.  I am proposing a definition in this article that I suspect will yield insights (to help us manage our products).  I was unable to find any quantified definitions of competence when researching it as part of a client engagement.  If you have, or know of, a model, please share it in the discussion below this article.

  • A competent user is someone who learns to perform a task in half the time it initially took them.
  • An expert user is someone who can complete a task in 10% of the initial time.

This definition is guided by an expectation that Alan Cooper’s premise about perpetually intermediate users is true.  Being a novice user is a very transient state, and becoming an expert is very infrequent.  The goal of the definition is to be able to segment your users and make well-informed design decisions.
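
The proposed thresholds can be written down directly (a sketch; the function name, signature, and threshold parameters are mine):

```python
def competency(initial_time, current_time,
               competent_ratio=0.5, expert_ratio=0.1):
    """Classify a user on one task, per the proposed definition.

    competent_ratio and expert_ratio are the proposed thresholds:
    half the initial time, and one tenth of the initial time.
    """
    ratio = current_time / initial_time
    if ratio <= expert_ratio:
        return "expert"
    if ratio <= competent_ratio:
        return "competent"
    return "novice"

# competency(60, 45) -> "novice"
# competency(60, 20) -> "competent"
# competency(60, 5)  -> "expert"
```

Making the thresholds parameters keeps the door open to testing alternative cut-offs, as the next section suggests.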

A Proposal, Not a Doctrine

This is a proposal that doubling performance reflects competence, and achieving a ten-times improvement represents expertise.  It may be that some different measure of performance improvement more accurately reflects competence and expertise.  We have to test it to know.

The experience curve is defined mathematically by Henderson’s Law.  It states that the time to complete a task is a function of the number of times you have previously done that task, adjusted by the “elasticity” of the cost of that task.  In other words, some tasks are easier to improve than others.  If you populate a table with the results of applying Henderson’s Law, you get the following:

[larger image]

  • Each row in the table represents the Nth repetition of a task.
  • Each column represents how easy that task is to learn – progressing from “hard to improve” on the left, to “easy to improve” on the right.
  • Each cell in the table represents how long it would take to perform the Nth repetition of the task, as a function of how easy it is to improve your performance at the task.
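
Since the table itself is an image, a small text version can be regenerated from Henderson’s Law (a sketch; the row and column values are my own choices, assuming time(n) = 60 · n^(−elasticity)):

```python
initial_time = 60  # seconds for the first attempt

repetitions = [1, 2, 4, 10, 100]      # rows: the Nth repetition
elasticities = [0.1, 0.3, 0.5, 0.7]   # columns: hard -> easy to improve

print("n    " + "".join(f"{e:>8.0%}" for e in elasticities))
for n in repetitions:
    times = [initial_time * n ** (-e) for e in elasticities]
    print(f"{n:<5}" + "".join(f"{t:8.1f}" for t in times))
```

Reading down the 50% column, the predicted time falls to 30 seconds by the 4th repetition and 6 seconds by the 100th.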

The table above shows what experience curves predict mathematically when the task initially takes 60 seconds.  The following definitions reflect the proposed user competency model:

  • A user is a novice user until she has learned enough to cut the time-on-task in half.  These cells have a white background (and are in the upper left area of the table).
  • A user is a competent user when the time needed to complete the task is between one half and one tenth of the initial time.  These cells are shown with a yellow background (and are in the central area of the table).
  • A user is an expert user when she can complete the task in less than one tenth of the time required to initially complete the task.  These cells are shown with a red background (and are in the lower right area of the table).

In the absence of empirical data, I used my intuition to suggest that a representative experience curve for a typical task performed in software would be one with an “elasticity” of 50%.  For a task with those learning characteristics:

  • A novice user would cut the needed time in half on the 4th repetition of the task, and would be considered to be a competent user.
  • A competent user would further reduce the time needed to one tenth of the initial time on the 100th repetition of the task, and would be considered an expert user.
  • The 50% elasticity column is surrounded by a black border, and the number of repetitions required to advance to competent or expert status is also highlighted with a black border.
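
Those two repetition counts fall out of inverting the curve: if time(n) = t₁ · n^(−b), then the threshold ratio r is reached at n = r^(−1/b).  A sketch (the function name is mine; math.ceil lands on a whole repetition):

```python
import math

def repetitions_to_reach(ratio, elasticity):
    """Smallest repetition count n with n**(-elasticity) <= ratio."""
    return math.ceil(ratio ** (-1 / elasticity))

# For the 50% elasticity curve:
# half the initial time (competent) arrives at repetition 4,
# one tenth (expert) at repetition 100.
```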

Inferring Competence

Since different processes have different learning characteristics, you have to figure out how easy it is for your users to improve at your processes.  To do that, you have to study (or at least measure) your users’ interactions with your software.  In the 50% curve highlighted above, a user is capable of cutting their time-on-task in half by the fourth time they perform the task.

If data from your initial testing (or measurement) reveals this to be true, then you have selected the correct curve (the correct column in the table).  If your users need more or fewer repetitions to reach this level of improvement, shift to the left or right to find the appropriate curve.  If software interactions are reasonable analogs to manufacturing processes, then the experience curve projects an expected rate of improvement on the task.
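
Finding the right column from measured data amounts to fitting the exponent – for example, a least-squares fit on log-transformed times (a sketch; the function name is mine, standard library only, and it needs at least two observations):

```python
import math

def fit_elasticity(times):
    """Estimate elasticity from observed times for repetitions 1..N.

    Fits log(time) = log(t1) - elasticity * log(n) by least squares,
    so the returned value is the negated slope of the log-log line.
    """
    xs = [math.log(n) for n in range(1, len(times) + 1)]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Data that exactly follows the 50% curve recovers 0.5:
# fit_elasticity([60 * n ** -0.5 for n in range(1, 11)]) -> ~0.5
```

With real (noisy) measurements, the fitted exponent tells you which column of the table your process actually lives in.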

The following graph isolates the time-on-task data for a user who is learning to improve when repeating a task (process) that matches the 50% elasticity curve.

[larger image]

Note that the number of repetitions of the task (the X axis) is represented as a logarithmic scale.  The data points along the curve correspond to the cell values in the table above (for the 50% column).

Shades of Gray

One nice thing about this quantitative approach to inferring competency by measuring usage is that your measurements are per-process.  Users are not “purely novice” or “purely expert” – they can be experts at some processes while remaining neophytes at others.  You also know, for any particular process, how much competence a user has.  This allows you to refine your assumptions about the steepness of the learning curve, and about the thresholds (doubled performance and ten-times performance improvements).

Improvement Over Time

Any particular learning curve can be considered relative to calendar time, to see how quickly a user will progress along the curve (as a function of frequency of use).  This can be useful for determining the ROI of improvement in a particular process.

The following graph shows how an 80% learning curve overlays a calendar for tasks that happen daily, weekly, and hourly.

[larger image]

The graph shows that at a weekly frequency, after 16 weeks, the task time has dropped from 300 seconds to about 100 seconds.  At a daily frequency, the task time is even lower – about 60 seconds.  The graph simply converts the academic learning curve into one that incorporates calendar time and frequency of occurrence.
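
The overlay is just a change of variable: repetitions become elapsed days times frequency.  A sketch (names are mine, and I am assuming the classic reading of an “80% learning curve” – time falls to 80% of its previous value each time the cumulative repetition count doubles; whether the graph uses exactly this convention is my assumption):

```python
import math

def time_after(days, per_day, initial_time, learning_rate=0.8):
    """Time-on-task after `days` of doing the task `per_day` times per day.

    Convention: time falls to `learning_rate` of its previous value each
    time cumulative repetitions double, i.e.
    time(n) = initial_time * n ** log2(learning_rate).
    """
    n = max(1, days * per_day)
    return initial_time * n ** math.log2(learning_rate)

# A weekly task on an 80% curve, starting at 300 seconds:
# time_after(112, 1/7, 300) covers 16 repetitions in 16 weeks,
# about 123 seconds under this convention.
```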

Software Usability and Learning Curves

One approach to inferring user competency would be to measure how long a user has been using your software.  The variation in how frequently different users perform the same task will introduce an error into that inference.  You can avoid introducing that error into your modeling by counting the number of times a user has performed a task.
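
Counting per-user, per-task repetitions is a small bookkeeping exercise (a sketch with in-memory storage and hypothetical task names; in practice the data would come from instrumentation or logs):

```python
from collections import defaultdict

# (user, task) -> list of observed times-on-task, in order
observations = defaultdict(list)

def record(user, task, seconds):
    """Log one observed time-on-task for a user."""
    observations[(user, task)].append(seconds)

def repetition_count(user, task):
    """How many times this user has performed this task."""
    return len(observations[(user, task)])

record("ann", "create_invoice", 60)
record("ann", "create_invoice", 48)
record("ann", "create_invoice", 41)
# repetition_count("ann", "create_invoice") -> 3
```

Keying on repetitions rather than elapsed calendar time removes the frequency-of-use error described above.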

Applying the User Competency Model

The advice in previous articles, from Cooper’s book, and from this great article on the Coding Horror site encourages us to focus on the competent users.

I’m working with a client who needs to prioritize a set of capabilities and establish design principles for a product.  We will incorporate this user competency model as part of our analysis.

Hopefully we’ll have an opportunity to collect data to validate and/or refine the model.  I’m proposing that we first gain some insight into which users (novice, competent, expert) drive the most revenue and profit from use of the product – to establish the importance of each category of user.

For this product, I suspect that we will find many more novice users than a normal distribution would predict.  If that is true, the next question will be to understand whether we are dealing with a normal behavioral dynamic, or whether characteristics of the current product “force” novice users to abandon it before they achieve competence.

Either way, we will have a framework for prioritizing the goals of the novice, competent, and expert users.

How would you apply a model like this to improving your product?

20 thoughts on “Modeling User Competency”

  1. Fascinating and timely information for me. We are doing a reengineering project with a company that is implementing an ERP/CRM system. They were a photocopier/file-cabinet type of company.

    Over the strenuous objections of the vendor, we created entirely new interfaces for the users. Only the things users would be required to do were presented, and they were all in one menu and labeled in terms users knew. For example:
    Enter Billing
    My Billings
    My Late Customers

    Then we wrote “How To” guides with pictures, walking users through each step. Again the ERP/CRM vendor strenuously complained that users should “discover” new ways of doing tasks. At this point we fired all the vendor’s consultants.

    It all seemed unnecessarily simple to the vendor and they hated that we renamed everything and created non-standard forms. However, by greatly simplifying the interface and defining precise processes for everything we wanted users to do, we moved the whole company from “novice” to what the vendor would call “expert”. Their definition of expert was based on what one could expect a standard user to be able to do, and how long it would take to get them there.

    In 2 months, we had created a star/example consultant. She is 67 years old and doesn’t own a computer or cell phone, but she is loading documents into our portal for distribution to our clients and having clients return documents to her through our portal. This is in addition to billing, tracking, and updating customer information.

    Basic training that defines what “competent” is, and gives precise steps that novice, inexperienced, or resistant users can follow, can really raise the bar of productivity and performance. Just one example demonstrates the benefit: our company distributes 4 to 6 documents a year to 6,000 clients, and it costs $4 to ship a document one way. We will cut our shipping costs by about two-thirds and pay for the system in its first year.

    Thanks for a great theoretical presentation of what we were doing for practical purposes without thinking about the theoretical underpinnings.

    1. Thanks, Andrew – great stuff! Reminds me a lot of this article on inductive user interfaces (versus a UI where the user has to deduce what to do).

      It also sounds like you experienced something with similar impact as collaborative effects – there’s an interesting article in the Harvard Business Review about how collaborative learning can accelerate / invalidate the projected learning rate defined by experience curves.

      Great real-world data too, I’m sure it will help other folks here! I appreciate it.

  10. Very interesting article. I read the About Face 3 book recently and liked the idea of designing for competent users. We are re-designing TargetProcess now with this in mind.

    Also, I see this as an interesting approach to evaluating competitors. We could ask people to do the same tasks in various products and compare results.

    1. Hey Michael, thanks and welcome to Tyner Blain!

      I haven’t checked out About Face 3 yet – did you read 2 or 1? How does 3 compare?

      Really interesting idea about using suitability (to users at a given competence level) as an aspect of competitive analysis – I love it!

